
HSE
Health & Safety Executive
Probabilistic methods: Uses and
abuses in structural integrity
Prepared by BOMEL Limited
for the Health and Safety Executive
CONTRACT RESEARCH REPORT 398/2001
BOMEL Limited
Ledger House
Forest Green Road
Fifield, Maidenhead
Berkshire SL6 2NR
United Kingdom
This report concerns the uses and abuses of probabilistic methods in structural integrity, in particular
the design and assessment of pressure systems. Probabilistic methods are now used widely in the
assessment and design of structures in many industries. For a number of years these methods have
been applied to offshore pipelines and, following work by a number of companies in this country and
worldwide, they are being used in the design and reassessment/requalification of major gas trunklines
onshore.
Although probabilistic methods have been available for a number of years and are widely used, there is
still a great deal of confusion which arises from vague language, ill-defined and inconsistent
terminology, and misinterpretation often present in published material on the topic. This is perhaps the
main reason for misuse in some applications of the methods. The report aims to define the terms more
clearly, and to outline the basic principles of the approach.
The report reviews the development of structural design methods from the earliest building methods,
through limit state and partial factor design methods, to probabilistic analysis, and considers their
applicability to assessing the integrity of pressure systems. Methods of probabilistic analysis are
discussed and key references are identified for further information. The uses of risk and reliability
analysis are also discussed, and many of the concerns that are often expressed with the use of
reliability and risk analysis methods are examined.
Guidelines are presented for regulators and industry to assist in assessing work that incorporates risk
and reliability-based analysis and arguments. The guidelines may also be of use to consultants in
undertaking and presenting risk and reliability analysis.
This report and the work it describes was funded by the Health and Safety Executive (HSE). Its
contents, including any opinions and/or conclusions expressed, are those of the author(s) alone and do
not necessarily reflect HSE policy.
HSE BOOKS
© Crown copyright 2001
Applications for reproduction should be made in writing to:
Copyright Unit, Her Majesty's Stationery Office,
St Clements House, 2-16 Colegate, Norwich NR3 1BQ
First published 2001
ISBN 0 7176 2238 X
All rights reserved. No part of this publication may be
reproduced, stored in a retrieval system, or transmitted
in any form or by any means (electronic, mechanical,
photocopying, recording or otherwise) without the prior
written permission of the copyright owner.
CONTENTS

EXECUTIVE SUMMARY

1. INTRODUCTION
1.1 INTRODUCTION
1.2 PROJECT BACKGROUND
1.3 SCOPE AND OBJECTIVES OF THE REPORT

2. HISTORICAL BACKGROUND TO STRUCTURAL DESIGN
2.1 SUMMARY
2.2 DESIGN METHODS
2.2.1 Design by Geometric Ratio
2.2.2 Load Factor Design
2.2.3 Allowable Stress Design
2.2.4 Limit State Design
2.3 CODES AND STANDARDS
2.3.1 The Historical Development of Codes and Standards
2.3.2 Limit State Design Codes
2.3.3 Reliability Methods in Codes
2.3.4 Model Code
2.4 RISK AND RELIABILITY METHODS

3. DETERMINISTIC AND RELIABILITY-BASED DESIGN AND ASSESSMENT PROCEDURES
3.1 SUMMARY
3.2 CAUSES OF STRUCTURAL FAILURE AND RISK REDUCTION MEASURES
3.3 FUNDAMENTAL DESIGN REQUIREMENTS
3.4 DETERMINISTIC DESIGN AND ASSESSMENT PROCEDURES
3.4.1 Allowable, Permissible or Working-Stress Design, and Load Factor Approaches
3.4.2 Partial Factor, Partial Coefficient or Load and Resistance Factor Design Approaches
3.4.3 Characteristic Values in Limit State Design
3.5 PROBABILISTIC DESIGN AND ASSESSMENT PROCEDURES
3.5.1 UK Safety Legislation
3.6 TREATMENT OF GROSS OR HUMAN ERROR
3.6.1 Control of Errors in Deterministic Design
3.6.2 Treatment of Errors in Probabilistic Assessment

4. STRUCTURAL RELIABILITY THEORY, UNCERTAINTY MODELLING AND THE INTERPRETATION OF PROBABILITY
4.1 SUMMARY
4.2 FREQUENTIST VERSUS BAYESIAN INTERPRETATION OF EVALUATED PROBABILITIES
4.2.1 Frequentist Interpretation
4.2.2 Bayesian or Degree-of-Belief Interpretation
4.3 RELATIONSHIP BETWEEN RISK ANALYSIS AND RELIABILITY ANALYSIS
4.4 OBJECTIVE OF STRUCTURAL RELIABILITY ANALYSIS
4.5 TYPES OF UNCERTAINTY
4.5.1 Aleatoric Uncertainties
4.5.2 Epistemic Uncertainties
4.6 STRUCTURAL RELIABILITY THEORY

5. METHODS OF PROBABILISTIC ANALYSIS
5.1 SUMMARY
5.2 STRUCTURAL RELIABILITY ANALYSIS PROCEDURE
5.3 HAZARD ANALYSIS/FAILURE MODE AND EFFECT ANALYSIS
5.4 FAULT/EVENT TREE ANALYSIS
5.4.1 Event Trees
5.4.2 Fault Trees
5.5 STRUCTURAL SYSTEM ANALYSIS
5.5.1 Series System
5.5.2 Parallel System
5.6 FAILURE FUNCTION MODELLING
5.6.1 The Time Element
5.7 BASIC VARIABLE MODELLING
5.8 METHODS OF COMPUTING COMPONENT RELIABILITIES
5.8.1 Mean Value Estimates
5.8.2 First-Order Second-Moment Methods - FORM
5.8.3 Second-Order Reliability Methods - SORM
5.8.4 Monte Carlo Simulation Methods
5.9 COMBINATION OF EVENTS
5.9.1 Component and System Reliability Analysis
5.10 TIME-DEPENDENT ANALYSIS
5.10.1 Annual Reliability
5.10.2 Lifetime Reliability
5.10.3 Conditional Reliability Given a Service History
5.11 ASSESSMENT OF TARGET RELIABILITY
5.11.1 Societal Values
5.11.2 Comparison with Existing Practice
5.11.3 Cost-Benefit Analysis
5.11.4 Targets for Pipelines
5.12 RISK ASSESSMENT

6. USES OF RELIABILITY ANALYSIS AND PROBABILISTIC METHODS
6.1 SUMMARY
6.2 SAFETY FACTOR CALIBRATION
6.3 PROBABILISTIC DESIGN
6.4 DECISION ANALYSIS
6.5 RISK AND RELIABILITY-BASED INSPECTION, REPAIR AND MAINTENANCE SCHEMES
6.5.1 Qualitative Indexing Systems
6.5.2 Quantitative Risk Systems

7. REQUIREMENTS FOR PROBABILISTIC ANALYSIS OF PRESSURE VESSELS AND PIPELINE SYSTEMS
7.1 SUMMARY
7.2 BASIC DESIGN DATA
7.3 DEFINITION OF FAILURE MODES, LIMIT STATES AND TARGET RELIABILITIES
7.4 PROBABILITY ANALYSIS
7.4.1 Assessment of Hazard Likelihood of Occurrence
7.4.2 Failure Models
7.4.3 Basic Variable Statistics
7.4.4 Component and System Reliability Analysis
7.5 CONSEQUENCE MODELS
7.5.1 Fire and Blast Analysis Results
7.5.2 Economic Considerations
7.5.3 Environmental Considerations
7.5.4 Life-Safety Considerations
7.6 INSPECTION METHODS, COSTS AND MEASUREMENT UNCERTAINTY
7.7 MAINTENANCE AND REPAIR METHODS, AND COSTS

8. CONCERNS WITH STRUCTURAL RELIABILITY AND RISK ANALYSIS
8.1 SUMMARY
8.2 CONCERNS WITH STRUCTURAL RELIABILITY ANALYSIS
8.2.1 Inclusion of Model Uncertainty
8.2.2 The Tail Sensitivity Problem
8.2.3 Small Failure Probabilities
8.2.4 Validation
8.2.5 Notional Versus True Interpretation
8.3 CONCERNS WITH RISK ASSESSMENT
8.3.1 Generic Data
8.3.2 Risk Aversion
8.3.3 Numerical Uncertainty and Reproducibility
8.3.4 Deterministic Consequence Models
8.3.5 The Pro Forma Approach
8.3.6 Completeness

9. GUIDELINES FOR RELIABILITY AND RISK ANALYSIS
9.1 SUMMARY
9.2 GUIDELINES

10. GLOSSARY

11. REFERENCES

ANNEX A CASE STUDY 1: PIPELINE DESIGN PRESSURE UPGRADE
ANNEX B CASE STUDY 2: OVERVIEW OF DRAFT EUROCODE prEN 13445-3 - Unfired Pressure Vessels Part 3: Design
Printed and published by the Health and Safety Executive
C30 1/98
HEALTH AND SAFETY EXECUTIVE
PROBABILISTIC METHODS: USES AND ABUSES IN
STRUCTURAL INTEGRITY

Executive Summary

This report concerns the uses and abuses of probabilistic methods in structural integrity, in
particular the design and assessment of pressure systems. Probabilistic methods are now used
widely in the assessment and design of structures in many industries. For a number of years
these methods have been applied to offshore pipelines, and following work by BG Technology
(now Advantica) and a number of other companies in this country and worldwide, they are
being used in the design and reassessment/requalification of major gas trunklines onshore.
Although probabilistic methods have been available for a number of years and are widely used,
there is still a great deal of confusion which arises from vague language, ill-defined and
inconsistent terminology, and misinterpretation often present in published material on the topic.
This is perhaps the main reason for misuse in some applications of the methods. The report
aims to define the terms more clearly, and to outline the basic principles of the approach.
The report reviews the development of structural design methods from the earliest building
methods, through limit state and partial factor design methods, to probabilistic analysis, and
considers their applicability to assessing the integrity of pressure systems. Methods of
probabilistic analysis are discussed and key references are identified for further information.
The uses of risk and reliability analysis are also discussed, and many of the concerns that are
often expressed with the use of reliability and risk analysis methods are examined.
Guidelines are presented for regulators and industry to assist in assessing work that incorporates
risk and reliability-based analysis and arguments. The guidelines may also be of use to
consultants in undertaking and presenting risk and reliability analysis.
Two case studies concerning the application of probabilistic methods have been examined and
are presented in the Annexes; these are:
- a probabilistic analysis similar to that used to justify an upgrade in the pipeline pressure design factor
- a review of the draft Eurocode prEN 13445-3 for pressure vessel design.
The case studies present a trial application of aspects of the guidelines, and highlight some
abuses with the application of probabilistic methods.
The first case study involved reliability analysis to assess the failure probability of a pipeline
due to dents or gouges as a result of third party interference. The aim of the study was to show
that an increase in pipeline operating pressure would not significantly affect the probability of
pipeline failure. The study illustrates the use of a number of reliability analysis techniques. By
looking at a range of pressures the case study uncovered unexpected differences between first-
and second-order reliability results, and found sensitivity in the governing failure modes.
The second case study concerns the draft Eurocode which is introducing alternative methods for
pressure vessel design. The traditional approach, known as design by formula (DBF), is based
on a prescriptive approach using design formulae incorporating safety factors. The alternative
techniques are design by analysis (DBA) and experimental techniques. The case study
examines the different philosophies of the DBF and DBA approaches, and has identified
possible changes in safety levels between different design methods. The study also found out
that safety factors in the draft code have not been calibrated on a consistent basis, but have been
extracted from two existing codes the Danish pressure vessel code and Eurocode 3 for general
steel design.

1. INTRODUCTION
1.1 INTRODUCTION
Probabilistic structural analysis may be defined as [1]:
'the art of formulating a mathematical model within which one can ask and
get an answer to the question: What is the probability that a structure
behaves in a specified way, given that one or more of its material properties
are of a random or incompletely known nature, and/or that the actions on
the structure in some respects have random or incompletely known
properties?'
With advances in design methods and the advent of the goal-setting regime, probabilistic
analysis has become more than a research topic. This report addresses some of the uses of
probabilistic analysis. However, the rapid advance of the methodology and its widespread use,
often by inexperienced personnel using poor or limited data, mean that the techniques can be,
and are, stretched too far. This report also addresses some of the abuses of probabilistic methodology.
1.2 PROJECT BACKGROUND
This project, entitled 'Probabilistic Methods: Uses and Abuses in Structural Integrity', was
initiated under the UK Health & Safety Executive's (HSE) 1999 Competition of Ideas, and is
being undertaken by BOMEL on behalf of the Hazardous Installations Directorate.
This report is the final report of the project and covers the three main tasks. The first task is a
review of theory and practice. The second task concerns the application of probabilistic design
methods to two case studies. The final task is to prepare guidelines for the correct application
of probabilistic methods for design and assessment.
1.3 SCOPE AND OBJECTIVES OF THE REPORT
Disparate sources exist describing the basis of the different design methods in use and the
approaches to probabilistic analysis. The objective of this report is to explain the development
of the different methods of design and probabilistic analysis and the philosophy underpinning
them, to summarise the relevant theory and identify key references where appropriate, and to
present examples of in-service experience with the aid of two case studies. The main outcome
of the report is a set of guidelines for use by regulators and industry to assist in assessing work
that incorporates risk and reliability-based analysis and arguments.
The historical background to structural design including different design methods, the
development of codes and standards for design, and the development of risk and reliability
methods is introduced in Section 2. The various causes of structural failure and measures to
control them are considered in Section 3. The different procedures for deterministic and
reliability-based design and assessment are explained in more detail. Gross and human errors
are amongst the main causes of structural failure, and methods used to control them and to treat
them in probabilistic analysis are also discussed.
Probability, reliability, risk and uncertainty are often misused terms; these terms are explained
in Section 4. The different types and sources of uncertainty that influence the probability of an
event are discussed, and the objectives of structural reliability and the basic theory behind
structural reliability are also presented.
Some of the various methods of probabilistic analysis are outlined in Section 5, and Section 6
discusses some of the uses of reliability analysis and probabilistic methods.
Whilst much of the material in this report is generic in nature, the main objective of the study is
to consider probabilistic applications in the design and assessment of pressure systems.
Therefore, in Section 7, specific factors have been highlighted relevant to pipelines and pressure
systems containing hazardous substances.
In this report, the term 'structure' is used to refer to the integrity aspects of such pressure
systems.
Some of the concerns with structural reliability analysis and risk assessment are discussed in
Section 8.
The guidelines for the assessment of reliability and risk analysis are presented in Section 9.
A Glossary of the main terms is presented in Section 10. The references are given in Section
11, and key references have been highlighted.
Finally, the two case studies are presented in Annexes A and B.

2. HISTORICAL BACKGROUND TO STRUCTURAL DESIGN
2.1 SUMMARY
This Chapter outlines the historical development of design methodology from the earliest
methods based on geometric ratio, to load factor and allowable stress approaches, partial factor
methods, limit state design, and finally probabilistic and risk-based design and assessment. The
development of codes and standards, which strongly reflect the advances of these
methodologies, is also discussed.
Modern engineering design involves two steps, whether explicitly recognised or not; these are:
The Theory of Structures, in order to determine the way in which a structure actually
carries its loads.
The Strength of Materials, in order to assess whether the structural response can safely
be withstood by the material; e.g. this may involve comparing (elastic) stresses with
material properties.
In practice, the two steps cannot usually be separated, and design must be iterative - section
properties must be assigned before structural forces and stresses can be evaluated; once stresses
are evaluated, section properties can be designed. Of particular interest in this document is how
safety concepts, and the basis of design procedures have been developed.
Design practice and methodology is evolving continuously. The historical development of
modern design methods began in the 1950s, and can be briefly summarised as follows:
1950-1970 Development of structural safety concepts (e.g. Pugsley et al [2] in
the UK, Freudenthal et al [3] in the US, etc.)
1970-1985 Development of reliability theory and computational methods
1975- Reliability-based calibration of partial safety factors and
application of limit state codes
1985- Risk and reliability assessment, primarily of existing structures
Initially, structural design was based on experience and tradition, which in the most part relied
on trial-and-error. With increased understanding of mathematics and physics, the effects of
loads on structures could be calculated, and knowledge of material and component behaviour
was developed through testing. A codified approach evolved, which restricted the working
stresses in each component to prescribed limits. Although straightforward to implement, the
prescribed or allowable stress design approach has a number of disadvantages and delivers
inconsistent levels of safety. Further developments in understanding led to the specification of
the performance of the structure through explicit limit states. The partial safety factors, often
accompanying limit state design codes, could be calibrated using reliability methods to account
for the statistical uncertainties in loading and resistance.
The direct use of probabilistic methods and structural reliability analysis techniques in design is
the latest step in the evolutionary process. Although probabilistic design is at present primarily
used for nationally important structures, or structures with high consequences of failure, e.g.
nuclear power plants, dams etc., its use is growing. Many Codes of Practice now have
provisions for reliability analysis; either in the calibration of project-specific partial safety
factors or as an alternative design approach, and a draft model code has been prepared for the
use of reliability methods in design and analysis [4].
Since the early 1990s, probabilistic and quantified risk assessment has been routinely applied to
designs in many areas; structural failure represents one failure event, often the most important,
in such risk assessments.
2.2 DESIGN METHODS
Deterministic design methods include the following:
Design by geometric ratio
Load factor design
Allowable stress design
Limit state design.
Much of the discussion below is based on the development of design methods in building
structures and bridges, since for much of the past these fields have led the advances in design
methodology. The design of pressure vessels, and pipelines in particular, has lagged behind,
and it is only recently that limit state methods and probabilistic methods are being applied to
these types of structures.
2.2.1 Design by Geometric Ratio
Before mathematics and science were applied to building work, design rules were largely based
on experience and tradition. Many of these rules were based on geometric ratios to give
limits on what could safely be built. These rules were usually established by trial-and-error, and
their development involved frequent collapses. However, the approach has produced some
magnificent structures from classical times up to the gothic cathedrals and structures of the
Renaissance; many of these structures survive today. Such rules work well with masonry where
stresses are generally low and where failure involves rigid body motion; i.e. where the strength
of the structure depends on its geometry rather than the behaviour and strength of materials.
Generally, if such a structure is satisfactory, it would be expected to be satisfactory if built at
twice the scale.
Geometric ratios are still used in the building trade and as rule-of-thumb methods by designers
in a first attempt at section sizing.
Geometric ratios are also still used in many of today's codes and standards, and they form the
basis of the Classification Rules for the design of ships. In many steel design codes they are
used to categorise bending or compression members by likely failure modes (plasticity, inelastic
buckling, elastic buckling, etc), and to define limiting width-to-thickness ratios to determine an
effective width for example in stiffened plate design.
These types of rules are effective methods for simplifying codified design, and are particularly
useful where accurate answers could only be obtained by much more complex methods or finite
element (FE) analysis.
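To illustrate how such ratio rules are applied in practice, the sketch below screens a plate element by its width-to-thickness ratio b/t against two limits. The limits, category names and numbers are hypothetical placeholders chosen for illustration, not values taken from any particular code.

```python
# Hypothetical geometric-ratio classification: a plate element is screened
# by its width-to-thickness ratio b/t. The limits below are invented
# placeholders, not values from any design code.

def classify_plate(b, t, limit_compact=30.0, limit_slender=50.0):
    ratio = b / t
    if ratio <= limit_compact:
        return "compact: full plastic capacity may be assumed"
    elif ratio <= limit_slender:
        return "non-compact: inelastic buckling governs"
    return "slender: design on an effective width"

print(classify_plate(b=300.0, t=8.0))  # b/t = 37.5 -> non-compact
```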
2.2.2 Load Factor Design
Whilst the stone in masonry structures is in one sense brittle, walls and arches seldom failed
by sudden fracture of the material. However, with the development of cast iron and its use in
building and bridge construction, first as columns and later beams, such fractures and sudden
failures did occur.
Until at least the 1850s, most approaches in the UK to overcome this were based on large and
full scale testing, and proof loading. Telford, in the construction of the Menai Bridge which
was opened in 1826, load tested each bar to twice its anticipated load. W Fairbairn was also
noted for his experimental expertise; in the late 1840s he undertook an extensive experimental
programme to investigate the compressive buckling of thin plates forming large tubular sections
used in the Britannia and Conway bridges [5].
As outlined by Pugsley [2], scientific tests were used to investigate the strength of cast iron
columns in 1840 by Eaton Hodgkinson, and these were followed later by others (including L
Tetmajer in 1896 and A Ewing in 1898). The results of the tests were used to develop mean
strength formulae for columns relating slenderness ratio to axial stress at failure (first by W
Rankine in 1866, then by A Ostenfeld using Tetmajer's results).
Pugsley [2] explains that at this time design was undertaken using a load factor whereby a safe
working load was determined from the mean failure load; from the very first formula developed
by W Rankine, the load factor was varied with the nature of the loading. The use of different
load factors was investigated further by E Salmon, largely following on from his work on
railway bridges. Live train loads, which were applied very rapidly, could lead to double the
stresses from permanent dead loads. Thus it was recommended by Salmon in 1921 that the
load factor for live loads should be double the dead load factor.
However, further tests showed considerable scatter in column strengths, and it was noted that
there was considerable difference between laboratory and practical test specimens. Some years
earlier W Fairbairn (in 1864) had also recognised the effect of flaws in large castings on the
strength of beams, and had noted the significant variation in strength between castings.
In 1900 J Moncrieff identified that three margins or factors should be adopted for the design of
columns to address the three main sources of uncertainty affecting column failure. These were:
1. Accidental overload leading to elastic instability should be prevented; Moncrieff
proposed that the working load should be restricted to one-third of the Euler critical
load.
2. Geometric imperfections in the column, due to out-of-straightness or load eccentricity,
should be allowed for; an equivalent eccentricity was allowed for in the column
formulae.
3. Imperfections in the material, a significant source of weakness in large cast iron
columns, should be allowed for. Moncrieff proposed reducing the average failure
stress to one-third of its value; later, a lower bound value came to be used.
These three aspects of column safety are still relevant today.
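As a minimal sketch of Moncrieff's first margin, the following computes the Euler critical load of a pin-ended column and restricts the working load to one-third of it. The material and section properties are assumed values chosen only to show the arithmetic.

```python
import math

E = 100e9   # Young's modulus, Pa (assumed value, of the order of cast iron)
I = 2.0e-5  # second moment of area, m^4 (assumed value)
L = 4.0     # effective length of the pin-ended column, m (assumed value)

# Euler critical load for a pin-ended column: P_cr = pi^2 * E * I / L^2
P_euler = math.pi**2 * E * I / L**2

# Moncrieff's first margin: working load limited to one-third of the Euler load
P_working = P_euler / 3.0

print(f"Euler critical load:    {P_euler / 1e3:.0f} kN")
print(f"Permitted working load: {P_working / 1e3:.0f} kN")
```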
2.2.3 Allowable Stress Design
With the introduction of ductile materials - wrought iron in the nineteenth century and later mild
steel - the allowable stress approach followed on from the development of linear elastic theories.
These theories accurately represented the behaviour of the new structural materials up to a yield
stress which was taken to be the onset of failure. With the application of science and
mathematics to engineering, indeterminate structures could be analysed and the distribution of
bending and shear stresses could be worked out in detail. Much of our present theory of elastic
structures and material behaviour was largely developed in this period with work from such
leading scholars as Euler (1757), Coulomb (1773), Navier (1826), Saint-Venant (1855),
Mohr (1874), Castigliano (1879), Timoshenko (1910), etc.
Further development in structural safety occurred following an inquiry set up in 1849
concerning the failure of a number of major railway bridges, including the Dee Bridge. The
inquiry heard evidence from I K Brunel, Robert Stephenson, Locke, Fairbairn, and many other
eminent engineers of the time. As well as addressing failure, engineers also became aware of
the need to consider, and commonly to prevent, the development of permanent set. The natural
way to present such calculations was the allowable stress format, in which the stresses caused
by the nominal or characteristic design loads should not exceed an allowable or limiting stress.
The allowable stress, as it is now defined, is the yield stress or failure stress of the material
divided by a safety factor. The factor was intended to cover the uncertainties in loading,
material strength and structural behaviour with an adequate margin of safety. The safety factors
were developed over time largely, if not exclusively, on the basis of experience, and were
rarely, if ever, explicitly stated in design standards.
2.2.4 Limit State Design
Until about 1910 structural engineering went through a period of consolidation; this changed with the
First World War. With the rapid development of military aircraft during the War, it again
became common practice to demonstrate structural efficiency by testing to destruction. Biplane
wings were fabricated from timber spars and struts braced by steel wires; failure generally
occurred suddenly, and design came to be based on the specification of measured ultimate
strengths.
After the War the ever present need for efficiency and profit meant that much research effort
was devoted in many industries, in particular aeronautics, to methods of accurately predicting
the strength of redundant frames (this is still an important research area). Pioneering work by
Kazinczy and later Maier-Leibnitz, who carried out tests on clamped beams, showed that the
yield load and plastic collapse load were distinct. This led to the development of plastic theory
and methods in the 1940s, and an improved understanding of structural behaviour. J Heyman
[6], amongst others, gives further details of the historical development.
It became possible to define limits of structural performance, and in 1951 a committee under the
chairmanship of Sir Alfred Pugsley was set up to consider ways of specifying design safety
margins. Its report, published in 1955 [7], presented a tabular approach for the evaluation of
load factors based on subjective ratings given to five effects. The effects were grouped into
those influencing the probability of collapse (material, load, and accuracy of analysis), and those
influencing the consequences or seriousness of the results of collapse (personnel, and
economics). The load factor was derived from the multiple of the outcome of the assessment
for probability of collapse and the outcome for the consequences. The basic elements form the
rationale behind much of today's safety factor calibration and design philosophy.
The work of Pugsleys committee was devoted largely to the prevention of collapse; later work
addressed serviceability. It was suggested that two load factors could be considered, one related
to proof-load, defined as the load just sufficient to start appreciable permanent set, and the other
related to breaking load, or the load at which excessive permanent distortion arises.
Freudenthal sat on a committee with similar aims in the USA, which reported in 1957. Their
approach was more quantitative, and used reliability theory to address the uncertainty in the
basic parameters influencing failure. They considered risk and acceptable accident rates, and
the final results were embodied in tables that, according to Pugsley [2], showed a remarkable
degree of similarity in the overall results of the two processes.
During the 1960s and 70s further developments, particularly in Europe, led to the establishment
of limit state design methods.
2.3 CODES AND STANDARDS
2.3.1 The Historical Development of Codes and Standards
The Ancients put the onus for structural safety onto the designer/builder, and had clear rules
concerning the fate of a builder responsible for fatalities from a structural failure! The best
known example is from King Hammurabi who set down more than 280 rules or laws governing
life in Babylon in 1780 BC. The code included a number of rules for builders, which very much
followed the manner of the Code's most well-known law, which has become known as 'an eye
for an eye'.
However, by the Renaissance period, failures were considered the price of progress and were
viewed as truly an 'Act of God'.
In the UK one of the earliest laws relating to structures was proclaimed by James I in 1620,
which contained provisions relating to the thickness of walls, etc. This was followed by the first
comprehensive Building Act in 1667 following the Great Fire of London.
The Board of Trade (following recommendations of the 1849 Inquiry into rail bridge failures)
set the limiting stress for iron to be 5 tons/in²; for wrought iron this corresponded to a factor of
safety of at least 4 [2].
By the late 19th century, with the rapid advancement of scientific and mathematical
knowledge, it was considered that engineers were, or should be, more in control
of nature. This led to the formation of the Engineering Institutions in this country and later to
codification and standardisation.
The standardisation of building materials was introduced in the early 1900s. The Engineering
Standards Committee was formed in 1904 by the various engineering institutions; the
committee's publications came to be known as British Standards. The first standard involved
the standardisation of section sizes; others covered specifications for steel, and standardisation
for testing. The specification of a working stress limit for steel was introduced as early as 1909
by the London Building Byelaws. Similar regulations had been introduced in America around
the same time.
One of the first British Standards for structural design was published in 1922 for steel girder
bridges; this was based on permissible stresses and was the forerunner of BS 153 (the former
steel bridge code, published in 1958). BS 449, for structural steel design in buildings, was
published in 1932.
Many Building Regulations, and many of the Standards, of this period were very prescriptive in
nature, for example the London Building Act of 1935 specified the thickness of an external or
party house wall for a given height. The builder's responsibility was only to see that the
regulation was satisfied. If the building then collapsed, the builder would be exonerated.
Regional Byelaws, and from 1965 the Building Regulations, frequently referred to British
Standards, and this gave rise to the standpoint that compliance with the standard was satisfactory
for design purposes. However, nowadays most British Standards for design state that:
'Compliance with a British Standard does not of itself confer immunity from legal obligations.'
Furthermore, most British Standards for design, in particular structural engineering design, are
in fact Codes of Practice.
There is some confusion in the industry between the terms 'code' and 'standard'.
Compliance with standards tends to be mandatory, whereas codes tend to be merely advisory
offering guidance on what the code committees considered to be best practice at the time of
drafting. Many engineers do not appreciate this distinction, and consider codes to have a higher
standing than they actually have. This situation is not helped by the fact that in the UK both
standards and codes of practice are published by the British Standards Institution and are
referred to as British Standards. Understanding is not aided by terminology such as 'Eurocodes'
and 'International Standards'.
Standards tend to be used for materials and products where compliance with that standard must
be achieved for a material or product to be acceptable. Codes of Practice tend to be used for
design purposes where what is required is a set of principles with accompanying design rules
that enable these principles to be achieved. Traditionally, UK Codes of Practice have acted as
handbooks for designers. This was not always so in mainland Europe where codes could be
relatively small, but large handbooks were developed to help with interpreting the codes.
Typically, Standards tend to be expressed in a prescriptive way, indicating how things should be
done and not justifying reasons for doing so, nor stating the aims to be attained. Many
standards and early codes have been empirical, being based on a limited series of tests. This can
make it difficult for designers looking to go beyond the bounds of the code as the limitations of
applicability and assumptions are not always stated. Similarly, when developing early codes,
there would have been situations where test data were not available and assumptions regarding
best practice would have had to be made by the code committee. In later years these
assumptions may well become 'cast in stone' and treated with the same standing as those
clauses that were developed on the basis of relevant test data. Again, this leads to difficulties
when going beyond the bounds of the code, for example in the reassessment of existing
structures. Codes also take a very long time to develop and ratify, which means that they do not
necessarily reflect the latest research and good practice.
The drawbacks with this approach were first recognised in the aviation industry, which in 1943
recommended that codes should be stated in terms of objectives rather than specifications [8].
That is, a code should define what is to be achieved and leave the designer to choose how this
will be achieved. This is the approach adopted by most modern codes.
The 1970s were a time of marked activity in code development (that is still continuing). As
discussed by Thoft-Christensen & Baker [9] the main features have been:
- the replacement of many simple design rules by more scientifically-based calculations derived from experimental and theoretical research,
- the move towards Limit State design,
- the replacement of single safety factors or load factors by sets of partial coefficients,
- the improvement of rules for the treatment of combinations of loads and other actions,
- the use of structural reliability theory in determining rational sets of partial coefficients, and
- the preparation of model codes for different types of structural materials and forms of construction; and steps towards international code harmonisation.
Code drafting committees face a dilemma as, on one hand, they see the advantages and
flexibility that is offered by a code based on objectives. On the other hand, many designers
want prescriptive rules that enable them to produce designs quickly, safely and efficiently. This
is an issue that persists to the present day, particularly in relation to the new Eurocodes.
There were long delays in the development of early EC/EU product legislation, which was
based on the practice of incorporating detailed technical specifications in directives. The New
Approach policy for European product legislation (first adopted in 1990) has to some extent
solved this problem by specifying essential safety requirements in directives, supported by
technical detail in harmonised standards. This approach also enables the adoption of new
technology, since the essential safety requirements, which are goal setting, can be complied with
directly.
The European Committee for Standardisation (CEN) is currently producing a suite of Eurocodes for
all of the major construction materials, structural types and loading. The majority of these
Eurocodes will become available for use between 2003 and 2005. They mark a departure from
the traditional basis of preparing codes in that they represent the views of all of the 19 European
countries represented by CEN, not just one nation.
Eurocodes make a distinction between Principles and Application Rules. Principles are
differentiated from Application Rules by applying the letter P following the number. The
Principles comprise:
general statements and definitions for which there is no alternative, as well as:
requirements and analytical models for which no alternative is permitted unless
specifically stated.
The Application Rules are generally recognised rules that enable the designer to comply with
the Principles. Alternative Application Rules can be used to those given in the Eurocodes.
However, the user has to demonstrate that the alternatives satisfy the relevant Principle and are
at least equivalent with regard to resistance, serviceability and durability to the Application
Rules contained in the Eurocode.
With the current UK HSE goal-setting regime, it may be considered that safety regulations have
gone full circle, and moved away from prescriptive requirements to put the onus for safety back
on the designer/builder.
2.3.2 Limit State Design Codes
The concepts of limit state and of probabilistic safety were first presented by Max Mayer in a
thesis published in 1926. Although the concepts were well expressed, it was not until the
middle 1940s that limit state methods were first introduced into design codes in the USSR; this
was the first codified attempt to link all aspects of structural analysis, including the specification
of loads and the analysis of safety. The ideas and use of partial factors were subsequently
adopted in 1963 by the Comité Européen du Béton (CEB) [10] for reinforced concrete design.
In the late 1970s, the first attempt at unifying design rules for different types of structural
materials was undertaken [11] by the Joint Committee on Structural Safety (JCSS). JCSS also
prepared General Principles on Reliability for Structural Design (later used by the International
Organisation for Standardisation (ISO) in the revision of ISO 2394 [12]). The work of the
international JCSS formed the basis of the development of the Eurocodes. The Eurocodes
contain a general principles section, Eurocode 0 (EN 1990 Basis of Design), which gives
guidelines on structural reliability relating to safety, serviceability and durability for general
cases and those cases not covered by the other structural Eurocodes. As such, codes can be
developed for other materials or structures not covered by the Eurocodes which will be
compatible in concept and reliability with the main structural Eurocodes.
Load models in the American ANSI Standard A58 Building Code Requirements for Minimum
Design Loads in Buildings and Other Structures [13] were developed using probabilistic
criteria in the 1980s. In addition, a reliability-based code for the design of structural steel
buildings was developed by the American Institute of Steel Construction (AISC) [14].
For offshore structures, the first probability based limit state code was introduced in 1977 by the
Norwegian Petroleum Directorate (NPD) [15]. The Canadian CSA code for the design,
construction and installation of fixed offshore structures was introduced in 1989 (revised in
1992 [16]), and is believed to be the first code to use explicit target reliabilities. The American
Petroleum Institute (API) also commissioned work (by Professor Moses [17]) in the late 1970s
to develop an LRFD (Load and Resistance Factor Design) version of the popular and widely
used WSD (Working Stress Design) version of RP2A, but it was not until 1993 that the 1st
Edition of RP2A-LRFD was published [18] (although it has still not been accepted for use in the
US). RP2A-LRFD forms the basis of the forthcoming ISO Standard for Fixed Steel Structures
[19], but the safety factors, and in particular regional load factors, have not yet been calibrated
(although work is underway).
2.3.3 Reliability Methods in Codes
Many current Codes of Practice allow for the explicit calibration of project-specific partial
safety factors for unusual or novel structures, or structures subject to special circumstances -
typically the safety factor calibration would be undertaken using reliability-based methods.
Many current Codes also allow for the direct use of reliability methods in design. The DNV
(Det Norske Veritas) Rules for the design of fixed offshore structures have for many years
allowed three alternative design approaches: allowable stress, partial coefficient, and reliability
design. DNV also have a useful Classification Note for the practical use of structural reliability
methods [20]; this is primarily intended for marine structures, but much of the material is
generic in nature. Since 1998 there has also been an ISO standard covering the general
principles for the use of structural reliability [12].
2.3.4 Model Code
In 1989 the first steps towards a model code for the direct use of reliability in design were taken
by Ditlevsen & Madsen [4]. This document is a proposal, or working document developed
under the auspices of the Joint Committee on Structural Safety (JCSS). Unfortunately, little
further development has been published and the document, which is viewed by some to be
rather academic, is not believed to be widely used.
Renewed effort by the JCSS is currently underway to develop a new JCSS Probabilistic Model
Code [21]. A draft was first published on the Internet in March 2001, and is intended to be
adapted and extended a number of times to cover all aspects of structural engineering.
2.4 RISK AND RELIABILITY METHODS
The subject of structural reliability assessment has its origins in the work of Freudenthal [3],
Pugsley [2], Torroja and others carried out in the 1940s and 1950s. From these early times
when the basic philosophy and some simple calculation procedures were first conceived, there
have been extensive and far-reaching developments, so that at the present time there are now
well-developed theories and a number of basic methods which have very wide support on a
global scale.
Research into Probabilistic Risk Assessment (PRA; similar terms used in other industries
include PSA, probabilistic safety assessment, and QRA, quantified risk assessment) in the
offshore industry started in the late 1970s [22] based on experiences in the nuclear and
aeronautical industries. The first guidelines on risk assessment for safety evaluation were
published by NPD [23]; these required risk assessment studies to be carried out for all new
offshore installations at the conceptual design stage.
Probabilistic methods and risk assessment also started to be applied in the process industry in
the late 1970s following a number of major disasters such as Flixborough. It is now a well
established tool for assessing most types of planned and existing chemical and hazardous
materials installations, i.e. major accident hazard installations.
Methodologies for both reliability and risk analysis were advanced significantly in the 1980s,
and, with the advent of cheap computing, software was developed. Techniques for accounting for
the benefits of inspection on reliability using Bayesian updating were developed from research
in the aeronautical and offshore industries.
The Piper Alpha disaster in 1988, and the subsequent inquiry reported by Lord Cullen in 1990 [24],
provided the impetus for fundamental changes in the way in which safety is managed and regulated
in the UK, especially for the offshore industry. There has been a move away from prescriptive
regulations to those which set safety goals, allied to greater emphasis on the explicit assessment
of risks and their management.
The three key features of the UK approach are:
hazard identification
risk analysis
formal demonstration that major risks had been reduced 'to as low as is reasonably
practicable' (ALARP).
QRA is used by regulators and industry for the assessment of risks from onshore hazardous
installations, and is becoming an increasingly widely used tool within the major hazard and
transportation industries in the UK. In particular, PRA and QRA methods are now being
applied to pipelines; this was led by offshore pipelines, but the methods are now increasingly
being applied to onshore lines as well.
The quantitative risk assessment process is based on answering three questions:
i. What can happen, i.e. scenarios?
ii. How likely is a scenario to happen and to lead to failure, i.e. probabilities?
iii. If a failure scenario does happen, what are the consequences?
As explained later in this report, structural reliability analysis is primarily concerned with
addressing the second question, and (usually) only those scenarios that involve structural failure.
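These three questions can be made concrete with a toy calculation. The sketch below tabulates a few hypothetical pipeline failure scenarios (the names, probabilities and consequence values are invented) and forms the probability-weighted sum of consequences, which is the basic quantity a QRA aggregates.

```python
# Toy QRA bookkeeping: (scenario, annual probability, consequence).
# All numbers are invented, for illustration only.
scenarios = [
    ("third party gouge -> leak",    1e-4, 2.0),   # consequence in arbitrary units
    ("third party gouge -> rupture", 1e-5, 50.0),
    ("corrosion -> leak",            5e-5, 2.0),
]

# Risk is the probability-weighted sum of consequences over all scenarios
total_risk = sum(p * c for _, p, c in scenarios)

for name, p, c in scenarios:
    print(f"{name:30s} p = {p:.0e}  consequence = {c}")
print(f"Total annual risk: {total_risk:.1e} (arbitrary units)")
```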


3. DETERMINISTIC AND RELIABILITY-BASED DESIGN AND
ASSESSMENT PROCEDURES
3.1 SUMMARY
The fundamental aim of structural design procedures and maintenance decisions is to obtain a
structure with an economical design and a sufficient degree of reliability.
The reliability of structures is traditionally achieved by deterministic methods employing one or
more explicit safety factors and a number of implicit measures. The explicit safety factors
depend on the safety format adopted, which may be either allowable stress (also known as
permissible stress or working stress design) or partial factor design. Safety margins are
enhanced implicitly by a number of other factors, including the use of conservative estimates of
parameters, and using methods of analysis that give lower bound solutions to collapse loads.
Structural reliability analysis methods and probabilistic techniques have been used for a number
of years to assess and calibrate the safety factors in many design codes throughout the world.
For a number of years they have been used to set inspection and maintenance programmes,
particularly for offshore structures. These methods are now being used explicitly to design
structures.
This Chapter introduces the basic causes of structural failure and the various risk reduction
measures that are used to control them. The fundamental requirements of structural design are
presented and the various design approaches are then discussed in detail, including the UK
Safety Case Regulations. The main drawbacks with each approach, which provided motivation
for further development, are also discussed.
3.2 CAUSES OF STRUCTURAL FAILURE AND RISK REDUCTION MEASURES
Before looking at methods and principles of design, it is necessary to consider the causes of
structural failure. The basic causes of structural failure can be classified into four categories as
shown in Table 3.1. This classification only highlights the main causes, and is a gross
simplification of reality because structures rarely fail solely due to one shortcoming, but due to a
sequence of events.
The table also shows some of the types of measures that may be taken to reduce the risk for
each category of failure cause.
Table 3.1 is based on a simple categorisation. Alternative categorisations include work by
Blockley [25], who suggests eight categories for the causes of failure and proposes an approach
to judging the likelihood of a structural failure based on a list of 25 questions.




Cause of failure                                     Risk reduction method

Limit states:                                        Increased safety factors
  Overload (geophysical, dead, internal pressure,    Testing (e.g. hydrotest)
    temperature, wind, etc)                          In-service inspection
  Under strength (materials, instability)
  Movement (settlement, creep, shrinkage, etc)
  Deterioration (fatigue, corrosion, hydrogen
    embrittlement, stress corrosion cracking,
    erosion, etc)

Accidental or random hazards:                        Design for damage tolerance
  Fire                                                 (including selection of material)
  Explosion (accidental, sabotage)                   Protective measures (e.g. pressure/
  Third party activity (impact)                        explosion relief valves, fire protection)
                                                     Event control

Human errors or gross errors in:                     Quality assurance and quality control
  Design                                             Independent verification/assessment/
  Fabrication                                          peer review
  Operation                                          Event control
                                                     Inspection/repair
                                                     Design for damage tolerance
                                                     Protective measures

Unknown phenomena                                    Research and development

Table 3.1  Causes of structural failure and risk reduction methods
This report primarily concerns the first two categories for causes of failure in Table 3.1.
The methods used to reduce the risk of failure due to the first category are generally considered
to be fundamental design requirements. Failure under a design limit state generally occurs
because a less than adequate safety margin was provided to cover normal uncertainties in
loads and resistances.
Probably the most important step to reducing the risk of failure due to accidental or random
hazards is to identify the hazards - techniques such as HAZID and HAZOP analysis are used.
Once a hazard has been identified it becomes a foreseeable event, and limit states can be
developed so that it can be considered part of the fundamental design requirements. System
design methods to improve robustness and redundancy are also important steps for improving
damage tolerance.
The nature of human errors or gross errors differs from that of natural phenomena and normal
man-made variability and uncertainty, and different safety measures are required to control
error-induced risks; gross errors and their treatment are briefly discussed in Section 3.6.
Unknown phenomena are not addressed; for all but exceptional or unique structures, failure due
to totally unknown phenomena is now very rare. The prevention of failures due to unknown
phenomena that arise from a lack of knowledge within the profession as a whole is clearly
impossible. However, it is important to distinguish between unknown phenomena and
unidentified phenomena due to a lack of awareness of the designer. Unidentified phenomena
can be categorised as human or gross errors, and the risk may be reduced by independent
checking or peer review.
Typically, the risk reduction measures in Table 3.1 are aimed either at reducing the likelihood
and/or probability of a failure event:
- increasing factors of safety
- in-service inspection
- assuring quality in design, fabrication and operation
or at reducing and/or mitigating the consequences of a failure event:
- increasing damage tolerance, through redundancy and material selection
- event control measures, e.g. water curtain sprinklers, safe refuges, evacuation procedures.
Where possible, the most effective measure is to seek to eliminate a risk through good design.
This principle of prevention is also stated in the EU Framework Directive, which indicates that,
for safety, risks should preferably be avoided. Clearly, it is not always possible to completely
avoid risks, particularly in pressure systems. Engineering is about looking for and facing up to
risks and minimising and dealing with them safely by adopting a balanced and informed
response.
3.3 FUNDAMENTAL DESIGN REQUIREMENTS
The ISO (International Organisation for Standardisation) standard definition of the
fundamental requirements for structures [12], which is as good as any, is given as:
Structures and structural elements shall be designed, constructed and
maintained in such a way that they are suited for their use during the design
working life and in an economical way.
The ISO standard for General Principles then identifies a number of requirements that should be
fulfilled, with appropriate degrees of reliability. These requirements for structures are:
- They shall perform adequately under all expected actions
- They shall withstand extreme and/or frequently repeated actions
occurring during their construction and anticipated use
- They shall not be damaged by events like fire, explosions, impact or
consequences of human errors, to an extent disproportionate to the
original cause [a robustness requirement].
The appropriate degree of reliability should be judged with due regard to
the possible consequences of failure and the expense, level of effort and
procedures necessary to reduce the risk of failure.
These requirements are fundamental for structural design, and most design codes and standards
adhere to them in some way or another. In traditional design codes the first requirement may be
termed the Operating or Normal condition, the second requirement the Extreme or Abnormal
condition, and the third a Robustness or Survivability requirement. In limit state codes the first
is termed a Serviceability Limit State requirement, the second an Ultimate Limit State
requirement, and the third a Progressive Collapse or Accidental Limit State requirement.
Some codes may also explicitly specify additional requirements. For instance the DNV Rules
[26] for offshore structures aim for structures and structural elements to be designed to:
- have adequate durability against deterioration during the design life of the
structure.
In the Norwegian offshore structure codes (NPD, DNV and the recent NORSOK standards), this is
termed a Fatigue Limit State requirement.
3.4 DETERMINISTIC DESIGN AND ASSESSMENT PROCEDURES
In traditional deterministic methods of design and analysis, required levels of structural safety
(structural reliability) are achieved by the use of:
Conservatively assessed characteristic or representative values of the basic design
variables; in conjunction with
A safety factor, or set of partial safety factors (partial coefficients) based on judgement
and evidence of satisfactory performance over a period of time, or more recently on
reliability-based calibration exercise; and using
An appropriate method of global structural analysis (e.g. linear static, linear dynamic,
nonlinear static, nonlinear dynamic, etc); together with
A particular set of equations defining the capacity of individual structural components
usually contained in the relevant Code of Practice.
Such deterministic methods of design or safety checking can be defined as Level 1 design
methods [27]. Level 1 methods are deterministic reliability methods and are defined as:
Design methods in which appropriate degrees of structural reliability are
provided on a structural element basis (occasionally on a structural basis)
by the use of a number of partial safety factors, or partial coefficients,
related to pre-defined characteristic or nominal values of the major
structural and loading variables.
As discussed in Section 3.5, the partial factors may be calibrated using higher level reliability
methods to achieve a specified target reliability.
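As a minimal illustration of what such a higher level method computes, the sketch below evaluates the classical second-moment (Cornell) reliability index for a component with independent, normally distributed resistance R and load effect S, and converts it to a notional failure probability. The means and standard deviations are assumed values for illustration only; a calibration exercise would adjust the partial factors until designs produced by the Level 1 format achieve a target index of this kind.

```python
import math

def std_normal_cdf(x):
    # Standard normal cumulative distribution via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

mu_R, sigma_R = 350.0, 25.0  # resistance, MPa (assumed values)
mu_S, sigma_S = 220.0, 30.0  # load effect, MPa (assumed values)

# For independent normal R and S the margin M = R - S is also normal,
# and the Cornell reliability index is:
beta = (mu_R - mu_S) / math.sqrt(sigma_R**2 + sigma_S**2)
p_f = std_normal_cdf(-beta)  # notional failure probability P(M < 0)

print(f"beta = {beta:.2f}, Pf = {p_f:.1e}")
```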
The main disadvantages of all deterministic design methods are that:
(a) Properties and partial safety factors (where used) are often not given as best estimates
or most likely values, with the result that it is not possible to estimate the most likely
strength of the structure.
(b) The risk of failure or collapse, or the overload necessary to cause failure or collapse,
may vary widely for different structural members and components, and different types
of structure.
(c) The assumption that most design parameters are known constants rather than statistical
variables is in most cases a gross simplification.
(d) The safety factor approach is not so easy to apply in assessment of existing structures
and for making maintenance decisions.
The different deterministic design approaches are discussed further in the following sections.
3.4.1 Allowable, Permissible or Working-Stress Design, and Load Factor
Approaches
Allowable or permissible stress design, or working stress design as it is referred to in the US, is
a traditional elastic design method that has been used extensively for the design of many types
of structures worldwide.
The design format in design codes or standards based on allowable stress principles is of the
form:

    S_c ≤ R_c / SF                                                  (3.1)

where S_c is the load effect or stress in the component due to the applied design loading, R_c is
the specified resistance or design resistance of the component for the considered failure mode
(a function of the specified yield strength of the material), and SF is the corresponding safety
factor accounting for all uncertainties in load, resistance, analysis methods, etc.
By factoring the yield stress, the intention for linear elastic materials is that the stresses should remain elastic, and for this reason the approach is sometimes referred to as the elastic theory design method.
In this form, a load factor(s) is not applied and the design load equals the characteristic
load.
The safety factor may be implicit in the design checking formulae, it may be explicitly
stated, or it may be a combination of the two.
The basic philosophy is very simple, and this combined with its ease of application is
the main advantage of the approach.
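As an illustrative aside, the check of Eqn (3.1) amounts to a few lines of code; the following Python sketch uses hypothetical values (140 MPa applied stress, 235 MPa yield strength, safety factor 1.5) chosen purely for illustration.

    def allowable_stress_check(stress_mpa, yield_strength_mpa, safety_factor):
        # Allowable stress is the factored-down yield strength: R_c / SF
        allowable = yield_strength_mpa / safety_factor
        return stress_mpa <= allowable

    print(allowable_stress_check(140.0, 235.0, 1.5))   # True (allowable is about 156.7 MPa)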
However, as discussed below, complications arise for a number of reasons. These
complications have led to a number of interpretations of the basic format in different codes and
standards, and in different countries.
A significant complication arises with the format because, in general, each component in a
structure needs to be checked for a number of different combinations of loading. Many of the
sources of loading vary with time, and it would lead to unconservative design if all the sources
of load were considered to be acting with their full design value while maintaining the same
safety factor.
This is overcome in a variety of ways. In some codes, different characteristic loads or return
periods are specified for different combinations of loads. In others, the safety factor may be
reduced (or an increase in allowable stresses permitted) for combinations involving less frequent
or very short duration loading events (e.g. extreme storm or rare intense earthquake). Some
codes employ a mixture of the two methods.
The main disadvantage with the allowable stress approach is that all of the associated
uncertainties are incorporated into one safety factor. As a result, this approach may give
inconsistent safety levels which in general are conservative, but which in some cases can lead to
unconservative design.
This is particularly serious where loads from different sources (and different levels of
uncertainty) are in opposition. One of the most notable examples where this was the main cause
of failure is the Ferrybridge Cooling Towers in Yorkshire, which collapsed in 1965. The gravity force almost exactly opposed the design wind pressure, which led to the omission of vertical tensile reinforcement over much of the towers. Collapse occurred because the design wind pressure was underestimated: it was based on an isolated tower, with no allowance for the fact that there were eight towers closely grouped together, and wind tunnel model test results were misinterpreted.
A further difficulty arises in the assessment of buckling; although the material may behave
elastically the member as a whole behaves nonlinearly. The allowable stress format must be
modified to accommodate this, with the unfortunate consequence that either the calculated
stress, the allowable stress, or both become rather artificial concepts and do not reflect the
elastic stresses that actually occur at failure.
There are also a number of shortcomings of elastic theory design methods when applied to
reinforced concrete. The stress-strain behaviour of concrete is time-dependent, and creep strains
can cause a substantial redistribution of stress in a reinforced concrete section which means that
the stresses that actually exist at the service loads bear little relation to the calculated design
stresses.
The philosophy of the allowable stress approach is also stretched in a number of other areas,
particularly bolted or welded connections. With the advent of computers and detailed analysis,
local areas of connections can often be shown to exhibit high theoretical elastic stresses. In
many circumstances for steel structures this is not a problem because of the ability of mild steel
to yield locally and redistribute forces. The search for improved analysis tools to better predict
the strength of connections, plates in bending, and redundant frames led to the development of
yield-line analysis and plastic theory.
A further criticism of the approach is that it does not provide a framework of logical reasoning
through which all the limiting conditions on a structure can be examined, i.e. deflections,
cracking, etc. It is often said that there is too much emphasis on elastic stresses and too little
emphasis on the limiting conditions controlling the success of the structure in use.
One modification is the load factor method, in which the safety factor is applied to the load and
not the material. The load factor is the theoretical factor by which a set of loads acting on the
structure must be multiplied to just cause structural or component failure (collapse).
The load factor method was originally used for brittle materials, particularly cast iron. It was
also popular with reinforced concrete design in the mid 1950s. With advances in knowledge of
the actual behaviour of structural concrete, design could be based on ultimate strength in
which inelastic (plastic) strains are taken into account. The load factor concept was also used in
the plastic theory of structures, and is used in BS 5950 for the design of steelwork [28]
(although in this standard the approach is perhaps more correctly denoted a partial factor
approach with the material factor set to unity).
The load factor method overcomes the difficulty with buckling in the allowable stress method,
but its disadvantage is that it becomes difficult to apply when the structure is composed of
different materials that require different safety factors.
3.4.2 Partial Factor, Partial Coefficient or Load and Resistance Factor Design
Approaches
The partial factor or partial coefficient approach, or Load and Resistance Factor Design
(LRFD) approach as it is referred to in the US, uses a number of partial factors that are applied
to the resistance terms for different component types, and also to the basic load types prior to
structural analysis. Basic load types depend on the type of structure, but include:

- permanent loads, e.g. dead or gravity loads,
- live loads, e.g. operational loads, pressure and temperature, etc.,
- dynamic loads, e.g. impact and shock loads, slugs in pipelines, etc.,
- environmental loads, e.g. wind, snow, etc.
The partial factors reflect the level of uncertainty of the basic terms, and vary in magnitude
according to the component and combinations to which they are applied.
The partial factor format for a basic component design check is, at its simplest:

    S_d(Σ γ_i L_i) ≤ R_d = (1/γ_m) R_k    (3.2)

where R_d is the design resistance of the component for the considered failure mode,
S_d is the internal design load effect (or stress) on the component, and is evaluated from the most unfavourable combination of factored applied loads,
γ_i is the load factor, or coefficient, for load type i (> 1.0 for detrimental effects, < 1.0 if beneficial),
L_i is the load type i based on the characteristic loading (e.g. dead or gravity, live or operational, environmental),
γ_m is the component resistance factor, or sometimes material factor (in the US LRFD format this is replaced by a φ-factor, φ = 1/γ_m),
R_k is the nominal resistance derived from formulae evaluated with the specified characteristic values of material and geometry.
The advantage of the partial factor approach is that the uncertainty is reflected in both the loading and the strength terms, rather than in a single safety factor as in WSD.
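As a minimal sketch of the check in Eqn (3.2) for a linear structure, where the design load effect is simply the factored sum of the characteristic load effects, the following uses hypothetical load effects, partial factors and nominal resistance.

    def partial_factor_check(load_effects, load_factors, r_k, gamma_m):
        # Design load effect for a linear structure: S_d = sum(gamma_i * L_i)
        s_d = sum(g * l for g, l in zip(load_factors, load_effects))
        r_d = r_k / gamma_m              # design resistance: R_d = R_k / gamma_m
        return s_d <= r_d, s_d, r_d

    # Hypothetical dead, live and environmental load effects (kN) and factors
    ok, s_d, r_d = partial_factor_check([100.0, 60.0, 40.0], [1.3, 1.3, 1.0], 300.0, 1.15)
    print(ok, s_d, r_d)                  # True 248.0 260.87 (approx.)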
However, the partial factor format represented in Eqn (3.2), although simple, is often a source of
confusion.
The main misunderstanding arises from the determination of the design load effect in a
component. This should be evaluated in practice by factoring the basic load cases to form a
load combination of the design loads before undertaking a structural analysis to determine local
member forces or stresses. Due to the large number of load combinations that have to be
analysed in practice, a short-cut is to undertake the structural analyses for basic load cases, and
then factor and combine the component load effects, forces or stresses, by superposition. For
linear elastic structures the two approaches are of course equivalent, and many engineers
(particularly those experienced in WSD methods, where the safety factor is applied at
component level, see Eqn (3.1)) may be unaware of the distinction. However, for structures
influenced by dynamic effects or nonlinear behaviour, even in part, superposition is no longer
valid, and the distinction is important.
Unfortunately, for some types of structures and analyses the load effects have to be determined
by factoring and combining the results of analysis for basic load cases. This situation arises
when the structure is in equilibrium (e.g. in an installation phase), and in particular where all or
part of the structure is buoyant (e.g. subsea pipeline spans).
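The distinction can be made concrete with a deliberately simple sketch using two hypothetical single-variable response functions: for a linear response the two routes coincide, whereas for a nonlinear response superposition of factored load effects gives the wrong answer.

    def linear_response(load):
        return 2.0 * load                        # load effect proportional to load

    def nonlinear_response(load):
        return 2.0 * load + 0.01 * load ** 2     # mildly nonlinear response

    loads, factors = [100.0, 50.0], [1.3, 1.5]   # hypothetical load cases and factors
    combined = sum(f * l for f, l in zip(factors, loads))   # factor first, then analyse

    print(linear_response(combined))                                       # 410.0
    print(sum(f * linear_response(l) for f, l in zip(factors, loads)))     # 410.0 (same)
    print(nonlinear_response(combined))                                    # 830.25
    print(sum(f * nonlinear_response(l) for f, l in zip(factors, loads)))  # 577.5 (differs)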
Whilst Eqn (3.2) illustrates the principles of partial factor design, it is not general enough for
many situations, e.g. design checks involving stress interaction effects, composite materials, etc.
Thus, the basic format can be expressed more generally as:

    function(S_d, R_d) ≥ 0    (3.3)
Any number of partial factors can be introduced, and ideally each source of uncertainty should have its own associated partial factor, although in practice this may make a code very unwieldy.
In general, partial factors are applied as follows:

    A_d = γ_A A_k  or  A_d = A_k / γ_A    (3.4)

where A_k is the characteristic or representative value of the variable, and A_d is the design value of the variable.
The format was first internationalised by ISO in 1983 [29], which introduced partial factors for
seven basic sources of uncertainty. This format formed the basis of BS 5400 [30], the
Eurocodes [31], etc. However, for a number of reasons the format was made more general in a
later revision of the ISO standard [12].
The internationally approved format in the ISO standard [12], which is of course intended for use with limit state design (see the discussion of limit state design philosophy below), is:
    function(F_d, f_d, a_d, θ_d, C, γ_n) ≥ 0    (3.5)

where F_d are the design values of the actions (or loads), determined from F_d = γ_f F_r, where F_r are representative values of the actions,
f_d are the design values of material properties, determined from f_d = f_k / γ_m,
a_d are the design values of geometric quantities, determined from a_d = a_k ± Δa,
θ_d are the design values of the model uncertainties not included in the load and resistance variables, determined from θ_d = γ_D or 1/γ_D,
C is a vector of serviceability constraints, e.g. acceptable deflection,
γ_n is a coefficient reflecting the importance of the structure.
Eqn (3.5) should be regarded only as a symbolic description of the principles, and each symbol
in Eqn (3.5) may be regarded as a single variable or a vector containing several variables. In
this generalised form the format is cumbersome, but in most cases many of the partial factors
are set to unity.
The internationally approved format uses partial factors as follows:

- γ_f for actions (or loads), which take account of:
  - the possibility of unfavourable deviation of the action value (or load) from its representative value (separate factors may be defined for each type of loading)
  - the uncertainty in the assessment of the effects of action (or loading), i.e. unforeseen stress distribution in the structure, and variations in dimensional accuracy achieved during fabrication
- γ_m for materials, which take account of:
  - the possibility of unfavourable deviations in the material properties from the characteristic value (separate factors may be defined for each type of material)
  - uncertainties in the conversion of parameters derived from test results into design parameters
- Δa for additive geometric quantities, i.e. a_d = a_k ± Δa, which take account of:
  - the possibility of unfavourable deviations in the geometric properties from the characteristic value
- γ_D for model uncertainties, which take account of:
  - uncertainties of models (i.e. the codified formulae used to predict capacity) as far as can be found from measurements or comparative calculations
- γ_n, a coefficient by which the importance of the structure and consequences of failure, including the significance of the type of failure, are taken into account.
The representative value of an action (or load) is derived from the characteristic value and is
factored by load combination factors to take into account the reduced probability that various
loadings acting together will attain their nominal values simultaneously. A factor is also
introduced to account for favourable or unfavourable contributions from an action.
The values of the partial factors depend on the design situation and the limit state considered.
The basic format is often simplified in practice by combining together many of these factors, or
by taking some to be unity.
The partial safety factors applied to both loads and strength can be calibrated using reliability
methods; this is discussed further in Section 6.2. This permits the loading uncertainty to be
accounted for in the load factors, and the uncertainty in yield stress and resistance modelling to
be accounted for in the resistance and material partial factors. Whilst the partial factors may be
derived using structural reliability methods, this is transparent to a designer using the code.
A disadvantage with the partial factor approach, that is a prime reason why the approach is not
more widely and readily adopted, is that the increased likelihood of design error (because of the
increased complexity) may outweigh the benefits of a theoretically better method.
Limit State design philosophy
A limit state is generally understood as a state of the structure or part of the structure that no
longer meets the requirements laid down for its performance or operation. Thus, limit states can
be defined as a specified set of states that separate a desired state from an undesirable state
which fails to meet the design requirements. More generally, they may be considered without a
specific physical interpretation, such that a Limit State is a mathematical criterion that
categorises any set of values of the relevant structural variables (loads, material and geometrical
variables) into one of two categories - the desirable category (also known as the safe set) and
the adverse category (often referred to as the failure set). The word 'failure' then means failure to satisfy the Limit State criterion, rather than failure in the sense of some dramatic physical event.
In Codes of Practice, Limit States are considered to represent the various conditions in which a
structure would be considered to have failed to fulfil the purposes for which it was built.
Normally, limit states relate to material strength, but they are affected by use, performance,
environment, material behaviour, shape, quality, protective measures and maintenance.
Limit States may be defined for components or parts of a structure such as stiffened panels,
stiffeners, etc. or for the complete structural system i.e. pressure vessel, pipeline, etc.
A component, or system, may fail a limit state in any (one) of a number of failure modes. Modes of failure (at both component and system levels) may include mechanisms such as:

- yielding
- bursting
- ovality
- bending
- buckling (local or large scale)
- creep
- ratcheting
- de-lamination
- denting
- fatigue
- fracture
- corrosion (internal and/or external)
- erosion
- environmental cracking
- excessive displacement
- excessive vibration

which, in the extreme, lead to the loss of structural integrity or containment.
The consequences of such failures can affect:

- safety of life
- environment (e.g. pollution)
- operations
- economics
A limit state code may be based either on an allowable stress format or a partial factor format, although most are based on the latter. In older traditional allowable stress codes the limit states are normally inherent or implicit within the code. In a limit state code they are explicitly referenced.
Alternatively, the code-checking equations for the various limit states can be used (without partial safety factors) in a reliability analysis to ensure that the failure probability of components or of the structural system does not exceed an acceptable target level.
The internationally approved format in ISO 2394 [12] for general principles is to categorise Limit States as:

- Ultimate limit states (ULS), which correspond to the maximum load carrying capacity, and include all types of collapse behaviour.
- Serviceability limit states (SLS), which concern normal functional use and all aspects of the structure at working loads.
Conditions exceeding some serviceability limit states may be reversible; conditions exceeding
ultimate limit states are never reversible.
This is also the format adopted in the Eurocodes [31].
However for many codes, including other ISO standards, additional limit states are defined. For
example, two further limit states are defined in the (Draft) ISO standard 13819-1 for fixed
offshore structures [19], and some Norwegian standards (the DNV Rules for fixed offshore
structures [26] and subsea pipelines [32], and the NORSOK Standards [33] which largely
replace the DNV and NPD standards). These other limit states are as follows:
- Fatigue Limit State (FLS): a condition accounting for accumulated cyclic or repetitive load effects during the life span of the structure.
- Accidental or Accidental Damage Limit State (ALS): a condition caused by accidental loads, which if exceeded implies loss of structural integrity. Two conditions may be defined:
  - resistance to abnormal loads
  - resistance in a damaged condition
For some forms of structure, an ALS is sometimes referred to as a Progressive Limit State
(PLS).
Some examples of these Limit States for generic structures are listed here, followed by some
specific examples for pressure vessels and pipeline systems.
Ultimate Limit State (ULS)

This corresponds to the maximum resistance to applied actions, which includes:

- failure of critical components of the structure caused by exceeding the ultimate strength or the ultimate deformation of the components,
- transformation of the structure or part of it into a mechanism (collapse or excessive deformation),
- instability of the structure or part of it (buckling, etc.).
Serviceability Limit State (SLS)

This relates to limits of normal operations, which include:

- deformations or movements that affect the efficient use of structural or non-structural components, e.g. as would prevent pipeline pigging,
- excessive vibrations producing discomfort or affecting non-structural components or equipment (especially if resonance occurs),
- local damage that affects the use of structural or non-structural components,
- corrosion that affects the properties and geometrical parameters of the structural and non-structural components.
Fatigue Limit State (FLS)
This refers to cumulative damage due to repeated actions leading to fracture.
Detail connections of structural components under repetitive loading are prone to fatigue; examples include:

- tubular joints in offshore structures subject to large numbers of wave cycles,
- stiffeners and attachments to road and railway bridges,
- connections in radio masts and transmission towers subject to wind induced vibration,
- welds in pipelines subject to start-up and shutdown cycles,
- nozzles, brackets and longitudinal connections to pressure vessels subject to operational cycles.
Inspection and any required maintenance must be carried out in the field and generally with as
little interruption to operation and production as possible. Inspection is costly, can be
hazardous, and often difficult because of access limitations. Thus, there is a very strong desire
to prevent any fatigue failures initiating.
Accidental Limit State (ALS)
This Limit State is primarily used for offshore structures, where the intention is to ensure that
the structure can tolerate the damage due to specified foreseeable accidental events and
subsequently maintain structural integrity for a sufficient period under specified environmental
conditions to enable evacuation to take place.
For pipelines, impacts due to third party external interference, which may lead to pipeline
rupture, may be considered in this category.
These loads (actions) are often defined as events with return periods of 10,000 years or more,
compared with those for Ultimate Limit States that are generally based on events with return
periods that are a much smaller multiple of the design life, typically 50- or 100-year events.
Limit States for pressure vessel design
The (draft) Eurocode for pressure vessels prEN 13445-3 [34] is based on traditional permissible
stress methods, but it does include an Annex covering alternative design methods. In the
Design By Analysis method limit states are classified as either ultimate or serviceability:
- an ultimate limit state is defined as a structural condition (of the component or vessel) beyond which the safety of personnel could be endangered;
- a serviceability limit state is defined as a structural condition (of the component or vessel) beyond which the service criteria specified for the component are no longer met.
Limit States for pipeline design
A number of definitions for limit states for operating pipelines have been proposed. Most use
the concepts of Serviceability Limit State (SLS) and Ultimate Limit State (ULS) and many of
these are confined to these two limit states only. Descriptions or definitions vary, as illustrated
in the following, taken from References [32], [35], and [36]:
- DNV [32]. SLS: a condition, which if exceeded, renders the pipeline unsuitable for normal operations. ULS: a condition, which if exceeded, compromises the integrity of the pipeline.
- Oude Hengel [35]. SLS: [a limit state] that may lead to a restriction of the intended operation of the pipeline. ULS: [a limit state] that could result in burst or collapse of the pipeline.
- Zimmerman [36]. SLS: [a limit state] related to functional requirements. ULS: [a limit state] related to load carrying capacity.
- Key words. SLS: impediment to normal operations, intended operation, functional requirements. ULS: loss of integrity, load carrying capacity; burst/collapse.
Examples of Ultimate Limit States include leaks and ruptures. Examples of Serviceability Limit
States include permanent deformation due to yielding or denting.
As discussed above, the DNV Rules also use limit states for:

- Fatigue Limit State (FLS): a ULS condition accounting for accumulated cyclic load effects.
- Accidental Limit State (ALS): a condition, which if exceeded, implies loss of structural integrity, caused by accidental loads.
Kaye [37] takes a different, more practical approach to the definition of limit states. He defines four limit states, in descending order of severity, in the following way:
i. Major System Failure: causing or leading to sudden failure (rupture), possibly
resulting in fatalities, damage to installations and environmental damage.
ii. Minor System Failure: causing or leading to loss of containment, possibly resulting in
environmental damage.
iii. Operability: causing loss of operability, without loss of safety. The transport of
product is reduced or ceases. Pipeline operation may be recovered by repair or
revision of operating procedures.
iv. Serviceability: causing impairment and possible loss of serviceability. The pipeline is
able to operate but integrity is impaired. Remedial action may be necessary to service
or maintain the system.
3.4.3 Characteristic Values in Limit State Design
The term characteristic value for strength and load variables was originally introduced in the late 1950s by the Conseil International du Bâtiment pour la Recherche, l'Étude et la Documentation (CIB) and was first discussed in the UK by Thomas [38].
Ideally in structural design, loading intensities and material strengths would be chosen on the
assumption that they represent the maximum load intensity to which the structure will be
subjected and the minimum strength that can occur. In reality few basic variables have clearly
defined upper or lower limits that can be sensibly used in design. A more rational approach is
to specify a characteristic value of load which has a stated small probability of being exceeded
during the life of the structure, and for materials, a characteristic value of strength to be
exceeded by all but some stated small proportion of test results.
The characteristic value of a basic variable X is defined as the p-th fractile of X (taken towards unfavourable values). Statistical uncertainty, which is often present due to small datasets in practice, is included by defining a confidence level in the value, e.g. the 5% fractile at the 75% confidence level. The basis of the selection of the probability p is to some extent arbitrary.
In practice a specified characteristic value (specified value) is used for design, since the actual
distribution of a particular material strength, for instance, will evolve with manufacturing
processes etc., and will vary from producer to producer.
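As a minimal sketch, a characteristic strength might be taken as the 5% fractile of a normal distribution fitted to test data; the sample below is hypothetical, and a rigorous treatment of the confidence level (e.g. the 5% fractile at 75% confidence) would use one-sided statistical tolerance limits rather than the simple fractile shown.

    import statistics

    sample = [352.0, 348.0, 361.0, 355.0, 343.0, 358.0, 350.0, 347.0]  # yield stress, MPa
    mean = statistics.mean(sample)
    sd = statistics.stdev(sample)        # sample standard deviation
    x_k = mean - 1.645 * sd              # 5% fractile of the fitted normal
    print(round(x_k, 1))                 # characteristic yield stress estimate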
3.5 PROBABILISTIC DESIGN AND ASSESSMENT PROCEDURES
Probabilistic analysis, based on structural reliability analysis methods, is an extension of
deterministic analysis since deterministic quantities can be interpreted as random variables of a
particularly trivial nature in which their density functions are contracted to spikes and in which
their standard deviations tend to zero.
Variations in the values of the basic engineering parameters occur because of the natural
physical variability, because of poor information, and because of accidental events involving human error. In the past, emphasis has been focused on the first two categories, but the last is equally if not more important (see Section 3.6).
In addition to the uncertainties associated with the individual load and strength parameters
(basic variables) which are mentioned above, it is well known that both the methods of global
analysis and the equations used for assessing the strength of individual components are not
exact.
In the case of global structural analysis, the true properties of the materials and components
often deviate from the idealisations on which the methods are based. Without exception, all
practical structural systems exhibit behaviour that (to a certain extent) is nonlinear and dynamic,
and have properties that are time-dependent, strain-rate dependent and spatially non-uniform.
Furthermore, most practical structures are statically indeterminate and contain high levels of
residual forces (and hence stresses) resulting from the particular fabrication and installation
sequence adopted; in addition they often contain so-called non-structural components which are
normally ignored in the analysis, but which often contribute in a significant way, particularly to
stiffness. These differences between real and predicted behaviour can be termed global analysis
model uncertainty. In general, this is extremely difficult to quantify. Estimates of the
magnitude of this type of model uncertainty can be obtained by comparisons using more refined
analysis tools and sensitivity studies, or by full-scale physical testing.
As far as individual components are concerned, the design equations given in Codes of Practice are generally chosen to be conservative, but there are often large variations in the ratio of real to predicted behaviour, even when the individual parameters in the equations are known precisely (e.g. Poisson's ratio).
The variability in load and strength parameters (including model uncertainty) arising from
physical variability and inadequacies in modelling are allowed for in deterministic design and
assessment procedures by an appropriate choice of safety factors and by an appropriate degree
of bias in the Code design equations. In probabilistic methods the variability in the basic design
variables, including model uncertainty, is taken into account directly in the probabilistic
modelling of the quantities.
Following on from the definition of Level 1 methods in Section 3.4, methods of structural
reliability can be divided into two broad classes. From [27] these are:
Level 2: Methods involving certain approximate iterative calculation procedures to obtain an
approximation to the failure probability of a structure or structural system, generally
requiring an idealisation of the failure domain and often associated with a simplified
representation of the joint probability distribution of the variables.
Level 3: Methods in which calculations are made to determine the exact probability of
failure for a structure or structural component, making use of a full probabilistic
description of the joint occurrence of the various quantities which affect the response
of the structure and taking account of the true nature of the failure domain.
Typically, Level 2 methods use two values to describe each uncertain variable, i.e. mean and
variance; this may be supplemented with a measure of correlation between the variables, i.e.
covariance.
Level 3 methods include numerical integration, approximate analytical methods such as first-
and second-order methods, and Monte Carlo simulation methods; these are discussed further in
Chapter 5.
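As a minimal sketch of the Monte Carlo approach, the failure probability for the simplest limit state g = R - S can be estimated by sampling assumed distributions; the normal distributions and parameter values below are hypothetical.

    import random

    random.seed(1)
    n, failures = 200_000, 0
    for _ in range(n):
        r = random.gauss(300.0, 30.0)    # resistance R (assumed normal)
        s = random.gauss(180.0, 40.0)    # load effect S (assumed normal)
        if r - s < 0.0:                  # limit state g = R - S violated
            failures += 1
    print(failures / n)                  # crude Pf estimate (about 8e-3 here)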
For completeness, Level 4 methods, as defined in [20], are:
Level 4: Methods that compare a structural prospect with a reference prospect according to
the principles of engineering economic analysis under uncertainty. Such decision
analysis considers costs and benefits of construction, maintenance, repair, and
consequences of failure.
3.5.1 UK Safety Legislation
The framework for UK Safety Legislation is provided by the Health and Safety at Work Act,
1974 [39].
Prior to 1974, legislation had been prescriptive, that is, it set down what had to be inspected and how often. Modern legislation is based on the concepts of risk assessment and goal setting: it puts the emphasis on a safe place of work, gives guidance on how this can be achieved, and leaves the plant owner to assess the risk and implement a safety programme. The main aspect of a goal-setting approach is that legislation takes the form of setting objectives, leaving the way they are met to the plant owner.
The Act supports the Control of Major Accident Hazard Regulations (COMAH) [40], which
came into force in April 1999. The COMAH Regulations require that all operators of sites
containing major hazards, such as refineries, chemical works, etc, introduce and fully document
measures for avoiding major accidents and limiting their consequences to people and the
environment. A policy for preventing major accidents (a major accident prevention policy or
MAPP) must be developed. The regulations implement the Seveso II Directive issued by the
European Community in 1996, and provide a more integrated approach than the earlier UK
Control of Industrial Major Accident Hazards Regulations (CIMAH) they replace.
Further regulations for specific types of installation are given by various Statutory Instruments.
For pressure systems, the primary document of UK safety legislation is:
The Pressure Systems Safety Regulations. SI 2000 No 128 [41]
Further legislation for the manufacture, supply and import of simple pressure vessels is given
by:
The Simple Pressure Vessels (Safety) Regulations 1991. SI 1991 No 2749 [42], and Amendment SI 1994 No 3098 [43]
For pipelines, the primary document of UK safety legislation governing the design, installation,
operation and maintenance is:
The Pipelines Safety Regulations. SI 1996 No 825 [44]
The four primary documents of UK safety legislation governing the design, selection and
structural maintenance of the various parts of offshore structures are:
The Offshore Installations (Safety Case) Regulations 1992. SI 1992/2885 [45]
The Offshore Installations (Prevention of Fire and Explosions, and Emergency
Response) Regulations 1995. SI 1995/743 [46]
The Offshore Installations and Pipeline Works (Management and Administration)
Regulations 1995. SI 1995/738 [47]
The Offshore Installations and Wells (Design and Construction etc) Regulations 1996.
SI 1996/913 [48].
These Safety Case regulations make a number of demands on structural engineers:
- need for demonstration of safety against identified hazards. These demonstrations are required to show that risks to people from major accidents are controlled to a level that is as low as reasonably practicable (ALARP);
- risk-based demonstration of control of natural hazards. Design criteria should be examined until the ALARP criterion has been satisfied;
- accidental hazards have to be explicitly assessed. Acceptance criteria can accept some level of temporary disablement;
- identification and independent verification of safety critical elements;
- setting of performance standards. It has not been usual to set these for integrity control to hazards; typically reliance has been based on design to prescriptive standards. At present there is no suitable prescription and hence new thinking is required for:
  - natural hazards (environmental, earthquake): the control is provided by structural integrity alone and hence performance standards are required to be set to the ALARP principle;
  - accidental hazards (fire, explosion, third party activity): the control is provided by ranked measures from elimination and substitution to the use of protective equipment. The required performance is set by combining performance standards from management controls and those from other systems controlling the hazard. One of these is structural integrity;
  - internal hazards (fatigue, corrosion): performance standards are usually set in specifications and technical standards. It is important to ensure that the safety provided by these technical standards is at a reasonably practicable level.
The outcome of these changes is that compliance with prescriptive standards (where they exist)
is not necessarily equivalent to satisfying the reasonably practicable criterion. Understanding
safety margins is required to comply fully with the changes. This understanding encourages the
use of tools to enable systems safety to be qualitatively, if not quantitatively, assessed.
3.6 TREATMENT OF GROSS OR HUMAN ERROR
Experience in many areas, including building structures [2, 49] and offshore structures [50],
shows that gross error is the dominant cause of structural failure. Understanding of the human
contribution to failures has grown substantially through studies of major accidents. Matousek's work [49], for instance, based on investigation of 800 cases of major damage to building
structures, showed that human errors and gross errors contributed to 75-90% of accidents; the
contribution of failures that can be attributed to causes normally covered by rigorous reliability
analyses was only 10-25%.
A gross error may be defined as a major or fundamental mistake in some aspect of the processes
of planning, design, analysis, construction, use or maintenance of a structure that has the
potential for causing failure [9].
Human errors can be individual acts, and may be:
- Deliberate acts: sharp practice, fraud, theft, sabotage, etc.
- Non-deliberate acts:
  - Obvious: inexperience, negligence
  - Subtle: new material, new structural type, new construction procedure.
Human errors can also be influenced by the company management or culture, which may lead to:

- stress and overwork
- bad practice, poor communication, etc.
3.6.1 Control of Errors in Deterministic Design
In principle, it should be possible to account for human errors or gross errors in design by
increasing the safety factors.
However, it has been widely shown that small adjustments to safety factors are ineffectual in mitigating the effects of human error [9]. The interaction between safety factor and probability of failure is
illustrated by Beeby [51] in Figure 3.1, which was originally based on work by Lewecki for
Eurocode 2 in 1994. The figure shows that where the safety factors are greater than some level
X, the probability of failure is largely independent of the safety factor because the overall
probability of failure is dominated by unforeseen events and gross errors occurring. It is
generally accepted that current practice lies to the right of X, as there is little evidence that the
very different levels of safety in different countries lead to different rates of failure.
[Figure 3.1 plots the overall probability of failure (log scale) against the safety factor: below a value X of the safety factor, failure due to parameter uncertainty dominates and the probability falls as the safety factor increases; above X, failure due to mistakes or unconsidered factors dominates and the curve is essentially flat.]

Figure 3.1 Influence of safety factors on the probability of failure (from [51])
Since increasing safety factors is ineffective in reducing the effects of human errors, reliance
must be placed on control measures to reduce the risks to an acceptable level. It is often
assumed that the frequency of undetected errors is reduced to an acceptable level by inspection,
quality control and quality assurance measures. However, the effectiveness of existing control
measures is itself highly variable and depends to a large extent on the type and severity of error
made, and who is performing the checking.
Given the significance of human errors and unforeseen events, there is a strong reason for
designing the structure to be damage tolerant or robust. Robustness is the ability of a structure
to absorb energy, and is often defined as the ability of a structure to withstand accidents and
unforeseen events without suffering damage disproportionate to the cause.
In addition, some accidental scenarios develop over time and their consequences can be
mitigated to some extent by Event Control measures, e.g. evacuation procedures, water curtains
or sprinklers, etc.
3.6.2 Treatment of Errors in Probabilistic Assessment
In principle, it should be possible to account for human errors or gross errors in probabilistic
assessments by accounting for the uncertainty in the basic variables, or by modifying the
evaluated probability to account for the probability of such errors.
Gross errors and human-based errors are, by their very nature, difficult to deal with. Work has
been undertaken by a number of authors to estimate the probability of a human error in the
design phase; as may be expected because of the wide-ranging causes summarised above, the
results are variable.
However, to date, human errors and gross errors have rarely been taken into account in reliability analysis calculations.
It is clear that a rational solution to structural safety problems cannot be achieved without due
consideration of human error. It is also widely accepted that structural reliability analysis is not
a suitable tool for addressing human errors.
4. STRUCTURAL RELIABILITY THEORY, UNCERTAINTY
MODELLING AND THE INTERPRETATION OF PROBABILITY
4.1 SUMMARY
This Chapter discusses in detail the interpretations that are made of probability and probabilistic
measures, since this understanding is central to the problem. The objective of structural
reliability analysis is introduced, and the types of uncertainty that are considered in reliability
analyses are discussed.
The terms risk, probability, reliability and uncertainty are given different meanings and
interpretations in various sectors of industry and by the public at large; often they are treated as
synonyms. However, much of the confusion in the topic arises from vague language, ill-defined
and inconsistent terminology, and misinterpretation. This Chapter aims to define the terms
more clearly.
4.2 FREQUENTIST VERSUS BAYESIAN INTERPRETATION OF EVALUATED
PROBABILITIES
Philosophers have struggled for centuries over the question of what exactly probability is.
There are two basic philosophical schools in modern theory, one based on a frequentist
interpretation and one based on a Bayesian interpretation or degree of belief. These are also
known as the objective and the subjective interpretation, respectively.
One of the main difficulties, if not the main difficulty, with the use of failure probabilities
evaluated using reliability analysis methods is the interpretation of the result. This is a
particular concern when evaluated failure probabilities are compared or combined with failure
probabilities derived from other sources (e.g. historic failure rate data) in risk assessments.
4.2.1 Frequentist Interpretation
In this philosophy a probability is an objective property of some event.
In terms of failure probability it is the expected probability of failure that is reflected. Clearly, a
structure with an annual failure probability of 0.01 cannot fail by 1%; a structure either fails or
it does not fail. A frequentist interpretation implies that for 1000 nominally identical, but
uncorrelated structures, on average 10 will fail in any year.
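The relative-frequency reading can be illustrated with a minimal simulation sketch: one year of exposure for 1000 nominally identical, uncorrelated structures, each with an assumed annual failure probability of 0.01.

    import random

    random.seed(0)
    # One Bernoulli trial per structure: True if it fails during the year
    failures = sum(random.random() < 0.01 for _ in range(1000))
    print(failures)   # close to the expected value of 10 failures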
Probability is related to relative frequency because of its early associations with games of
chance. Gambling can be traced to the earliest civilisations, and has been extensively studied by
many great mathematicians including Galileo. However, gambling is not the sole motivation
behind the development of probability theory. Bernoulli in 1713 developed an important theorem, the law of large numbers, which effectively states that if an experiment is performed a large number of times then the relative frequency of an event is close to the probability of that event. This concept of statistical regularity is the foundation of much of today's insurance underwriting business, which bases much of its assessment of premium costs on actuarial statistics of accident rates.
The frequentist interpretation of probabilities can be further subdivided into:
- a priori probabilities, e.g. games of chance (poker, roulette, baccarat, etc.), where the odds of an outcome can be derived exactly from a knowledge of the system;
- empirical probabilities, where probabilities, or rather statistics, are obtained from past data. Complete knowledge of the system or sample space rarely exists, and such statistics must often be determined from sample data.
4.2.2 Bayesian or Degree-of-Belief Interpretation
In this philosophy a probability is considered as a subjective degree of belief, for example, in
the chances of a particular event occurring. This probability, or degree-of-belief, depends on
the amount of information available. It is often referred to as the Bayesian school or Bayesian
philosophy after Thomas Bayes, a Presbyterian church minister.
Bayesian probabilities are also called subjective, and are sometimes termed credibilities.
The knowledge about some existing unique object (or event) may be more or less uncertain. It
may range from the purely subjective (i.e. professional judgement with no qualification) to a
classical case that reflects the degree to which available information supports a given
assumption. Such uncertainty may conveniently be modelled in probabilistic terms. This type
of model does not describe properties of the object (or event), but properties of the knowledge
about the object (or event).
Laplace observed that probability is relative 'in part to (our) ignorance, in part to our knowledge'.
Thus, probability can be interpreted as conditional. This can be illustrated by considering the question: What is the probability that today is Mr Y's birthday? The frequentist's answer is 1/365, or 1/365.25 to account for leap years. However, there are many other answers, depending on the knowledge of the observer. With knowledge of Mr Y's personality one could consult an astrologist; better would be to gather the opinions of a number of expert astrologists, whose views could be used to weight particular Zodiac signs. Mr Y's answer would be 0 or 1, since it either is or it isn't his birthday.
For this reason, rather than considering the probability of event X, a better definition is to
consider the probability of X given (conditional on) all the conditions and information available
or assumed, A. Symbolically, this is written P(X|A) rather than P(X).
The Bayesian interpretation has proved to be the most fruitful approach for structural reliability
analysis, as it is possible to introduce model uncertainties and statistical uncertainties into the
analysis. This degree-of-belief interpretation is why these probabilities are referred to as
notional.
With the Bayesian interpretation the evaluated safety measure, or reliability, changes with the
amount and quality of the information on which it is based. Thus, rather than being a scientific
approach aiming at a description of the truth of nature, structural reliability theory is
considered to be a comparative tool; one of its main uses is in decision analysis.
With this approach the reliability modelling should be developed to be sufficiently rich in
formal elements and rules to allow for the inclusion of all types of relevant information where it
makes sense to let it affect the decisions. However, it is important that it is not so rich that the user of the reliability theory has to make almost non-verifiable property assignments or modelling assumptions to which the design decisions are unreasonably sensitive.
4.3 RELATIONSHIP BETWEEN RISK ANALYSIS AND RELIABILITY ANALYSIS
Extensive use is currently being made of risk analysis in many areas, and its use is encouraged in the development of safety cases for HSE. In this document, the term risk analysis is used in a broad context to refer to economic value analysis and decision making.
The risk analysis approaches often used in the safety assessment of process or plant operations
are generally referred to as quantitative risk assessments (QRA). QRA can be defined as the
formal and systematic approach of identifying potentially hazardous events, and estimating the
likelihood and consequences to people, the environment and resources due to accidents
developing from these events. (Risk assessment is discussed further in Section 5.12).
A number of definitions of risk may be found in the literature. Of relevance here is a definition
based on a function of the probability of failure and the consequences of failure.
Conventionally, the function is defined as the product of the two terms. Thus,

    Risk = function(P_f, Consequences) = P_f × Consequences    (4.1)
The probability of failure or frequency of an event is expressed as "events per time", usually per
year. Consequences can be expressed as the number of people affected (injured or killed),
amount of leak (or area affected), or money lost. Consequences are expressed "per event".
Risks can only be compared if they are based on the same consequence measures. Thus, it may
be necessary to compute risks using two or more consequence measures. In some industries
monetary value is attached to lives lost, but this is not particularly satisfactory.
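As a minimal sketch of Eqn (4.1), the following computes risks for two hypothetical events, keeping the two consequence measures separate since risks in different units cannot be compared directly.

    # event: (frequency per year, fatalities per event, cost per event in GBP)
    # All figures are hypothetical examples.
    events = {
        "small leak": (1e-2, 0.0, 5e4),
        "rupture":    (1e-5, 2.0, 5e7),
    }
    for name, (freq, fatalities, cost) in events.items():
        print(name, freq * fatalities, "fatalities/yr;", freq * cost, "GBP/yr")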
For pressure vessels, pipeline systems, and bridges, offshore structures, and many types of
buildings, the probability of partial or complete failure of the structural integrity during the
service life is one input, often the key input, into a risk assessment.
However, reliability techniques, or rather structural reliability analysis (SRA) techniques to
determine structural failure probabilities, as discussed in this Report, differ from the other
techniques and input to typical QRAs.
Primarily, structural reliability analysis (as discussed further in Section 5) uses probability
distributions to model the uncertainty in the basic engineering variables influencing the problem
in order to synthesise the probability of component or system failure. By contrast, the main input to process or plant safety assessments is failure rates, generally based on actuarial statistics, for individual components (i.e. pumps, valves and switches) of the system.
Statistics and probability can be, and often are, confused. Probability applies to events that
have happened, may be occurring, or may yet occur. Statistics, on the other hand, applies only
to events that have happened.
The use of actuarial statistics is not generally practicable for structures since individual
structural components rarely fail, and knowledge of overall frequencies is usually of little value
because of the non-homogeneous nature of populations of components and the loads that are
applied to them.
Thus increasingly, the results of structural reliability analysis are being combined with failure
rates for process operations in order to assess the failure probability of complete systems;
structural failure is often only part of the total failure probability. Because of the differences in
the input data and evaluation methods, results from the two methods are not necessarily
consistent.
The important point is that risk analysis and structural reliability analysis are not fundamentally different, and in future these techniques should become fully integrated.
4.4 OBJECTIVE OF STRUCTURAL RELIABILITY ANALYSIS
The objective of structural reliability analysis is to determine the probability of an event
occurring during a specified reference period.
There are two important points to note.
Firstly, the probability refers to the occurrence of an event. Events are usually defined in terms of the exceedence of a criterion or limit state, or the failure of a component or system, P_f; but they may also be defined in terms of non-exceedence, or non-failure or safety, i.e. 1 - P_f. For structural reliability analysis, limit states are generally defined for ultimate failure, but other limit states may also be defined, including serviceability or operability criteria. A failure event may refer to the:

- failure of a component(1) in a particular failure mode,
- failure of a component from any of a number of specified failure modes,
- failure of a group or system of components in a particular failure mode,
- sequential failure of a number of components,
- failure of a complete structural system(2).
Secondly, the failure probability relates to the event occurring within a specified reference period. Where life safety is being considered, or where comparisons are made with failure rates for other types of events or hazards, the failure probability may be evaluated on an annual basis.
Where the failure probability is being evaluated for economic concerns, e.g. risk-based cost-
benefit assessments, lifetime reliabilities may be evaluated for some defined period, say 20
years. For some types of event, typically temporary conditions, a reference period may be
implied by the (short) duration of the event, e.g. a hydrotest. It is particularly important to
ascertain and understand the significance of the reference period when comparing evaluated
probabilities with targets.
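As a worked illustration of the significance of the reference period, a minimal sketch follows converting an assumed annual failure probability to a 20-year lifetime value, under the simplifying assumption that successive years are independent.

    # Lifetime failure probability over n independent years (illustrative):
    #   Pf_life = 1 - (1 - Pf_annual) ** n
    pf_annual, years = 1e-4, 20          # hypothetical annual Pf and design life
    pf_life = 1.0 - (1.0 - pf_annual) ** years
    print(pf_life)                       # about 2.0e-3, roughly years * pf_annual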
(1) The term component is used rather loosely, and may be a single structural item, i.e. a stiffener or weld; it may be a collection of items, i.e. a stiffened panel; or a complete pipeline system or pressure vessel may be treated as a single component.

(2) A system, in reliability terminology, is the combination of a number of individual failure events for components and/or failure modes. Failure events may be combined in series or parallel.
The first point may be obvious, but it is vital to grasp its significance in order to understand reliability statements. Often in reliability analysis reports it may be concluded that the probability of failure of a structure is y × 10^-z (per year). However, only one mode of failure may have been
considered in the analysis. Other load effects may not have been considered, e.g. membrane action, bending, torsion, etc. Other influences and factors that may affect failure may not have been
considered, e.g. fatigue, corrosion, damage, etc. In particular, human errors are not included,
e.g. a gross disregard of operating procedures. The latter point is one of the reasons that
probabilities evaluated using structural reliability techniques are often referred to as notional.
Usually, the theoretically calculated failure probability encompasses only those risks and
uncertainties that can be controlled or assessed by engineering design and analysis. Additional
risks due to gross and human errors are not explicitly designed for, and are not included.
4.5 TYPES OF UNCERTAINTY
A rigorous structural reliability assessment involves modelling all of the sources of uncertainty
that may affect failure of the component or system. This clearly involves modelling all of the
fundamental quantities entering the problem, and also the uncertainties that arise from lack of
knowledge and idealised modelling. These terms are referred to as basic variables. Basic
variables representing common engineering quantities include: diameter, wall thickness,
material and contents density, yield stress, maximum operating pressure, maximum operating
temperature, corrosion rate, etc.
The sources of uncertainty that are relevant to structural reliability analysis can be classified primarily into two categories: those that are a function of physical uncertainty or randomness (aleatoric uncertainties), and those that are a function of understanding or knowledge (epistemic uncertainties). These can be subdivided further; Figure 4.1 shows one such categorisation.
[Figure 4.1 categorises the types of uncertainty: aleatoric or inherent uncertainty (Type I), subdivided into (i) inherent uncertainty in time, (ii) inherent uncertainty in space, and (iii) measurement uncertainty; and epistemic uncertainty (Type II), subdivided into (i) statistical uncertainty (comprising parameter uncertainty and distribution type uncertainty) and (ii) model uncertainty.]
Figure 4.1 Types of uncertainty
4.5.1 Aleatoric Uncertainties
Aleatoric uncertainty (originating from the Latin aleator or aleatorius meaning dice thrower)
refers to the natural randomness associated with an uncertain quantity, and is often termed Type
I uncertainty in reliability analysis. It can be further subdivided into three categories:
i. Inherent or intrinsic uncertainty in time
In reliability analysis variables that change with time are often referred to as
processes, stochastic processes or stochastic variables. Examples include the
fluctuations of pressure or temperature in an operating pressure system, wind
velocity, corrosion rate, etc.
ii. The inherent, intrinsic or physical uncertainty of an object or property in space
Examples include the natural variations of the strength parameters from one
specimen to another, the spatial variation in soil strength, and the incidence of
corrosion in a pipeline.
iii. The inherent uncertainty associated with the measuring device
With modern techniques, and properly calibrated instruments, this source of
uncertainty should be small, but it can never be fully eliminated.
Aleatoric uncertainty is quantified through the collection and analysis of data. The observed
data may be fitted by theoretical distributions, and the probabilistic modelling may be
interpreted in the relative frequency sense as discussed in Section 4.2.
Because this uncertainty is inherent it cannot be reduced, except through manipulation of the
underlying processes which give rise to it in the first place (for example through the imposition
of more stringent quality control procedures during fabrication or manufacture).
4.5.2 Epistemic Uncertainties
Epistemic uncertainty (originating from the Greek episteme, meaning knowledge) reflects a lack
of knowledge or information about a quantity, and is often termed Type II uncertainty in
reliability analysis. Epistemic uncertainty can also be further subdivided into:
i. Model uncertainty
This is due to the simplifications and idealisations necessary to model the
behaviour in a reliability analysis, or to an inadequate understanding of the
physical causes and effects.
ii. Statistical uncertainty
This is solely due to a shortage of information, and originates from a lack of
sufficiently large samples of input data.
Model uncertainty arises because many of the engineering models that are used to describe
natural phenomena, to analyse stresses and to predict failure of components are imperfect.
Models are often based on idealised assumptions, they may be based on empirical fits to test
results or observed behaviour, and variables of lesser importance may be omitted for reasons of
efficiency (or ignorance). In many components and structures, model uncertainties have a large
effect on structural reliability and should not be ignored.
It is very difficult to quantify model uncertainty adequately. The errors in the model may be
known relative to more elaborate models, but at any level of modelling there are errors relative
to the unknown reality. Model uncertainty is often assessed on the basis of experimental test
results, but this too has a number of limitations. Tests themselves are idealisations or
simplifications of reality, and are affected by scale, boundary effects, load rates, measurement
errors, etc. Tests are expensive, and the data need to be carefully screened to ensure that they
are consistent. Ideally, the test data should cover uniformly the full range of the applicability of
the model.
Model uncertainty may be expressed in terms of the probability distribution of a variable $X_m$, where

$$X_m = \frac{\text{actual or measured strength (response)}}{\text{strength (response) predicted using the model}}$$
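As an illustration, the statistics of $X_m$ can be estimated directly from paired test and prediction data. The following minimal Python sketch uses wholly hypothetical numbers; the data and variable names are assumptions for illustration only, not taken from any particular test programme.

```python
import numpy as np

# Hypothetical paired data: measured strengths vs model predictions
measured  = np.array([10.2, 11.5, 9.8, 10.9, 11.1, 10.4])
predicted = np.array([9.9, 10.8, 9.9, 10.5, 11.4, 10.0])

Xm = measured / predicted                # model uncertainty realisations
mean = Xm.mean()
cov = Xm.std(ddof=1) / mean              # coefficient of variation
print(f"model uncertainty X_m: mean = {mean:.3f}, CoV = {cov:.3f}")
```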
Statistical uncertainty can be considered to arise from:
Parameter uncertainty
This occurs when the parameters of a distribution are determined from a limited set of
data. The smaller the data set the larger the parameter uncertainty.
Distribution type uncertainty
This uncertainty arises from the choice of a theoretical distribution fitted to empirical
data. It is a particular problem when deriving extreme value distributions.
Often it may not be possible to differentiate between the two types of statistical uncertainty in
practice, since with limited data both the parameters and distribution type may be uncertain.
Statistical uncertainty can be divided further into statistical uncertainty due to variations in
time and statistical uncertainty due to variations in space.
Statistical uncertainty can be modelled by a variety of techniques, including classical statistical
techniques, Bayesian methods, and Bootstrapping³ [52].
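As a rough illustration of the bootstrapping technique described in footnote 3, the following Python sketch fits a normal distribution to a hypothetical sample and assesses the parameter uncertainty from refits to simulated samples; the sample size, seed and distribution choice are assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(350.0, 25.0, size=30)   # hypothetical yield stress sample

# Parametric bootstrap: refit to samples drawn from the fitted distribution
boot_means, boot_sds = [], []
for _ in range(2000):
    sample = rng.normal(data.mean(), data.std(ddof=1), size=data.size)
    boot_means.append(sample.mean())
    boot_sds.append(sample.std(ddof=1))

print(f"mean: {np.mean(boot_means):.1f} +/- {np.std(boot_means):.1f}")
print(f"s.d.: {np.mean(boot_sds):.1f} +/- {np.std(boot_sds):.1f}")
```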
Because epistemic uncertainty is associated with a lack of knowledge and/or information it
follows that it can be reduced through an increase in knowledge. In general there are three ways
to increase knowledge:
Gathering data
Research
Expert judgement.
Statistical uncertainty associated with variations of variables in time can, in principle, be
reduced by observing the phenomena for a longer period. Statistical uncertainty associated with
the variability of properties in space can be reduced by taking more measurements or carrying
out further tests. Model uncertainty can be reduced by further research and testing of the
phenomena, and by improved modelling.
The modelling of a basic variable can be updated with the help of expert judgement. This may be informal, on the basis of one or two specialists' opinions, or a number of techniques exist to solicit, collate and analyse the views of a circle of experts (the so-called Delphi technique).
Expert opinion can be incorporated into the reliability analysis using Bayesian methods.
Epistemic uncertainties influence the confidence in the evaluated failure probability; that is, they add to the uncertainty in the probability of failure. A problem with low epistemic uncertainty leads to a failure probability with a high degree of confidence that tends towards the 'true' failure probability.
It is important to distinguish between uncertainty and ignorance. Ignorance reflects a lack of awareness of factors influencing the problem or issue, and is not, and by its very nature cannot be, included in a reliability analysis. Ignorance is a well-recognised weakness in QRA, where it
is manifested by an incomplete identification of hazards. Ignorance of this sort is not often
recognised in reliability analysis. In reliability analysis failure events are formulated
mathematically; in the early development of the reliability analysis methods applications were
theoretical and great care was taken over formal definitions. With the wider use of probabilistic
analysis the formal restrictions are not rigorously applied, and a lack of awareness or ignorance
of all the factors influencing failure of a system can become more significant. This is
particularly important when the results of reliability analyses are used to assess risk, and
particularly when they are combined with risks assessed from historical failure rate data and
other sources.
3 Bootstrapping is a technique that uses Monte Carlo simulation to generate samples of data from
a theoretical distribution fitted to the original data set; an assessment of the statistical
uncertainty can then be made from distribution fits to the generated samples.
4.6 STRUCTURAL RELIABILITY THEORY
Reliability is defined as:

$$\text{Reliability} = 1.0 - P_f \tag{4.2}$$

where $P_f$ is the probability of failure of an event.

For a practical structure, with an annual failure probability of say $10^{-4}$, the reliability is 0.9999.
For a more amenable format, reliability is usually expressed in terms of a reliability index (sometimes termed safety index), $\beta$, i.e.

$$\beta = \Phi^{-1}(1.0 - P_f) = -\Phi^{-1}(P_f) \tag{4.3}$$

where $\Phi^{-1}(\cdot)$ is the inverse of the standard normal distribution function. (The standard normal function has zero mean and unit standard deviation.)
Values of $\beta$ for typical values of $P_f$ are shown below:

P_f   10⁻¹   10⁻²   10⁻³   10⁻⁴   10⁻⁵   10⁻⁶   10⁻⁷   10⁻⁸   10⁻⁹
β     1.28   2.33   3.09   3.72   4.26   4.75   5.20   5.61   6.00
Other indices are sometimes used, usually for simplicity, e.g. $\log_{10}(P_f)$.
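The mapping between $P_f$ and $\beta$ in the table above can be reproduced with a one-line calculation of Eqn (4.3); a minimal Python sketch is:

```python
from scipy.stats import norm

# Eqn (4.3): beta = -Phi^{-1}(Pf); reproduces the table above
for exponent in range(1, 10):
    pf = 10.0 ** -exponent
    print(f"Pf = 1e-{exponent}:  beta = {-norm.ppf(pf):.2f}")
```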
$\beta$ has a geometric interpretation and was used in one of the earliest, and still most widely used, definitions of reliability developed by Hasofer & Lind in 1974 [53]; see Section 5.8.2.

If the probability distribution of the margin of safety, Z, for a particular problem is considered, $\beta$ may under some circumstances also be interpreted as the number of standard deviations ($\sigma_Z$) of the mean of the safety margin ($\mu_Z$) from zero, as illustrated in Figure 4.2. The failure probability $P_f$ is shown shaded in the figure. This is the basis of a definition of the reliability index proposed by Cornell in 1969 (however, this definition should no longer be used since it varies with the form of the failure function).
Figure 4.2 Illustration of safety margin distribution and reliability index, β
The probability of failure is defined mathematically as a multi-dimensional integral:

$$P_f = P[Z \le 0] = \int_{Z \le 0} f_X(\mathbf{x})\, d\mathbf{x} \tag{4.4}$$

where Z is the failure criterion or failure function (sometimes termed limit state function or performance function) for the event, and $f_X(\mathbf{x})$ is the probability density function for the basic variables, X.
A particular realisation of the failure function, Z, that is for a particular structure, is termed the
margin of safety.
As discussed in Section 5.6, the simplest failure function is of the form:
$$Z = \text{Resistance} - \text{Load} \tag{4.5}$$

Failure occurs when $Z \le 0$, or Load > Resistance.
If the uncertainty in the resistance is modelled by a single variable R, and the load by a single variable S, and if the two variables are independent, i.e. uncorrelated, then the joint probability density function of the basic variables can be written as:

$$f_X(\mathbf{x}) = f_{R,S}(r, s) = f_R(r)\, f_S(s) \tag{4.6}$$
In many reliability texts and papers the probability density function for the load and resistance
are illustrated together on a single axis in a figure similar to Figure 4.3.
Figure 4.3 Conventional illustration of probability of failure
The probability of failure is not represented by the area of the over-lapped curves (as is often
incorrectly portrayed), but can be developed as follows.
Since the two variables are independent, Eqn (4.4) can be written as a double integral.
$$P_f = \int_{Z \le 0} f_X(\mathbf{x})\, d\mathbf{x} = \iint_{r \le s} f_R(r)\, f_S(s)\, dr\, ds \tag{4.7}$$
Noting that the cumulative distribution function for a variable is given by:
$$F_X(x) = P[X \le x] = \int_{-\infty}^{x} f_X(y)\, dy \tag{4.8}$$
Eqn (4.7) can be expressed as a single integral, or convolution integral:
$$P_f = P[R - S \le 0] = \int_{-\infty}^{\infty} F_R(y)\, f_S(y)\, dy \tag{4.9a}$$

or alternatively

$$P_f = P[R - S \le 0] = \int_{-\infty}^{\infty} f_R(y)\, \{1 - F_S(y)\}\, dy \tag{4.9b}$$
The integrand in Eqn (4.9a) or (4.9b) is illustrated in Figure 4.3 (not to scale), and the shaded area below the curve represents the failure probability.
Also illustrated in Figure 4.3 is the mean safety margin, which in this simple case is the
difference between the mean resistance and the mean load. The design safety margin is less
than the mean safety margin because of the use of characteristic or nominal resistance
parameters (based on lower fractiles) and characteristic or nominal loading parameters (based
on upper fractiles), and by the use of partial factors which usually reduce the nominal resistance
and increase the nominal load. The ratio of the mean safety margin to the design safety margin
is the factor of safety of the design.
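For the simple R − S case, the convolution integral of Eqn (4.9a) can be evaluated numerically and checked against the closed-form result for two independent normal variables. The following Python sketch uses hypothetical means and standard deviations chosen purely for illustration.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical independent normal resistance R and load S
mu_R, sd_R = 350.0, 25.0
mu_S, sd_S = 250.0, 30.0

# Numerical evaluation of the convolution integral, Eqn (4.9a)
y = np.linspace(100.0, 500.0, 20001)
integrand = norm.cdf(y, mu_R, sd_R) * norm.pdf(y, mu_S, sd_S)
pf = np.trapz(integrand, y)

# Closed-form check: R - S is normal, so Pf = Phi(-beta)
beta = (mu_R - mu_S) / np.hypot(sd_R, sd_S)
print(f"convolution Pf = {pf:.3e}, exact Pf = {norm.cdf(-beta):.3e}")
```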
An alternative representation to Figure 4.3 is to plot each variable on a separate axis, as shown
in Figure 4.4. The two basic variables are represented on the horizontal axes, and the
probability density is represented as the vertical axis, and the (joint) probability density
function appears as a hill. The distributions, which represent the uncertainty in the basic
variables, are shown on the walls of the plot.
Also shown on the figure is the failure region where the failure function is less than zero. The
probability of failure is represented as the volume of the hill, i.e. the probability density, which
is within the failure region defined by the failure function.
In general, when there are more than two basic random variables, the figure should be
considered in multi-dimensional hyperspace.
Figure 4.4 Illustration of probability of failure
5. METHODS OF PROBABILISTIC ANALYSIS
5.1 SUMMARY
This Chapter discusses methods of Quantitative Risk Assessment (QRA) and Structural
Reliability Analysis (SRA).
The purpose of this Chapter is to introduce the basic procedure for undertaking a structural
reliability analysis. The basic methodologies for evaluating reliability are also discussed.
Probabilistic analysis is a wide topic, about which considerable material has been published. Of
necessity the information presented in this Chapter is in summary form. There are a number of
textbooks on the subject that describe the material in more detail. Thoft-Christensen & Baker
[9] still remains one of the best introductory books on the subject (although a little dated in
parts). Other textbooks include: Ang & Tang [54], Ditlevsen [55], Ditlevsen & Madsen [1],
Madsen, Krenk & Lind [56], Melchers [57], Thoft-Christensen & Murotsu [58]. The DNV
Classification Note 30.6 [20] provides guidance on the practical application of reliability
methods, and ISO 2394 [12] gives more formal information on the general principles of
reliability. A number of international and specialist conferences including:
ICOSSAR (International Conference on Structural Safety and Reliability),
ESREL (European Safety and Reliability Conference),
OMAE (Offshore Mechanics and Arctic Engineering),
ISOPE (International Society for Offshore and Polar Engineering),
BOSS (Behaviour of Offshore Structures),
Conference on Risk & Reliability & Limit States in Pipeline Design & Operations, etc.
and journals including:
Structural Safety,
ASME (American Society of Mechanical Engineers),
ASCE(American Society of Civil Engineers),
Computers and Structures, etc.
are also either dedicated to structural reliability or regularly feature papers on the subject.
5.2 STRUCTURAL RELIABILITY ANALYSIS PROCEDURE
The structural reliability analysis procedure is outlined by the following steps, and these are
discussed further in this Chapter:
i. Identify all significant modes of failure of the structure or operation under
consideration, and define failure events.
ii. Formulate a failure criterion or failure function for each failure event.
iii. Identify the sources of uncertainty influencing the failure of the events, model the basic
variables and parameters in the failure functions and specify their probability
distributions.
iv. Calculate the probability of failure or reliability for each failure event, and combine
their probabilities where necessary to evaluate the failure probability or reliability of
the structural system.
v. Consider the sensitivity of the reliability results to the input, i.e. basic variables and
parameters, and assess whether the beta-point (or design point) values are physically
feasible.
vi. Assess whether the evaluated reliability is sufficient by comparison with a target.
5.3 HAZARD ANALYSIS/FAILURE MODE AND EFFECT ANALYSIS
FMEA uses a bottom-up approach (as opposed to the top-down approach of fault trees). The
technique starts at the lowest level by postulating a failure mode or mechanism for each
component, and then investigates the consequences for the whole system.
A failure mechanism is defined as the manner in which the structure responds to a hazard.
(Note that this is a wider definition than the narrow definition used within the theory of
plasticity.) A combination of hazards and failure mechanisms leads, with a given probability, to
failure of the structure or its components. In assessing the safety of a structure it is crucial not
to forget one of the major hazards or failure mechanisms. The mere fact that one lists the
various phenomena is often more important than the complete analysis that follows.
Aids in preparing an inventory of causes of failure are data banks, literature studies, studies of
actual instances of damage, brainstorm sessions, experience with similar structures, etc. For
commonly encountered structures most hazards and failure mechanisms are recorded in
guidelines, manuals and Codes of Practice.
5.4 FAULT/EVENT TREE ANALYSIS
In reliability analysis it is very important to consider the system as a whole. Systems are
composed of many components (structural, mechanical, electrical, or even procedural), each of
which may be prone to hazards and failure mechanisms. Malfunction of some component may
in turn pose a hazard to some other component. The malfunctioning of one component may
sometimes lead directly to failure of the system, i.e. a series arrangement or 'weakest link'; in other cases components may compensate for one another, i.e. a parallel arrangement.
A useful aid in establishing an ordered pattern in many hazards, failure mechanisms and
components is provided by diagrams such as fault trees and event (or failure) trees.
5.4.1 Event Trees
An event tree is a graphical logic method for identifying the various possible outcomes of a
given event, known as an initiating event. A simple example for failure of the corrosion
protection (CP) system in a pipeline possibly leading to pipeline rupture is illustrated in Figure
5.1. The response of the system from the occurrence of the initiating event until its final
consequences is determined by the operation or failure (non-operation) of various components
or items. In process engineering event trees are often used to model the reliability of safety
systems designed to prevent an initiating event turning into a catastrophic event.
Event trees help to identify those events which are most critical and have the greatest impact
upon system failure. However, they can quickly become unwieldy for long sequences of tasks.
Figure 5.1 Example of a simple Event Tree (failure of the CP system leading either to little/no external corrosion and no pipeline rupture, or to severe external corrosion and pipeline rupture)
5.4.2 Fault Trees
A fault tree is based on the opposite procedure to an event tree; starting from some failure event,
it is analysed how this may have been caused. Fault trees are constructed in a sequence of logic
gates descending through subsidiary events resulting from basic events at the bottom of the tree.
In drawing up a fault tree, symbols such as AND gates and OR gates are used. The AND gate
corresponds to a parallel arrangement and the OR gate to a series arrangement. A simple
example for part of a Fault Tree for pipeline rupture is illustrated in Figure 5.2.
The most important events, i.e. those events that contribute most to the probability of system
failure, may be determined from sensitivity analyses once the failure probabilities for the basic
event have been quantified or evaluated.
The main disadvantage of fault trees is that they can be difficult to construct, and considerable
care is required to ensure that the logic is correct.
Figure 5.2 Example of a simple Fault Tree (top event: rupture of pipeline, resulting from high pressure AND corrosion of the pipe section; internal corrosion OR external corrosion, the latter from failure of the CP system AND corrosive soil/conditions)
5.5 STRUCTURAL SYSTEM ANALYSIS
The failure of a structure can rarely be defined by a single failure function for one failure mode
of a single component. In practice, because of structural complexity and redundancy, structural
failure involves the sequential failure of a number of components, each component may fail
from a number of potential failure modes, and the structure may fail along any of a very large
number of potential failure paths.
Commercial packages (such as PROBAN and STRUREL) contain powerful techniques for
modelling and analysing very complex systems of parallel and series event chains.
5.5.1 Series System
A series system fails when the weakest link fails. The most common uses of a series system are
to model the multiple failure paths of a structure, or the multiple failure modes of a component.
For example, a corroded pipeline section may fail by bursting, OR leaking, OR any other failure
mode associated with an uncorroded section.
The probability of failure of a series system may be evaluated from the union of the
probabilities for the individual events.
5.5.2 Parallel System
A parallel system fails when all of the links fail. The most common use of a parallel system is
for modelling the sequential failure of components in a single failure path leading to structural
failure. For example, complete collapse of a building may occur because of failure of one
component (column, say), followed by a redistribution of load and failure of the next
component, followed by a further redistribution of load and failure of the next component, etc.
Pressure systems are far less redundant, and do not usually act as parallel systems.
The probability of failure of a parallel system may be evaluated from the intersection of the
probabilities for the individual events.
5.6 FAILURE FUNCTION MODELLING
The failure function, Z, defines the event mathematically.
The simplest failure function is of the form⁴:

$$Z = \text{Resistance} - \text{Load} \tag{5.1}$$

Failure occurs when $Z \le 0$, or Load > Resistance.
Failure functions may be defined using limit state criteria or design equations in codes of practice (ideally with the safety factors removed), FE analysis⁵ can be used, or they may be defined from more fundamental principles. As a general rule for use in a reliability analysis, the most accurate approach should be used wherever possible, since (as discussed below) this reduces the uncertainty (Type II) in the problem and leads to more accurate reliability estimates.

For a pipeline system, an example could include:

$$Z = \text{Bursting resistance} - \text{internal pressure} \tag{5.2}$$
The failure function could be defined for the complete system, or for individual failure modes,
etc.
More complex failure functions include interaction equations for combined axial and bending
stress, or axial and shear stress, etc.
4 Although it is generally not essential, it is good practice to normalise the failure function, and rewrite it as:

$$Z = 1.0 - (\text{Load} / \text{Resistance})$$

This is because when the failure probability is evaluated using iterative techniques, i.e. FORM/SORM, the reliability analysis program will check for convergence to the failure surface. Normalising the function ensures that consistent accuracy is achieved.
5 It is very computer intensive to link FE analysis directly with a reliability analysis program,
since a very large number of iterations is required to evaluate reliability. An alternative
approach is to undertake a number of FE analyses for a range of parameters, and use the results
to estimate the uncertainty in overall component (or system) resistance. Alternatively, a
response surface function can be fitted to the results, and this function can then be used in the
reliability analysis.
5.6.1 The Time Element
As discussed in Section 4.4, structural reliability and failure probability should always be
defined for a specified reference period of time. Two classes of time-dependent problems are
generally considered. These are:
overload failure
cumulative failure.
The analysis of overload failure can be greatly simplified if time-varying resistance effects (i.e.
fatigue and corrosion) are being ignored. Then failure of the structure or structural component
is most likely to occur - if it is going to occur - under the maximum load effect occurring during
the reference period or period of exposure. When overload failure is due to a single load
(action) variable or process, an extreme-type distribution, or a distribution with its mean value
equal to the expected maximum value in the chosen reference period may be used to model the
basic variable. Where there are more than two load variables it is necessary to consider the
combined load process, and generally of primary interest is the maximum of the combined load
process.
By treating the loading in this way the analysis is termed time invariant reliability analysis.
In the case of cumulative failure, due to fatigue, corrosion, etc., the total history of the load up
to the point in question is of importance. Failure may occur solely as the result of cumulative
loading, e.g. the formation of a through-thickness crack due to cyclic fatigue. It may also occur
as a result of a combination of cumulative load and overload, e.g. pipe rupture due to a high
fluctuation in pressure and local corrosion. This is discussed further in Section 5.10.
5.7 BASIC VARIABLE MODELLING
In principle, the aim should be to define the failure function in terms of basic engineering terms,
i.e. length, thickness, yield stress, maximum operating pressure, etc. These are termed the basic
variables. Clearly, this is not always possible, and less basic terms must sometimes be used⁶,⁷.
The choice of basic variables, and whether some should be combined, depends on the types of
data that are available to quantify the uncertainty. Thus, at this point it is important to consider
available data sources, and how and if the variables can be assessed. Clearly, the data source
depends on the variable; typical sources may include, but are not limited to, the following:
In-house company information
Operational records from similar structures/plant
Material data and plate tolerances from steel mills, CORUS, etc.
Geometric tolerances from fabricators/manufacturers
6 Lumping basic engineering terms into combined terms may lead to degradation in accuracy in
the probabilistic modelling. Sensitivity information on the basic sources of the uncertainties
influencing the reliability may also be lost.
7 Convoluting the uncertainty into a loading term and a resistance term offers a quick and useful
means to check the reliability results.
Data collected by BSI, Eurocode and/or ISO working groups and committees that may
be available
Published information in books, technical journals, and conferences, also University
reports and theses.
It is often necessary to screen the data for acceptability and consistency, particularly when data
from different sources are combined. It is also important to ensure that the data are
representative, e.g. are manufacturer-issued data truly representative of the supplied product, or do they only include samples that passed the acceptance criteria? Will steel be supplied by one steel mill, in which case the test data should be relevant to that mill, or will it be obtained from stockists, in which case more general steel data should be used?
Many, if not all, of the basic variables will have some uncertainty associated with their value
(even g, the acceleration due to gravity, varies with elevation and location). In addition, there
may be a number of uncertainties associated with the modelling of the basic phenomena and the
prediction of failure.
All sources of uncertainty should be considered and should be included in the reliability
analysis.
Basic variables with little or no associated uncertainty may be treated as deterministic. (It
should be confirmed that the reliability is not sensitive to the uncertainty in the variable by
evaluating parametric sensitivities.)
As discussed later, the uncertainty associated with the basic variables is represented by a multi-
dimensional probability density function (pdf). If all of the basic variables are, or can be
assumed to be, independent⁸, the pdf for each of the variables can be defined separately. If two
or more variables do not vary independently they may either be modelled using bivariate or
multi-variate joint pdfs, or by defining the correlation between the variables using a correlation
coefficient or Covariance matrix.
A pdf for a single variable may be defined discretely, i.e. as a histogram. Indeed, in assessing
basic data for a variable, e.g. yield stress coupon results, a basic statistical reduction technique is
to plot a histogram of the data. Discrete random variables are only appropriate to particular
situations and are best avoided if possible, since they lead to numerical problems in iterative
reliability analysis techniques. Generally, continuous random variables or pdfs are used in
reliability analysis.
Typically, a continuous pdf is obtained by fitting a standard distribution function to the basic
data. Standard probability distribution functions include (but are by no means limited to):
normal or Gaussian (the well-known bell-shaped distribution),
lognormal,
8 Two variables, X and Y, are independent if

$$f_{X|Y}(x \mid y) = f_X(x)$$

The left hand term is the conditional pdf, and is interpreted as the probability distribution of X given Y.
exponential,
Gumbel or extreme Type I,
Frechet or extreme Type II,
generalised Pareto,
2 or 3 parameter Weibull,
Rayleigh,
Student t,
Chi-squared,
Beta,
rectangular or uniform,
triangular,
trapezoidal,
Hermite polynomial, etc.
Some phenomena are theoretically modelled by particular distribution functions.
When no detailed information is available, normal or lognormal distributions should generally
be used; lognormal distributions should be used when the value of a variable cannot be negative,
e.g. yield stress.
Distribution functions may be defined in terms of the parameters for the distribution, or in terms
of the central moments of the distribution, i.e. expected value or mean, Variance or standard
deviation, skewness coefficient, kurtosis coefficient, etc.
The following procedure (based on [20]) is used to fit a distribution type and estimate the
appropriate parameters.
Based on experience from similar types of problem, physical knowledge or analytical
results, choose a set of possible distribution types.
Estimate the parameters for each of the chosen distributions by statistical analysis of
available observations of the uncertain quantities. Data reduction may be based on:
method-of-moments
least-square fits
maximum likelihood methods
visual inspection of data plotted on probability paper.
If there are several apparently equally valid distribution choices the following
techniques can be used for acceptance or rejection of the choices:
visual inspection of data plotted on probability paper
statistical tests, e.g. Kolmogorov and Chi-square
asymptotic behaviour for extreme value distributions.
If two types of distribution give equally good fits, it is recommended, particularly for
load variables, to choose the distribution with most probability content in the tail.
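The fitting procedure outlined above can be sketched in a few lines of Python using scipy; the candidate distributions, sample data and seed below are assumptions for illustration, and in practice the visual checks on probability paper described above should accompany any such statistical test.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
data = rng.lognormal(mean=np.log(350.0), sigma=0.08, size=200)  # hypothetical data

# Maximum likelihood fits for two candidate distribution types,
# screened with the Kolmogorov-Smirnov test
for name, dist in [("normal", stats.norm), ("lognormal", stats.lognorm)]:
    params = dist.fit(data)
    ks = stats.kstest(data, dist.name, args=params)
    print(f"{name:10s} KS statistic = {ks.statistic:.3f}, p-value = {ks.pvalue:.3f}")
```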
Depending on the quantity of data available to define the distribution model, the parameters may themselves be uncertain, and these may be modelled as random variables. This source of uncertainty is termed statistical uncertainty.
This procedure works well for large data sets. However, it is important not to concentrate too
much on what is known, and ignore what is not known since for many variables it is often
difficult to obtain data. This is not a barrier to the use of reliability methods; in practice there is
rarely, if ever, enough data to fully quantify the uncertainty for all of the basic variables that
influence even the simplest problem. When data, or sufficient data, for a particular variable are
not available it is necessary to rely on expert judgement. The sensitivity of the evaluated
reliability to any such variables should be examined, and this should be borne in mind when
interpreting the results.
5.8 METHODS OF COMPUTING COMPONENT RELIABILITIES
As discussed above, the probability of failure is defined mathematically as:
$$P_f = P[Z \le 0] = \int_{Z \le 0} f_X(\mathbf{x})\, d\mathbf{x} \tag{5.3}$$
In some cases this equation can be integrated analytically. In principle, the probability of failure
or reliability can be evaluated using numerical integration (trapezoidal rule, Simpson's rule,
etc.). In practice, this is not generally practical in structural reliability analysis because of the
number of dimensions of the problem - one dimension for each basic variable, and because the
area of interest is in the tails of the distributions. Nevertheless, it is occasionally used, and it has
the potential, with fine enough increments, of being able to evaluate exact answers.
However, there are a number of other more commonly used methods available for estimating
the failure probability, and four of these are discussed below:
Mean value estimates
First-order second-moment methods FORM
Second-order methods - SORM
Monte Carlo simulation methods.
5.8.1 Mean Value Estimates
Mean value estimates of the failure probability were the first attempt to evaluate failure
probabilities.
Consider a failure function that can be approximated using the combined or total uncertainties
of two variables expressed as:
$$Z = R - S \tag{5.4}$$

where R is the overall uncertainty in resistance, and S is the overall uncertainty in loading.
If both of the random variables, R and S, are (or can be assumed to be) normally distributed and independent, the reliability index, $\beta$, can be evaluated using:

$$\beta = \frac{E[R] - E[S]}{\sqrt{\mathrm{Var}[R] + \mathrm{Var}[S]}} \tag{5.5}$$

where $E[\,]$ and $\mathrm{Var}[\,]$ are the expected values (means) and variances (standard deviations squared) for the variables.
If both variables are (or can be assumed to be) lognormally distributed, the reliability index can be estimated from:

$$\beta = \frac{\log_e(E[R]) - \log_e(E[S])}{\sqrt{V_R^2 + V_S^2}} \tag{5.6}$$

where $V_R$ and $V_S$ are the coefficients of variation (CoV = standard deviation / mean) of the variables.
This equation is inaccurate for CoVs larger than 0.15. A more accurate formula is:

$$\beta = \frac{\log_e\left(\dfrac{E[R]}{E[S]}\sqrt{\dfrac{1 + V_S^2}{1 + V_R^2}}\right)}{\sqrt{\log_e\left[\left(1 + V_R^2\right)\left(1 + V_S^2\right)\right]}} \tag{5.7}$$
Mean value estimates for more complex failure functions can be analysed by expanding the function as a Taylor series about the mean values of the random variables and ignoring the higher order terms, so that:

$$Z = g(\mathbf{X}) \approx g(\boldsymbol{\mu}) + \sum_{i=1}^{n} \left(\frac{\partial g}{\partial X_i}\right)_{\boldsymbol{\mu}} \left(X_i - \mu_i\right) \tag{5.8}$$
The main drawback with mean value estimates is that they are not invariant to the form of the failure function. The failure function is defined as a surface at Z = 0; therefore, it should be possible to transform the failure function, for example by taking logs of the terms:

$$Z = R - S \tag{5.9a}$$

$$Z' = \log_e(R) - \log_e(S) \tag{5.9b}$$

However, the mean value reliability estimate from these two equations will not be equal.
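The lack of invariance can be demonstrated numerically: applying the mean value formulae to the two algebraically equivalent failure functions of Eqns (5.9a) and (5.9b) gives different reliability indices. A short Python sketch with hypothetical moments:

```python
import numpy as np

# Hypothetical moments: E[R] = 350, CoV 10%; E[S] = 250, CoV 15%
mu_R, sd_R = 350.0, 35.0
mu_S, sd_S = 250.0, 37.5

# Eqn (5.5) applied to Z = R - S
beta_linear = (mu_R - mu_S) / np.hypot(sd_R, sd_S)
# Eqn (5.6), i.e. the mean value estimate for Z' = log(R) - log(S)
beta_log = (np.log(mu_R) - np.log(mu_S)) / np.hypot(sd_R / mu_R, sd_S / mu_S)

print(f"beta for Z  = R - S:        {beta_linear:.2f}")
print(f"beta for Z' = logR - logS:  {beta_log:.2f}")
```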
5.8.2 First-Order Second-Moment Methods - FORM⁹

To overcome the invariance problem with the failure function, it was found that it is necessary to transform the basic variables into independent standard normal variables¹⁰. A space defined by independent standard normal variables is termed U-space, and basic variable space is termed X-space.

9 FORM is an abbreviation for First-Order Reliability Method. Sometimes FOSM, First-Order Second-Moment method, is used.
The transformation of independent variables can be undertaken from the cumulative probability of the distribution, i.e. from the identity:

$$\Phi(u) = F_X(x) \tag{5.10}$$

$$x = F_X^{-1}(\Phi(u)) \tag{5.11}$$

where $\Phi(u)$ is the standard normal distribution function, and $F_X(x)$ is the cumulative distribution function for the variable.
The transformation of non-independent variables is widely discussed in many reliability
textbooks and papers [1, 9], and various techniques are available.
The transformation is generally undertaken automatically within most reliability analysis
software packages. However, for non-standard distribution functions, the transformation may
need to be undertaken explicitly.
First-order second-moment methods, or advanced level 2 methods, involve estimating the
failure probability by linearising the failure surface at the closest point to the origin in standard
normal space, or U-space, e.g. a Taylor series expansion of the normalised random variables
evaluated at the closest point rather than the mean (see Eqn (5.8)). It is usually necessary to
iterate to determine the closest point to the origin, and a number of iterative and optimisation
techniques are available. The space outside of the tangent hyperplane to the failure surface at
the closest point to the origin approximates to the probability of failure.
Even though the failure surface in standard normal space is rarely planar the curvature at the
point closest to the origin is usually so small that the first-order linearisation is valid for most
estimating purposes.
The basic variable transformation and first-order reliability estimate are illustrated in Figure 5.3
with two basic variables.
The closest point to the origin in U-space is the point with maximum probability density, and in
the literature is termed the 'beta-point', the 'most central' point, the 'most likely' point, or the 'point of maximum likelihood' (it is also sometimes, rather confusingly, termed the 'design point').
The distance in U-space from the origin to the beta-point is equal to the first-order reliability
index, . This is sometimes referred to as the geometrical reliability index or Hasofer-Lind
reliability index [53].
10 An independent standard normal variable is normally distributed and has a mean of zero and a unit standard deviation.
The direction cosines of the variables of the vector from the origin to the beta-point are known as α-factors, and are sensitivity coefficients of the basic variables.
Figure 5.3 Illustration of transformation from basic variable space (left) to U-space (right) and first-order reliability estimate
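To make the procedure concrete, the following Python sketch implements the basic first-order iteration (the standard HLRF update) for a simple Z = R − S failure function. The distributions and parameters are hypothetical, the gradient is taken by finite differences, and a production analysis would normally use one of the packages noted in Section 5.5.

```python
import numpy as np
from scipy.stats import norm, lognorm, gumbel_r

# Hypothetical basic variables: lognormal resistance R, Gumbel (extreme) load S
R_dist = lognorm(s=0.10, scale=350.0)
S_dist = gumbel_r(loc=220.0, scale=20.0)

def g(u):
    # Map independent standard normal u to X-space via Eqn (5.11), evaluate Z
    r = R_dist.ppf(norm.cdf(u[0]))
    s = S_dist.ppf(norm.cdf(u[1]))
    return r - s

def grad_g(u, h=1e-5):
    # Finite-difference gradient of g in U-space
    return np.array([(g(u + h * e) - g(u - h * e)) / (2 * h) for e in np.eye(2)])

u = np.zeros(2)
for _ in range(20):                      # HLRF iteration towards the beta-point
    gv, dg = g(u), grad_g(u)
    u = (dg @ u - gv) * dg / (dg @ dg)

beta = np.linalg.norm(u)
alpha = u / beta                         # direction cosines (alpha-factors)
print(f"beta = {beta:.3f}, first-order Pf = {norm.cdf(-beta):.2e}")
```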
5.8.3 Second-Order Reliability Methods - SORM
Second-order methods improve the accuracy of first-order probability estimates by including
curvature information at the beta-point, and approximating the failure surface by a quadratic
surface. It is necessary to iterate to identify the beta-point first.
The difference between the first- and second-order estimates of the probability gives an
indication of the curvature of the failure surface. If there is a significant difference it suggests
that perhaps Monte Carlo methods should be used to confirm the probability of failure estimate.
For most practical reliability applications there is usually little difference between FORM and
SORM estimates.
5.8.4 Monte Carlo Simulation Methods
Crude Monte Carlo simulation offers a direct method for estimating the failure probability. In
essence, the technique involves sampling a set of values of the basic variables at random from
the probability density function, and evaluating the failure function for the values to see if
failure occurs. By generating a large number of samples, or trials, the probability density
function is simulated, and the ratio of the number of trials leading to failure to the total number
of trials tends to the exact probability of failure.
The drawback with crude Monte Carlo simulation is the computational effort involved. To produce a reasonably accurate estimate of the failure probability at least $100 / P_f$ trials are required. For $P_f$ values around $10^{-4}$ this requires that at least one million trials are generated.
A number of techniques have been developed to reduce the number of samples required (variance reduction techniques), and in favourable circumstances they can be very efficient. These techniques include:
importance sampling: shifting the sampling density function towards important regions of the failure space,
directional sampling: sampling along random vectors,
adaptive sampling: successive updating of the sampling density function,
axis orthogonal simulation: a semi-analytic technique.
These techniques can also be combined together. Knowledge of the failure region (for example
from first-order methods) can be exploited to significantly improve the efficiency of Monte
Carlo simulation by tailoring the sampling scheme to the particular situation.
Monte Carlo simulation methods rely on the use of random numbers. For computer-based
modelling and analysis, random numbers are most conveniently generated numerically from the
computer. A number of types of random number generator are available, including
multiplicative congruence types, Fibonacci series, etc. However, it is important to realise that
these types of generator produce pseudo-random numbers that form a long sequence of numbers
which, although they may be expected to pass all standard tests for randomness, will eventually
repeat. For most applications standard random number generators, often available as functions
in software libraries, are acceptable. However, there may be problems if a poor generator is
used to generate many millions of samples in a problem involving a large number of basic
(random) variables.
Standard random number generators usually generate uniformly distributed numbers in the
range zero to one. These must then be transformed to reflect the distributions and correlations
(if any) of the basic variables.
Pseudo-random numbers are usually only random from one number to the next. When used in multi-dimensional problems, i.e. problems involving a set of basic random variables, there may be occasions when they show correlation; this should always be checked.
Nevertheless, if used intelligently, Monte Carlo methods are a readily understood and easily
applied tool. They can be used to produce exact answers to problems, and can be used to
provide answers to problems that cannot be accurately modelled using first- or second-order
methods. Such problems include load combinations problems and time-varying problems.
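A crude Monte Carlo estimate for the simple Z = R − S case can be written in a few lines. The following Python sketch uses hypothetical normal variables so that the result can be checked against the exact first-order answer; the means, standard deviations and seed are assumptions for illustration.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)

# Hypothetical independent normal resistance R and load S
mu_R, sd_R, mu_S, sd_S = 350.0, 25.0, 250.0, 30.0

n_trials = 2_000_000            # crude MC needs at least 100/Pf trials
R = rng.normal(mu_R, sd_R, n_trials)
S = rng.normal(mu_S, sd_S, n_trials)

pf_mc = np.mean(R - S <= 0.0)   # fraction of trials in the failure region

beta = (mu_R - mu_S) / np.hypot(sd_R, sd_S)
print(f"MC Pf = {pf_mc:.2e}, exact Pf = {norm.cdf(-beta):.2e}")
```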
5.9 COMBINATION OF EVENTS
For each failure mechanism, for example bursting as a result of excessive corrosion, the failure
probabilities for each section should be combined to evaluate the total failure probability for the
pipeline zone (or for the whole pipeline). Then for each limit state the system probability is
derived by combining the failure probabilities for each failure mechanism.
A pipeline or pressure vessel can be considered to be a series system, since the failure of any
one part due to any failure mode is failure of the system. Thus, strictly the union of the
probabilities for each failure event should be computed. Simple bounds on the system
probability may be evaluated as:
$$\max_{i=1}^{q}\left(P_{f_i}\right) \;\le\; P_{f_{sys}} = P\left(\bigcup_{i=1}^{q} f_i\right) \;\le\; 1 - \prod_{i=1}^{q}\left(1 - P_{f_i}\right) \tag{5.12}$$
In practice the bounds are likely to be too wide for practical use. However, they can be
improved by judging the likely correlation between the events. Thus, for events that are highly
correlated, for instance general corrosion between one section and the next, the combined
probability will be dominated by the maximum individual failure probability.
Hence for highly correlated events:
$$P_{f_{sys}} = \max_{i=1}^{q}\left(P_{f_i}\right) \tag{5.13a}$$
However, for different failure modes and hazards most of the failure events will be largely
uncorrelated, and a reasonable and conservative estimate can be obtained by summing the
probabilities for the individual events.
Thus for largely uncorrelated events:
$$P_{f_{sys}} = 1 - \prod_{i=1}^{q}\left(1 - P_{f_i}\right) \tag{5.13b}$$
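The bounds and approximations of Eqns (5.12) to (5.13b) are straightforward to compute; a minimal Python sketch with hypothetical per-section failure probabilities:

```python
import numpy as np

# Hypothetical per-section annual failure probabilities for a pipeline zone
pf = np.array([1.0e-5, 4.0e-6, 2.5e-5, 8.0e-6])

lower = pf.max()                  # fully correlated events, Eqn (5.13a)
upper = 1.0 - np.prod(1.0 - pf)   # largely uncorrelated events, Eqn (5.13b)
print(f"{lower:.2e} <= Pf_sys <= {upper:.2e}")
```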
5.9.1 Component and System Reliability Analysis
Software to evaluate component reliability and system reliability is required. Since many of the calculations involve evaluating the probability of intersection for a number of events, reliability software capable of handling multiple constraints and finding the joint failure point directly is needed.
Figure 5.4 illustrates the joint failure point for the intersection of two events; the shaded area
shows the failure region.
Figure 5.4 Illustration of the joint failure point for the intersection of two events
5.10 TIME-DEPENDENT ANALYSIS
All of the earlier examples considered in this report have assumed that the resistance is invariant
with time, and that the time element of the loading has been modelled (in part) using random
variables with extreme value distributions.
In reality, the resistance of the structure is being degraded continuously through corrosion,
fatigue, wear and tear, abrasion or erosion, denting and accidental damage. This continuing
reduction in structural resistance as the structure ages clearly leads to an increase in the
probability that the resistance of the structure will be exceeded at some point and that the
structure will fail. This is shown in Figure 5.5, which illustrates a stochastic loading process
with a continuous degradation in resistance with time. In this figure the resistance is exceeded
by an extreme load event during the reference period, and the structure or component would
have failed.
Figure 5.5 Illustration of a typical realisation of load effect and resistance variation with time
It is clear that an accurate assessment of time-variant reliability is more involved than time
invariant analysis.
The problem can be analysed using stochastic process theory, in which case it is termed a time-
variant reliability analysis.
5.10.1 Annual Reliability
Alternatively, the reliability can be assessed by discretising the exposure period into small
intervals such that the resistance can be assumed constant over the interval. Generally, in the
case of corrosion for instance, it is necessary to assume some function for corrosion rate with
time - typically this is obtained from field measurements of typical installations, experience or
theory, and is clearly uncertain. For each interval the resistance can be assessed, and the
reliability for a short reference period, typically an annual period, can be evaluated. That is,
repeating the calculations at increments of time with a reducing resistance. This is illustrated in
Figure 5.6.
Annual reliabilities can either be interpreted as the probability of failure of the structure under
the one-year return period load event, or as one minus the probability of the structure surviving
for the next year. The first interpretation may be considered an upper bound to the failure
probability, and the second interpretation is a lower bound; for most applications the distinction
is academic.
Figure 5.6 Illustration of problem with discretised resistance
5.10.2 Lifetime Reliability
In some circumstances the probability of failure during the lifetime (or for some other exposure
reference period) of the structure may be required. Lifetime reliabilities would be of interest in
risk and economic assessments.
If the reliability of the structure is required for a long reference period, e.g. the design life of the
structure, it is clearly approximate to follow the above approach using a loading model based on
an extreme distribution of N-year maxima for the reference period, and assuming that the
resistance is constant. This would assume that the resistance of the structure after N-years of
exposure, together with all of the associated uncertainty in the resistance, was constant
throughout the exposure period.
Instead, the reliability can be evaluated more accurately using the theorem of total probability¹¹.
It can be shown that the cumulative failure probability during a reference period $T_R$ can be evaluated as:

$$P_f = \int_{0}^{T_R} P_f(t)\, f_T(t)\, dt \tag{5.14}$$

where $P_f(t)$ is the probability of failure given that the N-year extreme event occurs at time t.
If it is assumed that the occurrence of the extreme effect can occur with equal likelihood at any time during the reference exposure period, then T has a uniform distribution. Thus,

$$f_T(t) = \frac{1}{T_R} \tag{5.15}$$
Therefore, Eqn (5.14) becomes

$$P_f = \frac{1}{T_R} \int_{0}^{T_R} P_f(t)\, dt \tag{5.16}$$
This can be approximated using the trapezium rule for unit intervals of time as

$$P_f = \frac{1}{T_R}\left[\frac{P_f(t_0) + P_f(T_R)}{2} + \sum_{i=1}^{T_R - 1} P_f(t_i)\right] \tag{5.17}$$

An extreme distribution of $T_R$-year maxima is used for the load model of the extreme event, but the resistance is discretised.
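As an illustration of Eqns (5.16)/(5.17), the following Python sketch integrates a hypothetical conditional failure probability $P_f(t)$, assumed here to grow linearly as the resistance degrades, over a 40-year reference period; the numbers are illustrative only.

```python
import numpy as np

T_R = 40
t = np.arange(T_R + 1)
# Hypothetical conditional failure probability P_f(t), growing with degradation
pf_t = 1.0e-4 * (1.0 + 0.05 * t)

pf_life = np.trapz(pf_t, t) / T_R   # Eqns (5.16)/(5.17)
print(f"cumulative Pf over {T_R} years = {pf_life:.2e}")
```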
5.10.3 Conditional Reliability Given a Service History
If the analysis is being undertaken as part of an assessment for an existing structure the fact that
the structure has survived for a number of years provides some information about the structural
11 Given a set of n mutually exclusive, collectively exhaustive events, $B_1, B_2, \ldots, B_n$, the probability for another event A can be written as:

$$P(A) = P(A \cap B_1) + P(A \cap B_2) + \ldots + P(A \cap B_n) = \sum_{i=1}^{n} P(A \cap B_i)$$

$$P(A) = \sum_{i=1}^{n} P(A \mid B_i)\, P(B_i)$$

i.e. the probability for an event A can be expanded in terms of conditional probabilities for mutually exclusive and collectively exhaustive events.
capacity and safety that can be used to update the reliability. (The above approaches do not
make any allowance for the service history of the structure.)
Clearly, the fact that the structure has survived for a number of years of operation shows that
there are no gross errors in the design or fabrication. If the loading history of the vessel is
available (from strain gauge readings, operational records, etc.), the resistance of the vessel was
at least as high as any of the prior imposed loads at the time. (A proof load test or hydrotest
would give even more information). In this example it is assumed that the details of the loading
history are not known, however the fact that the structure has a service history for a number of
years can still be taken into account. Thus, if the service history is taken into account, and if the
degradation rate of resistance is not too severe, the year-to-year perception of the reliability may
not be reducing. This is a conditional reliability given a service history.
If the structure is operational and has performed satisfactorily for T years, the probability of failure during the remaining life of the structure, $\tau_s = T_R - T$, may be evaluated as a conditional event given that failure has not occurred in the first T years. This conditional probability may be written as $\mathrm{Prob}\left\{\left(Z(T < t < T_R) \le 0\right) \mid \left(Z(t \le T) > 0\right)\right\}$.

From probability theory this conditional event can be expanded as

$$\mathrm{Prob}\left\{\left(Z(T < t < T_R) \le 0\right) \mid \left(Z(t \le T) > 0\right)\right\} = \frac{\mathrm{Prob}\left\{\left(Z(T < t < T_R) \le 0\right) \cap \left(Z(t \le T) > 0\right)\right\}}{\mathrm{Prob}\left\{Z(t \le T) > 0\right\}} \tag{5.18}$$

where $\left(Z(T < t < T_R) \le 0\right)$ represents a failure event between time T and $T_R$ years, and $\left(Z(t \le T) > 0\right)$ represents an event with no failure up to time T years.
The numerator in the above equation can be shown to be

$$\mathrm{Prob}\left\{Z(t < T_R) \le 0\right\} - \mathrm{Prob}\left\{Z(t < T) \le 0\right\} \tag{5.19}$$

The denominator, which is the probability of no failure up to time T, can be written as

$$1 - \mathrm{Prob}\left\{Z(t < T) \le 0\right\} \tag{5.20}$$
In a simpler notation the conditional lifetime failure probability given T years of satisfactory service can be written as

$$P_f(T_R \mid T) = \frac{P_f(T_R) - P_f(T)}{1 - P_f(T)} \tag{5.21}$$

where $P_f(T)$ is the probability of failure in T years.

Alternatively, this can be written as

$$P_f(T + \tau_s \mid T) = \frac{P_f(T + \tau_s) - P_f(T)}{1 - P_f(T)} \tag{5.22}$$

where $\tau_s$ is the remaining service life in years (or any other reference period of interest).
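Eqn (5.21) is a one-line calculation once the cumulative failure probabilities are available; a minimal Python sketch with hypothetical values:

```python
def conditional_pf(pf_total, pf_past):
    """Eqn (5.21): failure probability over the remaining life,
    conditional on survival of the first T years."""
    return (pf_total - pf_past) / (1.0 - pf_past)

# Hypothetical cumulative failure probabilities: Pf(T_R) and Pf(T)
print(f"{conditional_pf(3.0e-3, 1.0e-3):.2e}")   # approx 2.0e-03
```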
5.11 ASSESSMENT OF TARGET RELIABILITY
Ever since structural reliability analysis methods have been developed and used for design and
assessment, and for the calibration of partial safety factors, the effects and consequences of
failure have been a primary consideration when deciding acceptable safety levels and setting
target reliabilities. A number of approaches have been proposed, these are largely based on:
societal values
comparison with existing and accepted practice
cost-benefit analysis
judgement.
5.11.1 Societal Values
One of the earliest published approaches is from the CIRIA Report 63 [59] which suggested that
the target failure probability should be based on the number of people at risk and a societal
criterion factor. The CIRIA target is based on:
$$P_{f_T} = \frac{10^{-4}\, K_s\, n_d}{n_r} \tag{5.23}$$

where $n_d$ is the design life of the structure (in years), $n_r$ is the expected number of people at risk in the event of failure, and $K_s$ is a societal criterion factor.

$K_s$ depends on the type of structure, and values suggested in the CIRIA report range from 0.005 for places of public assembly, dams, etc. to 5 for offshore structures.
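As a worked illustration of Eqn (5.23), the following sketch uses assumed inputs (a 50-year design life, 100 people at risk, and an intermediate societal criterion factor of 0.05; all three values are assumptions made purely for illustration):

```python
# Worked illustration of Eqn (5.23) with assumed inputs
n_d = 50      # design life in years (assumed)
n_r = 100     # expected number of people at risk (assumed)
K_s = 0.05    # societal criterion factor (assumed)

pf_target = 1e-4 * K_s * n_d / n_r
print(f"target lifetime Pf = {pf_target:.1e}")   # 2.5e-06
```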
5.11.2 Comparison with Existing Practice
An assessment of the target reliability(s) should consider the actual consequences and nature of
failure, economic losses and the potential for human injury. Ideally, targets should be calibrated
with well-established designs that are known to have adequate safety, and actual failure
statistics are useful in the assessment if interpreted carefully.
A number of organisations have proposed target reliabilities for use in the absence of better
information for a wide variety of structures, including offshore platforms and pipelines. One of
the most comprehensive approaches, which has formed the basis of a number of codes including
DNV CN 30.6 [20], was developed by the Nordic Committee on Building regulations (NKB)
[60]. Targets were defined based on the consequences of failure and the failure type. Three
levels were defined for each, and the target failure probabilities are shown in Table 5.1.
Failure type                                    Consequences of failure
                                                Less serious        Serious             Very serious
I   Ductile failure with reserve strength       P_fT = 10⁻³         P_fT = 10⁻⁴         P_fT = 10⁻⁵
    capacity resulting from strain hardening    (β_T = 3.09)        (β_T = 3.71)        (β_T = 4.26)
II  Ductile failure with no reserve capacity    P_fT = 10⁻⁴         P_fT = 10⁻⁵         P_fT = 10⁻⁶
                                                (β_T = 3.71)        (β_T = 4.26)        (β_T = 4.75)
III Brittle failure and instability             P_fT = 10⁻⁵         P_fT = 10⁻⁶         P_fT = 10⁻⁷
                                                (β_T = 4.26)        (β_T = 4.75)        (β_T = 5.20)

Table 5.1 Acceptable values of annual failure probabilities (P_fT) (and reliability indices (β_T)) from NKB [60]
5.11.3 Cost-Benefit Analysis
A cost-benefit analysis to set target reliabilities is usually only undertaken for a specific project, rather than for a Code of Practice. The objective is to strike a balance between the costs and expected benefits: clearly, lowering the target probability of failure improves the expected benefits but requires a cost penalty in terms of increased fabrication and installation specification, maintenance, etc. The risk balance is illustrated in Figure 5.7.
Costs include:
Capital costs and items requiring replacement
Installation and commissioning costs
Operating costs
Maintenance.
Similarly, the benefits may be identified as follows:
Reduced fatalities and injuries
Reduced environmental damage including clean-up costs
Increased availability of assets.
Figure 5.7 Stakeholder risk balance (risks carried by the stakeholder and investment by (costs to) the stakeholder, balanced against risks imposed by the stakeholder and return (benefit) to the stakeholder)
Any stakeholder would always wish his return of benefits to be greater than his combined risks
and costs, or his investment would be pointless. In other words, the right hand side of the
balance should outweigh the left hand side.
5.11.4 Targets for Pipelines
For pipelines, the most recent and comprehensive assessment of target reliabilities has been
undertaken as part of the SUPERB project [61], which has been incorporated into the DNV
Rules for submarine pipelines [32]. The target failure probabilities from the DNV Rules are
reproduced in Table 5.2 below.
                                           Safety Classes
Limit State   Probability Basis            Low            Normal         High
SLS           Annual per pipeline          P_fT = 10⁻²    P_fT = 10⁻³    P_fT = 10⁻³
ULS           Annual per pipeline          P_fT = 10⁻³    P_fT = 10⁻⁴    P_fT = 10⁻⁵
FLS           Lifetime per pipeline 1)     P_fT = 10⁻³    P_fT = 10⁻⁴    P_fT = 10⁻⁵
ALS           Annual per km 2)             P_fT = 10⁻⁴    P_fT = 10⁻⁵    P_fT = 10⁻⁶

1) No inspection and repair is assumed, temporary and in-service conditions considered together
2) Refers to the overall allowable probability of severe consequences

Table 5.2 Target failure probabilities (P_fT) for submarine pipelines (from DNV [32])
Three points are worth noting regarding this table:
The use of Safety Classes,
The use of FLS and ALS (see Section 3.4.2),
The differing probability bases.
The safety class depends on three factors:
i. The phase of a pipeline's lifetime under consideration, either temporary or operational.
Normally, for onshore pipelines only the operational phase is of significance.
ii. The location class. This category is defined for offshore subsea pipelines, and is thus
not directly applicable to onshore pipelines (although parallels could be developed
between open country and built-up areas).
iii. The category of the pipeline contents.
5.12 RISK ASSESSMENT
There are many definitions of risk. In the literature it is stated that no fewer than twenty definitions of the concept of risk exist [62], including the common dictionary definition based on danger of damage or loss.
Of relevance here is a definition based on a function of the probability of failure and the
consequences of failure. Conventionally, the function is defined as the product of the two
terms. Thus,
$$\text{Risk} = \text{function}(P_f, \text{Consequences}) = P_f \times \text{Consequences} \tag{5.24}$$
The consequences of failure for a gas pipeline may be primarily economic losses, but there is
the potential for human injury or loss of life; the consequences for pipelines carrying oil or other
hazardous substances may also involve damage to the environment.
One approach to treating different types of consequences is to define a utility function which
ranks different combinations of cost and life loss according to their perceived impact. By its
very nature, such an approach is highly subjective.
A much better approach, and one which is in keeping with the HSE's philosophy and the Pipeline Regulations [44], is to seek to minimise the expected costs whilst constraining the potential for life loss to be below an acceptable limit - an ALARP limit (As Low As Reasonably Practicable). This is known as constrained optimisation. It is clear that the
constraint limits or target reliabilities for pipeline zones where there is potential for loss of life
should be particularly carefully assessed to ensure that they are ALARP.
Risk at a time t can be measured by the expected cost of failure, $E[C_f(t)]$. The expected cost of failure is given by:

$$E[C_f(t)] = P_{f_{sys}}(t)\, \frac{C_f}{(1+r)^t} = \sum_{i=1}^{m} P_{f_i}(t)\, \frac{C_{f_i}}{(1+r)^t} \tag{5.25}$$

where $P_{f_{sys}}(t)$ is the system probability of failure for time t, $P_{f_i}(t)$ is the probability of failure due to hazard or failure mode i for time t, $C_{f_i}$ is the consequential cost of failure (discounted to NPV) for hazard or failure mode i, r is the real rate of return, and m is the total number of hazards or failure modes.
The expected cost of failure can be regarded as the average cost incurred through failure over a
long period of time.
It is useful to plot graphs of cost-based risk against time. On the same plots it is also useful to
show curves for each of the contributing failure hazards and/or failure modes.
In the early years of a structure's life the consequences of failure are at a maximum, and (once the structure settles down and stabilises) the failure probability is low. With time the consequential costs of failure may fall; however, the failure probability increases due to material degradation etc. Typically, a plot of risk against time shows a bath-tub shaped curve, with risk falling initially, then levelling off; after a time the risk starts to increase as the strength of the structure deteriorates and failure becomes more likely.
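The bath-tub behaviour can be illustrated with a toy calculation of Eqn (5.25) for a single failure mode; the degradation and cost functions below are wholly hypothetical and chosen only to show the shape of the curve.

```python
import numpy as np

years = np.arange(1, 41)
# Hypothetical annual failure probability, rising as the structure degrades
pf = 2.0e-5 + 1.0e-8 * years**3
# Hypothetical consequence cost, highest in early life, discounted at rate r
r = 0.06
cost = 80e6 * np.exp(-0.03 * years) / (1.0 + r) ** years

risk = pf * cost                          # Eqn (5.25), single failure mode
print(f"minimum risk occurs in year {years[np.argmin(risk)]}")
```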
6. USES OF RELIABILITY ANALYSIS AND PROBABILISTIC
METHODS
6.1 SUMMARY
This Chapter describes the main uses of structural reliability analysis and probabilistic methods.
Historically the first, and still one of the most important, uses is the calibration of safety factors in Codes of Practice. Whilst some structures, or specific aspects, have been designed using probabilistic
methods, this is still very much a technique for prominent or nationally important structures,
high risk investments, or structures with very high consequences of failure. Probabilistic
methods have been used in assessing commercial risk for many years, and they are playing an
increasing role in decision making. An area where probabilistic methods are being used more
and more is in the definition and optimisation of inspection, maintenance and repair schemes.
6.2 SAFETY FACTOR CALIBRATION
The most significant application of probabilistic methods is in the calibration of the factors of
safety in Codes of Practice, in particular limit state codes. Typically, where a limit state format
code has been developed to replace a traditional working stress design code the objective of the
calibration has been to derive safety factors for the limit state code which achieve designs with
similar reliabilities to those inherent in designs to the working stress code.
A development of this application is the calibration of safety factors for the design of major
structures or bridges where the economic losses or loss of life would be significant.
Where a limit state code is introduced as a direct replacement to an existing working stress code
the choice of the target reliability is relatively straightforward, provided that the existing code is
considered to produce designs with acceptable reliability and economy. The target reliability is
then derived as follows.
1. The objective of the calibration is defined. This may involve evaluating targets for a
number of groups of different component types under different loading modes; these
are referred to as the calibration classes.
2. A set of structural components is selected to reflect the range of components covered
by the code. The designs are then usually weighted to reflect their frequency of usage.
3. The components are designed to be fully utilised to the existing WSD code.
4. The probability of failure of each design is evaluated using structural reliability
methods.
5. The target reliability for each calibration class is then evaluated as the weighted
average of the failure probabilities (a minimal sketch of this step is given after the
list). Alternative definitions are sometimes used, including weighted average
reliability indices, lower bound reliability index, or more complex functions.
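As a minimal sketch of step 5 (Python; the failure probabilities and usage weights are invented for illustration), the weighted-average target for one calibration class could be formed as follows.

```python
# Sketch of step 5: weighted-average target failure probability for one
# calibration class. P_f values and usage weights are invented.

designs = [
    # (P_f of a fully utilised design to the existing WSD code, usage weight)
    (2.0e-6, 0.5),
    (8.0e-7, 0.3),
    (5.0e-6, 0.2),
]

target_pf = (sum(pf * w for pf, w in designs) /
             sum(w for _, w in designs))
print(f"target P_f for this calibration class: {target_pf:.2e}")
```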
This basic process was followed for the calibration of the safety factors for steel design in the
UK limit state bridge code BS 5400 from the old allowable stress code BS 153. The target
failure probability was determined as the weighted average P_f for selected component types
designed to BS 153; some component types (notably stiffened compression flanges) were not
included in the target assessment because they had not been shown to behave satisfactorily in
service [9]. The evaluated target of 0.63 × 10⁻⁶ was then used to calibrate safety factors for all
component types in the new code using a mathematical optimisation procedure.
The advantage of this type of calibration is that the target probability can readily be considered
as notional because the calibration is undertaken on a like-for-like basis. Indeed, when this type
of calibration was originally undertaken, mean value reliability methods were considered
adequate.
Unfortunately, it is not always possible or desirable to calibrate back to an existing code or
design practice. In such situations the target reliability must be selected using alternative
methods. Judgement will be necessary in selecting the target, and it is strongly advisable that
the evaluated failure probabilities should be obtained using the best available data, knowledge
and methodology.
6.3 PROBABILISTIC DESIGN
There are three levels at which structural safety may be treated.
Level 1: A semi-probabilistic design process in which the probabilistic aspects are treated
specifically in defining partial factors to be applied to characteristic values of loads
and resistances. A level 1 structural design is commonly called limit state design. It is
used as a practical method of incorporating reliability methods in the normal design
process, although the reliability aspects are transparent to the designer.
Level 2: A probabilistic design process with some approximation. In this process, the loads
and strength of materials and geometric properties are represented by known or
postulated distributions and some relative reliability level is accepted.
Level 3: A design process based upon full probabilistic analysis for the entire structural system.
Level 3 methods take into account the joint probability distribution for all of the load
and strength parameters and uncertainties in the analysis. They are used in special
circumstances where the environment is sensitive or where cost savings justify the
additional expense of complex analyses.
Probabilistic design is usually taken to mean Level 3 methods, although Level 2 methods are
often (unintentionally) used. Apart from modelling the basic variables and fully defining the
problem, the main difficulty is the choice of appropriate target reliabilities for the various limit
states.
6.4 DECISION ANALYSIS
Modern decision theory stems from pioneering work published by von Neumann &
Morgenstern in 1947. The theory provides a framework in which to judge or assess the best
alternative from a set of possible decisions. The judgement is generally made in terms of a
single utility function, which is most usually expressed as expected cost. Clearly, assessing the
various decisions depends on factors that are not known with certainty and involves subjective
judgement; these uncertainties are treated using Bayesian theory.
The decision problem is often illustrated using a Decision Tree, and an example is shown in
Figure 6.1.
[Figure: a decision tree read from left to right. An experiment choice e0, e1, e2 is made by the decision maker; Nature returns an outcome z1, z2, z3; the decision maker then chooses an action a1, a2, a3; Nature determines the final state q1, q2, q3; each complete path terminates in a utility value U(z, a, q, e).]
Figure 6.1 Illustrative example of a Decision Tree
The problem for the decision maker is as follows:
the decision maker chooses an experiment E, e.g. an inspection option, pipeline route, etc.;
the outcome Z is random or uncertain, e.g. an inspection result, Safety Case assessment, etc.;
based on the outcome, the decision maker chooses an action A, e.g. a repair option, design
option, etc.;
this results in a random outcome of nature, a state Q, e.g. failure or no failure;
the chosen experiment and action, together with the outcome, determine a utility value U,
e.g. expected cost. A minimal numerical sketch of this calculation is given below.
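The sketch (Python) covers only the final step, choosing the action with the lowest expected cost; the actions, probabilities and costs are invented for illustration, and a full preposterior analysis over experiments and outcomes would wrap a further expectation around this step.

```python
# Sketch of the final step of a decision tree: choose the action with the
# lowest expected cost. Probabilities and costs are invented for illustration.

p_fail = {"repair": 1.0e-5, "no repair": 1.0e-1}   # P(failure | action)
c_fail = 10.0e6                                    # cost of failure (assumed)
c_action = {"repair": 0.5e6, "no repair": 0.0}     # cost of the action itself

def expected_cost(action: str) -> float:
    # expected utility, expressed as a cost to be minimised
    return c_action[action] + p_fail[action] * c_fail

for a in p_fail:
    print(f"{a:9s}: expected cost = {expected_cost(a):12,.0f}")
print("chosen action:", min(p_fail, key=expected_cost))
```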
A major difficulty with the use of a single utility value on which to judge alternatives occurs
when loss of life or serious injury is involved. Whilst costs can be, and often are, assigned to
loss of life or injury, this is an approach that many find difficult to accept. It is also difficult to
assign monetary value to environmental consequences, particularly with the growth in the
green lobby.
The part of the analysis starting once the results of an experiment are known or chosen, and
involving the choice of an action and its random outcome, is known as a posterior analysis.
The complete analysis, where the choice and results of an inspection are still unknown, is
known as a preposterior analysis.
The theory associated with defining the conditional, marginal, prior and posterior probabilities
is well defined in a number of texts.
Decision analysis is widely used to assess commercial risk. One of the earliest uses was in the
Oil Industry; for example where an operator needed to assess the risks of drilling in a new field,
or to weigh up the development of one field against another.
Decision analysis is now used widely in considering the routing for pipelines. When reviewing
submissions for new pipelines the HSE apply their own assessment technique based on risk to
life along the proposed route.
6.5 RISK AND RELIABILITY-BASED INSPECTION, REPAIR AND MAINTENANCE
SCHEMES
A variety of qualitative and quantitative risk-based inspection strategies have been proposed and
used widely in a number of industries. In many areas, including the process industry, they are
generally referred to as risk-based inspection (RBI) schemes, or occasionally risk-informed
inspection schemes. In the offshore industry they are referred to as inspection, maintenance and
repair (IRM) schemes.
In offshore structural engineering, risk and reliability techniques have been used for nearly a
decade to prioritise the inspection of the welded connections of steel jackets. A number of
schemes have also been proposed for the assessment of land-based and subsea pipelines. The
two main approaches are based on qualitative indexing and quantitative risk assessment;
approaches based on a combination of the two have also been suggested.
The American Petroleum Institute (API) have recently published a draft Recommended Practice
RP580 for risk-based inspection [63]. This is primarily for pressure containment systems,
pipelines, storage tanks and other process equipment.
6.5.1 Qualitative Indexing Systems
Qualitative risk indexing approaches are based on assigning subjective scores to the different
factors that are thought to influence the probabilities and consequences of failure. The scores
are then combined using simple formulae to give an index representing the level of risk. The
resulting indices for different components (or pipe zones, or failure modes, or hazards) can then
be ranked to determine components with the highest risk.
Clearly the main advantage of this approach is that it is very simple to apply. At most, a simple
spreadsheet is all that is required to undertake the indexing analysis.
However, there are a number of disadvantages with this approach:
the index does not give any indication of whether the risk associated with a particular
segment is unacceptable;
no guidance is provided as to whether any risk reduction action is necessary;
it is very difficult to calibrate the scoring and indexing system, and to validate the
results.
Indexing systems for pipelines
Typical IRM management strategies based on indexing systems consider the probability and
consequences of failure, and derive a criticality rating, or similar term, which takes into
account:
the design and operating condition;
corrosion prediction;
design life / remaining life;
economic significance to company;
population rating, etc.
Henderson [64] and Kaye [37] discuss qualitative risk assessment procedures for pipelines
using a Boston square. Kaye's matrix is shown in Figure 6.2; Henderson considers five
categories for probability and consequence. The matrix assigns a ranking number representing
the risk of the failure mechanism, where the lowest number is the least severe and the highest
the most important.
Probability   High      3      6      9
              Medium    2      4      6
              Low       1      2      3
                        Low   Medium  High
                            Consequence
Figure 6.2 Example of Boston square (from [37])
However, Kaye notes that risk ranking alone does not give any guidance on how risk may be
controlled, and does not show how inspection may help to manage these risks. In an attempt to
manage the risks, they both consider the value of the inspection and introduce a third dimension
to transform the Boston square into a Boston cube.
Having identified the high risk scenarios for each mode and mechanism on every section of the
pipeline, the value of inspection is assessed. Henderson gives the following examples of
inspection value:
High value: internal corrosion, which can be monitored closely by inspection
and measures taken to remedy the rate of decay;
Low value: third-party intervention, which cannot be (fully) monitored by
inspection as the event can occur immediately after inspection.
Thus, inspection criticality is defined as the product of failure probability, failure consequence
and inspection value.
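The arithmetic of such a ranking is trivial, as the sketch below shows (Python; the segments, scores and the 1-3 scales are invented for illustration).

```python
# Sketch of a "Boston cube" ranking: criticality = probability score x
# consequence score x inspection value. All scores (1 = low, 3 = high) are
# subjective and invented for illustration.

segments = {
    # segment / failure mechanism: (probability, consequence, inspection value)
    "KP 0-5, internal corrosion":  (2, 3, 3),   # closely monitorable: high value
    "KP 0-5, third-party damage":  (2, 3, 1),   # inspection of low value
    "KP 5-12, external corrosion": (1, 2, 3),
}

ranking = sorted(((p * c * v, name) for name, (p, c, v) in segments.items()),
                 reverse=True)
for score, name in ranking:
    print(f"criticality {score:2d}: {name}")
```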
6.5.2 Quantitative Risk Systems
Quantitative risk systems are based on estimating the level of risk by direct assessment of the
probability and consequences of failure. Depending on the sophistication of the approach, the
probability of failure may be estimated using historical failure rate data or advanced (structural)
reliability methods.
Most of the quantitative risk systems are based on Bayesian Decision Theory (see for example
[65]). As discussed in Section 6.4, this theory has been applied to a number of areas where it
offers a convenient framework for the inclusion of subjective information.
The terminology of decision theory is rather general; in the context of IRM schemes the terms
can be defined as:
an experiment corresponds to an inspection option (method/time)
an experiment outcome corresponds to an inspection measurement or result
an action corresponds to a repair or maintenance option
an outcome of nature corresponds to no failure, or loss of containment or serviceability
a utility corresponds to expected cost
One of the earliest and most significant applications has been to prioritise the subsea inspection
requirements for welded joints in offshore structures. Because of the safety implications and
potential savings, this application has been well researched, and since the pioneering work by
the Norwegians [66], a number of papers have been published on the topic. Quantitative risk
systems have been proposed for pipelines [67], but this area is still in its infancy.
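The Bayesian mechanics behind such schemes can be sketched very simply (Python): a prior on a degradation parameter is updated by an inspection outcome through an assumed probability-of-detection (POD) model, and the posterior then feeds an updated failure probability. All numbers and the POD curve are invented for illustration.

```python
# Sketch of the Bayesian updating underlying quantitative IRM schemes: a
# discrete prior on corrosion rate is updated by an inspection outcome
# ("no defect detected") using an assumed POD model. All numbers invented.

import math

rates = [0.05, 0.10, 0.20]        # candidate corrosion rates, mm/year
prior = [0.5, 0.3, 0.2]           # prior probabilities (assumed)
t_insp = 10.0                     # years in service at inspection

def pod(depth_mm: float) -> float:
    """Assumed POD curve: deeper defects are easier to detect."""
    return 1.0 - math.exp(-depth_mm / 1.0)

# Likelihood of "no detection" for each candidate rate.
likelihood = [1.0 - pod(r * t_insp) for r in rates]

evidence = sum(l * p for l, p in zip(likelihood, prior))
posterior = [l * p / evidence for l, p in zip(likelihood, prior)]

for r, po in zip(rates, posterior):
    print(f"rate {r:.2f} mm/yr: posterior probability {po:.3f}")
# The posterior feeds back into the reliability analysis to give an updated
# failure probability and a revised inspection/repair plan.
```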
7. REQUIREMENTS FOR PROBABILISTIC ANALYSIS OF
PRESSURE VESSELS AND PIPELINE SYSTEMS
7.1 SUMMARY
This chapter discusses the input data and analysis models that are required for use with a risk
and reliability-based design or assessment, or an inspection, maintenance and repair (IRM)
planning procedure; the requirements are similar for all of these applications.
7.2 BASIC DESIGN DATA
The following basic design data are required:
as-built material and geometry data, and details of coatings, etc.,
fabrication data,
operating history (pressure, temperature, contents, etc.) and any forecast changes,
previous inspection records,
records of previous interventions, maintenance and repairs.
In addition, for pipelines:
trenching method and burial details, if any,
soils data and topography.
Some of the data may be incomplete, and it is important that the system should be flexible
enough to allow for this.
Results of previous analyses or assessments may be useful for comparison, and details of the
present inspection schedule should also be included.
7.3 DEFINITION OF FAILURE MODES, LIMIT STATES AND TARGET
RELIABILITIES
The failure modes and limit states need to be defined, and all of the hazards that can affect the
pressure system (structure) need to be identified.
For each structure corresponding target reliabilities need to be agreed. The target reliabilities
are usually chosen to reflect the importance of the asset and the potential consequences of
failure.
Where life-safety is a consideration the targets should be particularly carefully defined, and the
ALARP principle should be used.
7.4 PROBABILITY ANALYSIS
7.4.1 Assessment of Hazard Likelihood of Occurrence
Failure rate data from generic databases are useful for preliminary or first-pass analyses, and
such data may be useful to store as default data for use in the absence of more specific
information.
However, for more detailed analyses, the likelihood of occurrence of a hazard is required for use
with reliability analysis to evaluate the probability of failure given that a hazard has occurred.
Many hazards are almost certain to occur, e.g. fatigue, corrosion, etc., and thus the likelihood of
occurrence is 1.0. For other hazards, particularly those affecting the Serviceability and Ultimate
Limit States, the likelihood may be judged subjectively or, if the results of more detailed
assessments are available, these may be used instead. Likelihood data are particularly important
for the Accidental Limit State.
7.4.2 Failure Models
The structural reliability analysis for the various failure modes requires accurate models to
predict failure. For the most part, the failure models may be adapted from the existing
deterministic models used for assessment.
7.4.3 Basic Variable Statistics
The uncertainty and variability of the basic variables also needs to be assessed. Physical,
statistical and model uncertainty need to be accounted for. The modelling can be derived from
statistical analysis of available observations of the individual variables, and may provide mean,
standard deviation, correlation with other variables, and in some cases distribution type. Other
relevant Company and public-domain information may also be useful.
Basic variables include:
geometric parameters: e.g. wall thickness, diameter, etc.,
material parameters: e.g. SMYS, Youngs modulus, fracture toughness, etc.,
model uncertainties for the various failure models,
corrosion rates, etc.
Most of the variables are specific to individual structures. However, some of the variables are
more widely applicable, e.g. model uncertainties.
7.4.4 Component and System Reliability Analysis
Software to evaluate component reliability and system reliability is required. Since many of the
calculations involve evaluating the probability of intersection for a number of events, reliability
software capable of handling multiple constraints and finding the joint failure point directly
would be an advantage.
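For orientation, a crude Monte Carlo component calculation is sketched below (Python); the failure function and distributions are invented, and for the small probabilities typical of real systems FORM/SORM or variance-reduction techniques would be used instead of crude sampling.

```python
# Crude Monte Carlo sketch of a component reliability calculation for a
# failure function Z = R - S (failure when Z <= 0). Distribution parameters
# are invented; a real analysis would use the basic variable statistics of
# Section 7.4.3 and, for small P_f, FORM/SORM or variance reduction.

import random

random.seed(0)
N = 1_000_000
failures = 0
for _ in range(N):
    resistance = random.lognormvariate(5.0, 0.08)   # assumed lognormal R
    load = random.normalvariate(100.0, 15.0)        # assumed normal S
    if resistance - load <= 0.0:
        failures += 1
print(f"estimated P_f = {failures / N:.2e}")
```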
7.5 CONSEQUENCE MODELS
7.5.1 Fire and Blast Analysis Results
For pressure vessels containing hazardous substances, and for high pressure gas lines, the
consequences of failure need to be considered. Separate quantified risk assessments may be
undertaken to assess the likelihood of leaked contents igniting, and fire and blast analyses may
be undertaken to assess the effects of various scenarios.
7.5.2 Economic Considerations
The costs of failure need to be assessed. As discussed above, the costs of failure should include:
loss of production,
non-delivery penalty charges,
loss of product,
environmental pollution clean-up/mitigation,
legal fees/fines,
negative publicity,
equipment damage,
property damage,
cost of replacement/repair.
7.5.3 Environmental Considerations
Potential pollution and environmental damage, particularly from oil pipelines, is a major
consideration in many areas of the world, and is becoming increasingly important. The costs
associated with public aversion to an environmental incident can far outweigh any direct clean-
up costs.
7.5.4 Life-Safety Considerations
Where life-safety is a consideration, the potential consequences may need to be carefully
evaluated using quantified risk assessments. The results should be treated separately from the
economic consequences.
7.6 INSPECTION METHODS, COSTS AND MEASUREMENT UNCERTAINTY
A database of information needs to be created containing details of all of the potential
inspection methods. The data should contain the following information (a sketch of a possible
record format is given after the list):
each potential inspection method available, or in common use,
all of the defect types that each inspection method is capable of detecting,
for each of the above, an assessment of the measurement accuracy or uncertainty,
together with an assessment of the probability of detecting a defect,
for each method, an estimate of the present day costs of undertaking an inspection,
including the assessment and interpretation of the results.
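A sketch of the kind of record such a database might hold is given below (Python; the field names and values are illustrative assumptions only, not taken from any standard).

```python
# Sketch of an inspection-method record; all fields and values are
# illustrative assumptions, not taken from any standard or real database.

from dataclasses import dataclass

@dataclass
class InspectionMethod:
    name: str                      # e.g. an intelligent pig, UT survey, etc.
    detectable_defects: list[str]  # defect types the method can detect
    sizing_sd_mm: float            # measurement uncertainty (standard deviation)
    pod_basis: str                 # basis of the probability-of-detection model
    cost: float                    # present-day cost, incl. interpretation

methods = [
    InspectionMethod(
        name="MFL intelligent pig",
        detectable_defects=["internal corrosion", "external corrosion"],
        sizing_sd_mm=0.5,
        pod_basis="vendor trials; assumed exponential POD curve",
        cost=250_000.0,
    ),
]
print(methods[0])
```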
7.7 MAINTENANCE AND REPAIR METHODS, AND COSTS
For each type of defect and level of damage an estimate of the expected repair or maintenance
costs is required. Where production needs to be de-rated or shutdown whilst the repair is being
undertaken, the costs associated with lost production should be included. Details of the likely
effectiveness of the repair or maintenance method should also be obtained or assessed.
8. CONCERNS WITH STRUCTURAL RELIABILITY AND RISK
ANALYSIS
8.1 SUMMARY
This Chapter considers some of the concerns that have been raised in the literature and
elsewhere in this report with both reliability analysis methods and risk assessment.
The criticisms discussed in this Chapter are all relevant, and it is important to be aware of them
in order to appreciate the limitations of the methodology and the uncertainty or lack of
confidence in the results.
8.2 CONCERNS WITH STRUCTURAL RELIABILITY ANALYSIS
In principle, the probabilistic/reliability approach, however incomplete the input, must at least
result in as adequate a design decision as a deterministic one. In practice this may not be so due
to:
Inadequate structural engineering models and data
Misuse of reliability methods.
There are a number of concerns with structural reliability and its evaluation that have been
raised in the past, some are of a fundamental nature, whilst others relate to specific problems.
Some of the most important include:
Inclusion of model uncertainty
The Tail sensitivity problem
Small failure probabilities
Validation
Notional versus true interpretation.
8.2.1 Inclusion of Model Uncertainty
Model uncertainty is caused by the use of simplified or idealised mathematical models that are
needed as operational tools in the reliability evaluation. By its very nature, model uncertainty is
very difficult to assess and model, and it is often omitted. However, in many cases, when model
uncertainty is allowed for, it has an important influence on the evaluated probability.
If for some reason model uncertainty is omitted from a reliability analysis, the results must be
interpreted with great care.
The reliabilities must not in any way be considered as true reliabilities, and must not
be used in comparative risk analysis studies.
Reliabilities should not be compared between different failure modes (i.e. modes based
on different models).
System reliabilities should not be evaluated between different failure modes (i.e.
modes based on different models).
In fact, reliabilities should only be compared for the same failure mode using the same
mechanical model.
8.2.2 The Tail Sensitivity Problem
The Tail sensitivity problem is a classic concern with structural reliability; it involves the tails
of the distribution functions used to model the basic variables in a reliability analysis. In a
reliability analysis it is the probability content defined by the shape of the distribution tails that
most greatly influences the evaluated failure probability. By definition, data points lying in the
tail of the distribution are very unlikely to occur in a population of data.
Thus, even in the fortunate situation where a large data sample is available to define the
distribution for a basic variable, very few data points at the tail of the distribution influence the
modelling of the variable. It is often pointed out that statistics based on data at the centre of a
distribution carry no information about the extremes, and so extrapolation is inherently
untrustworthy. This is clearly an important concern, since the physical mechanisms that govern
the shape of the extreme tails are usually different from those governing the central part of the
distribution.
In a well-constituted problem with well-defined basic variable models, tail sensitivity is rarely
a concern. However, in a particularly sensitive situation, or where the modelling for the most
sensitive variables is limited, the sensitivity of the reliability can be examined by using
different, valid probability distributions. Clearly, a valid distribution should fit the data well,
and should comply with any physical constraints or limitations.
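The effect can be demonstrated in a few lines of code (Python): the same load data, summarised by a mean of 100 and a CoV of 0.15, is modelled first with a normal and then with a lognormal distribution matched to the same moments, and the exceedance probability of a fixed resistance differs by an order of magnitude. All numbers are invented for illustration.

```python
# Sketch of a tail-sensitivity check: two "valid" distributions with the same
# mean and CoV can give very different failure probabilities. Numbers invented.

import math
import random

random.seed(1)
RESISTANCE = 160.0        # assumed deterministic resistance
N = 2_000_000

mean, cov = 100.0, 0.15
sd = cov * mean

# Lognormal parameters matched to the same mean and standard deviation:
sigma_ln = math.sqrt(math.log(1.0 + cov**2))
mu_ln = math.log(mean) - 0.5 * sigma_ln**2

pf_normal = sum(random.normalvariate(mean, sd) > RESISTANCE
                for _ in range(N)) / N
pf_lognorm = sum(random.lognormvariate(mu_ln, sigma_ln) > RESISTANCE
                 for _ in range(N)) / N
print(f"P_f with normal tail:    {pf_normal:.1e}")
print(f"P_f with lognormal tail: {pf_lognorm:.1e}")
```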
8.2.3 Small Failure Probabilities
It has been pointed out that the typical probabilities of failure often evaluated from structural
reliability analyses have no conceivable physical meaning if interpreted in a frequentist manner.
Palmer, in commenting on a published failure probability of 10⁻¹⁶ per km-year for lengths of
subsea pipelines reaching a yielding limit state under a design factor of 0.72, points out that this
corresponds to one failure in 50 North Sea pipeline systems in operation since the universe
began 20 billion years ago [68].
A better interpretation is that failure of this pipeline solely as a result of yielding due to excess
pressure under normal operating conditions is extremely unlikely. It does not mean that failure
due to yielding is impossible. Failure due to yielding is much more likely to occur as a result of
the failure of a pressure relief valve, pressure gauge, or other mechanical malfunction than
under normal operating conditions; where possible, the likelihood of such malfunctions
should be included in the assessment. (Failure may also occur as a result of a human error
during a maintenance operation; the likelihood of such errors is much more difficult to predict,
and they are rarely included in reliability analyses.)
Of course, in a real pipeline, there will also be a number of other failure modes with much
higher probabilities of failure, and these will totally dominate any evaluation of system
reliability. Effort should be concentrated on improving confidence in the probabilities for these
other failure modes, rather than being too concerned about highly unlikely causes of failure.
8.2.4 Validation
The generally small failure probabilities for real structures mean that evaluated failure
probabilities cannot be properly and completely validated. To do so, it would either be
necessary to observe a small number of similar structures for a very long period, or to observe a
very large population of structures for a shorter, more practical period. Unfortunately, whilst
this is to some extent possible for manufactured items, it is not possible for structures and most
pressure systems, which are typically one-off items; thus there is no population of nominally
identical structures under nominally identical conditions that might be observed.
However, for most types of structure there is a large enough population of structures of similar
type at least to provide some crude comparative values. It may also be possible
to calibrate the failure models from test data, as discussed in the next section.
From a philosophical viewpoint, the notion of an evaluated failure probability is unscientific
since it is not open to a 'test of falsification', i.e. it cannot be disproved. In this sense reliability
analysis methods are engineering rather than scientific tools.
8.2.5 Notional Versus True Interpretation
In reliability analysis, quantities are omitted either intentionally, e.g. human errors, or
unintentionally because of a lack of data or of full understanding of the system. For these
reasons, and those points discussed above, the failure probabilities evaluated from structural
reliability analyses should be considered as notional. Answers from reliability analyses are
therefore specific to the particular analysis and are dependent on the model, the assumptions,
and the input data.
Nevertheless, the results of reliability analyses are still of value. Structural reliability theory can
be used to estimate failure probabilities for events and structures for which there are no
historical data, or statistically useful data. In the absence of any other data, this information is
still of use in QRA-type assessments, provided that the limitations are understood and
accepted.
The (Draft) API RP 580 for risk-based inspection [63] suggests that a calculated failure
probability can be calibrated to a generic failure frequency by adjusting the input data to the
reliability analysis so that an acceptable level of damage corresponds to the generic failure
frequency. The calibrated reliability analysis model can then be used to calculate failure
frequencies for higher damage states. This is not always possible, but where it is, it provides a
very good way of combining reliability analysis results with generic data.
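The scaling logic described can be sketched as follows (Python); a single multiplicative factor stands in for the adjustment of input data, and the generic frequency, damage states and model are all invented for illustration; this is not the RP 580 procedure itself.

```python
# Sketch of anchoring calculated failure frequencies to a generic frequency.
# A single scaling factor stands in for the adjustment of input data
# described in the text; all numbers and the model are invented.

import math

generic_frequency = 1.0e-4        # per year, from a generic database (assumed)

def model_frequency(damage: float) -> float:
    """Assumed reliability-model output versus fraction of wall thickness lost."""
    return 1.0e-6 * math.exp(8.0 * damage)

acceptable_damage = 0.3           # damage state matched to the generic data
factor = generic_frequency / model_frequency(acceptable_damage)

for damage in (0.3, 0.5, 0.7):
    print(f"damage {damage:.1f}: calibrated frequency "
          f"{factor * model_frequency(damage):.2e} per year")
```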
8.3 CONCERNS WITH RISK ASSESSMENT
There are a number of concerns with QRA, particularly as it is applied in the UK.
Applicability of generic data
Risk aversion
The treatment of numerical uncertainty (especially as an input to decision making)
The use of deterministic consequence models
The pro forma approach to risk assessment
Completeness.
8.3.1 Generic Data
A constant criticism of QRAs is the applicability of generic data. By their very nature, failures
of structural components and of components of pressure systems, and the corresponding failure
data, are rare. Often, to obtain sufficient data to assess a failure rate for a specific application it
is necessary to consider very broad categories.
The drawback is then that the instances of failure may have occurred for circumstances that
have very little in common with the intended application. This is particularly important when
there has been a significant change in design or construction practice: there may have been a
number of failures for structures designed and built to the earlier practices, which are no longer
applicable. Where practices have been improved, the use of historic failure statistics is
conservative, but this may not be so in cases where modern safety factors have been relaxed
(perhaps because of satisfactory past performance).
Thus, rather than simply relying on published statistics of failure, it is important to be aware of
the background and limitations of the data, and the circumstances of the failures. Where
possible, the data should be screened for applicability.
8.3.2 Risk Aversion
Risk aversion can be defined as a disproportionate perception of the risk due to either the
magnitude or nature of the consequences. An event that results in 100 deaths seems far worse to
society than one resulting in 1 death, even if the likelihood of the former is over 100 times less.
Traditionally aversion, when dealt with in an explicit way, is accounted for by weighting the
risk targets. This can have the effect of making events that were judged to be acceptable
without aversion become intolerable. The redefinition of the target criterion to account for
aversion is arbitrary, and the outcome can be significant, since intolerability is required to be
dealt with at any cost under UK law.
A more recent methodology has been proposed which scales up the assessed consequences by
some numerically systematic method.
8.3.3 Numerical Uncertainty and Reproducibility
Clearly, QRA is a method for predicting future events based on uncertainty. Given large
amounts of statistically applicable data, it is possible to evaluate meaningful statistical
uncertainties based on the results of simple QRA type analysis. However, when dealing with
large, complex, one-off systems the statistical confidence that there may be about any numerical
result will be low. The results of QRA are usually presented as a point estimate, and take no
account of the uncertainty or confidence in the result. The traditional approach to overcome this
is to take gross, pessimistic and conservative assumptions in evaluating the risk.
One could question the rationality of comparing very uncertain guesses on absolute probability
values with arbitrarily chosen limit values.
In addition, many of the inputs to a QRA are highly subjective, and the results can be very
sensitive to the analyst's assumptions. The same problem, using identical data and models, may
generate widely varying answers when analysed by different experts.
This is a widely recognised shortcoming of QRA methods; to minimise some of the problems,
specialist practitioners in the field of application should be used.
8.3.4 Deterministic Consequence Models
QRAs usually involve the assessment of the likelihood of various consequences occurring. In
most cases for pressure systems these consequences are determined using standard models for
thermal radiation, gas outflow rates, jet flame lengths, etc. However, following experience in
the offshore industry, it is recognised that in some cases more complex analyses are required to
be able to characterise the consequences. Even detailed computational fluid dynamics analyses
and other sophisticated models do not compare well with full scale tests. The main reason is
that the results are very sensitive to the input parameters, such as the precise description of
surrounding geometry, etc.
Increasingly, there is awareness of the nonlinearity of the physical process, where accurate
prediction of physical characteristics of events such as peak explosion overpressure may never
be possible with meaningful certainty [69]. Thus, for explosions and some other phenomena, it
may be necessary to move away from a deterministic approach to consequence modelling; in the
offshore industry considerable attention is now being given to evaluating how such scenarios
may be included probabilistically in QRA models.
8.3.5 The pro forma approach
The increasing trend towards formalised risk assessment should ensure that engineers address
risks. However, the excessive reliance on codification and oversimplification may actually lead
engineers to address risks with a narrow mind [70]. The HSE report on the Heathrow tunnel
collapse [71] highlights the danger with its observation: 'The pro forma approach [to risk
assessment] kept the focus on routine worker safety. It did not encourage the strategic
identification of high-level engineering issues essential to the success of NATM, such as the
major hazard events and their prevention.' These comments are directed at risk assessment in
construction, but they are equally applicable to design and reassessment.
8.3.6 Completeness
There can never be a guarantee that all accident situations, causes and effects have been
considered. Indeed, many famous failures have occurred because the scenario was not, and
could not realistically have been, envisaged.
It is therefore important to undertake rigorous hazard identification studies.
9. GUIDELINES FOR RELIABILITY AND RISK ANALYSIS
9.1 SUMMARY
This Chapter presents guidelines for regulators and industry to assist in assessing work which
incorporates risk and reliability-based analysis and arguments. The guidelines may also be of
use to consultants in how to undertake and present risk and reliability analysis.
To make effective use of the guidelines requires a basic level of understanding of, and
familiarity with, the principles of risk and reliability analysis; cross-references are given to
explanations elsewhere in this report, where references to more comprehensive accounts may
be found; a Glossary of terms is also given in Chapter 10. However, to answer some of the
questions, and to fully appreciate the implications of the methodology and assumptions,
specialist support may be needed.
The most important considerations when assessing or presenting an analysis, particularly one
based on probabilistic arguments, are that:
the basis of the analysis and all of the assumptions are clearly explained and
understood,
when comparisons are undertaken they are made on a compatible basis, e.g.
annual failure probabilities with annual failure statistics, actuarial statistics with 'true'
reliabilities, target reliabilities for the system with system reliabilities, etc.
9.2 GUIDELINES
Guidelines for the most important requirements that should always be present in any reliability
analysis or reliability-based risk analysis are presented in Figure 9.1. These should be
considered mandatory requirements, which should be satisfactorily answered before the
outcome is accepted.
The guidelines are expanded and explained in more detail in Figure 9.2; depending on the
circumstances, some of the second level points may not be applicable.
The guidelines have been used in the assessment of the case studies.
[Flowchart: the following questions are considered in sequence; a 'no' leads to the action shown, a 'yes' to the next question.]
Is the problem clearly explained, adequately defined, and well understood? If not: seek further explanation or definition.
Does the analysed problem provide a complete solution to the real physical problem? If not: question misgivings and relevance of the solution, and the significance of other effects.
Is the failure function modelling adequate for predicting failure? If not: question accuracy of failure function modelling.
Is the probabilistic modelling of the basic variables adequately justified, and have all sources of uncertainty, including model uncertainty, been considered? If not: question adequacy and completeness of probabilistic modelling.
Is the reliability analysis methodology adequate, and has the correct mathematical solution been found? If not: question whether independent checks have been undertaken.
Have sensitivity analyses been undertaken, and are the results presented? If not: determine which are the most sensitive variables and parameters entering the problem, and check the adequacy of their modelling.
Have all the consequences of failure been adequately considered? If not: question the assessment of the consequences of failure.
Does the stated acceptance criterion represent a reasonable and responsible level of safety, and, bearing in mind the confidence in the reliability analysis, is it adequately satisfied? If not: question confidence in the result, and the acceptability of the solution.
Are the answers to the above questions yes, and are there no remaining doubts or misgivings about the validity of the outcome of the analysis? If yes: accept. If not: return to the beginning, in case new information and changes impact on anything else.
Figure 9.1 Level 1 Mandatory guidelines
Is the problem clearly explained, adequately defined, and well understood?
Is the failure event(s) that has been considered clearly defined and understood? If not: seek further explanation or definition.
Is it clear what definition of failure has been used, i.e. Serviceability or Ultimate Limit State, leak or rupture, etc.? (see Section 3.4.2) If not: seek further explanation or definition.
Is it clearly understood what the reference period is for the reliability analysis, and is this applied consistently throughout the analysis? (see Section 4.4) If not: seek further explanation or definition.
Is it clear whether the event(s) corresponds to part of a failure sequence, or complete failure of a component or pressure system? If not: seek further explanation or definition.
Does the analysed problem provide a complete solution to the real physical problem?
Has a formal hazard identification process been undertaken? (see Sections 5.3 and 5.4) If not: question whether one is needed.
Are there other hazards and/or failure events that may have significant failure probabilities or consequences? Have high consequence (low probability) events been considered? If not: question the significance of other events, and question the controls to limit handling/operations/maintenance errors.
Has the most significant failure mode(s) for the event(s) been considered? If not: question significance of other modes.
Have time-varying effects been considered, i.e. fatigue, corrosion, etc.? (see Section 5.10) If not: question significance of (other) time-varying effects.
Are there any other significant combinations of loads, e.g. pressure/temperature combinations? If not: question significance of (other) load combinations.
Figure 9.2 Level 2 Detailed guidelines
Is the failure function modelling adequate for predicting failure?
Is the failure function based on generally accepted principles and assumptions? (see Section 5.6) If not: question failure function modelling.
Is the accuracy of the failure function adequate? If not: question accuracy of failure function.
Is the failure function valid throughout the basic variable region, in particular at the beta-point(s) (if using FORM/SORM)? If not: question limitations of failure function modelling.
Are there physical limitations or other mechanisms that may affect regions of the basic variable space, in particular the failure region? If so: question limitations of failure function modelling.
Is the probabilistic modelling of the basic variables adequately justified, and have all sources of uncertainty, including model uncertainty, been considered?
Are all of the terms in the failure function clearly defined and explained? If not: seek clarification of failure function modelling.
Have all of the random variables been identified? Is the uncertainty low for all of the variables treated as deterministic? (see Section 5.7) If not: question completeness of basic variable modelling.
Is the basis of the probabilistic modelling for each basic variable adequately defined? If not: seek clarification of basic variable modelling.
For each basic variable, has there been sufficient effort to identify sources of applicable data? Are the data sources unbiased? Are the data representative of the problem? Have the data been screened? If not: question basic variable modelling.
Are any of the variables correlated, and has the correlation been adequately considered? If not: question basic variable modelling.
Has the modelling of the model uncertainty variable(s) been adequately justified? (see Section 8.2.1) If not: question basic variable modelling.
Figure 9.2 (continued) Level 2 Detailed guidelines
Is the reliability analysis methodology adequate, and has the correct mathematical solution been found?
Has an accepted method been used to evaluate the failure probability, i.e. FORM, SORM or Monte Carlo? (see Section 5.8) If not: question reliability analysis methodology.
Has a commercial reliability program been used, or has the software been adequately validated? If not: question reliability analysis software.
Have independent checks been undertaken (using an alternative method)? Has the accuracy of FORM results been checked with SORM or Monte Carlo, etc.? If not: question reliability analysis.
Has the robustness of the solution been demonstrated, i.e. by varying the input parameters slightly, by using different (valid) distribution types, etc.? If not: question reliability analysis.
If FORM/SORM: has convergence to the 'correct' answer been checked (by using alternative search methods, starting positions, etc.)? If not: question reliability analysis.
If FORM/SORM: have the values of the variables at the beta-point (design-point) been presented? (see Section 5.8.2) If not: ask for reliability output information.
If FORM/SORM: are the values of the basic variables at the beta-point physically feasible? If not: question validity of the reliability analysis.
Have sensitivity analyses been undertaken, and are the results presented?
Have basic variable sensitivities (alpha coefficients) been presented and considered? If not: ask for sensitivity information.
Is the ranking of the basic variable sensitivities as expected? If not: question validity of the reliability analysis.
Have parametric sensitivities been evaluated, presented, and discussed? If not: ask for sensitivity information.
Is there sufficient confidence in the modelling of the most sensitive parameters/variables? If not: question basic variable modelling.
Figure 9.2 (continued) Level 2 Detailed guidelines
Have all the consequences of failure been adequately considered?
Have sufficient analyses been undertaken to assess the consequences of failure? (see Sections 5.12 and 7.5) If not: question consequence analysis.
Can an event escalate, i.e. have high consequence (low probability) events been considered? If not: question consequence analysis.
Have future growth/changes been considered in the consequence analysis? If not: question consequence analysis.
Does the stated acceptance criterion represent a reasonable and responsible level of safety, and, bearing in mind the confidence in the reliability analysis, is it adequately satisfied?
What is the basis of the acceptance criterion? Does it represent a significant change from safety levels currently accepted for similar systems, and in other industries? (see Section 5.11) If so: question acceptance criterion.
Does the acceptance criterion fully consider the possible consequences of failure? If not: question acceptance criterion.
Is the basis of the acceptance criterion compatible with the analysis approach, i.e. if the acceptance criterion is based on actuarial statistics, has the 'true' probability(s) been evaluated? (see Section 8.2.5) If not: question acceptance criterion.
Bearing in mind the confidence in the analysis results, the influence of unknown or ignored effects, human errors, etc., is there an adequate margin between the acceptance criterion and the evaluated results? If not: question confidence in the result, and the acceptability of the solution.
Are there still doubts or misgivings about the validity of the outcome of the analysis?
Consider the question: is it likely that a competent engineer with knowledge of reliability analysis would have achieved a different outcome? Go through the flowchart again if you think the answer is likely to be yes.
Figure 9.2 (continued) Level 2 Detailed guidelines
10. GLOSSARY
Basic variable A set of variables entering the failure function equation to
define failure. They may include basic engineering
parameters, such as wall thickness, yield stress, etc., as
well as model uncertainty in the failure function itself.
Beta-point, β-point (design-point) The point on the failure surface that is closest to the
origin in U-space. It is also the point with maximum
probability density, and the values of the basic variables at
this point represent the most probable values to cause
failure.
CoV (Coefficient of Variation) The ratio of standard deviation to mean value of a
variable.
Expected value, E[ ] The mean value of a variable. It is defined as the first
moment of the distribution function of a variable, and is
evaluated from the distribution function f_X(x):
E[X] = ∫ x f_X(x) dx
Failure function, Z The failure function in a reliability analysis is a
mathematical function used to predict the failure event
for a component, part of a structure, or a structural
system. The failure function is expressed in terms of the
basic variables, and is defined such that Z ≤ 0
corresponds to failure.
Limit State design A design method in which requirements are defined for
structural performance or operation. Such requirements
may include Ultimate (ULS) and Serviceability (SLS)
Limit States. Limit States can be defined as a specified
set of states that separate a desired state from an
undesirable state which fails to meet the design
requirements.
Model uncertainty The inherent uncertainty associated with the
mathematical models used to predict resistance (and
loading).
Probability of failure, P_f The probability of failure of an event is the probability
that the limit state criterion or failure function defining
the event will be exceeded in a specified reference
period.
Probability density function, pdf The probability that a random variable X shall appear in
the interval [x, x+dx] is f_X(x) dx, where f_X(x) is the
probability density.
Reference period Reliabilities and probabilities of failure should be defined
in terms of a reference period, which may typically be
one year or the design life.
Reliability The probability that a component will fulfil its design
purposes. Defined as 1 − P_f.
Reliability analysis There are a number of techniques to evaluate failure
probability, or reliability. These include: numerical
integration, iterative procedures to evaluate first- or
second-order estimates of P_f, Monte Carlo simulation and
a number of variance reduction techniques.
Reliability Index, β A useful measure for comparing values of P_f. It is defined
using the standard normal distribution function Φ( ):
β = Φ⁻¹(1 − P_f)
Sensitivity coefficient, α-factors The sensitivity coefficients reflect how sensitive the
reliability is to the basic variables. The term importance
factors is sometimes used; importance factors are defined
as the square of the α-factors.
Standard deviation, Sd[ ] The standard deviation is defined as the square root of the
Variance of a variable.
Standard normal space, U-space A space of independent normally distributed random
variables with zero mean and unit standard deviation.
Basic variable space is transformed into standard normal
space in some reliability analysis procedures.
Target A target probability is used to judge reliabilities. It may
be defined by using data from designs known to perform
satisfactorily, by expert judgement, by value analysis, or
taken from norms in standards.
Variance, Var[ ] The variance of a variable is defined as the second central
moment of the distribution function of a variable, and is
evaluated from the distribution function f_X(x):
Var[X] = ∫ (x − μ_X)² f_X(x) dx
where μ_X is the mean or expected value.
11. REFERENCES
1 Ditlevsen O & Madsen H O. Structural reliability methods. John Wiley & Sons,
Chichester, 1996.
2 Pugsley A G. The safety of structures. Arnold, London, 1966.
3 Freudenthal A M. The safety of structures. ASCE Transactions, Vol. 112, 1947.
4 Ditlevsen O & Madsen H O. Proposal for a Code for the Direct Use of Reliability
Methods in Structural Design. Joint Committee on Structural Safety, CEB CECM
CIB FIP IABSE IASS RILEM, 1989 (reprinted in [1]).
5 Fairbairn W. An account of the construction of the Britannia and Conway tubular
bridges. London, 1849.
6 Heyman J. Structural analysis A historical approach. Cambridge University Press,
1998.
7 Report on structural safety. Journal of Institution of Structural Engineers, London,
1955.
8 Ferry Borges J. Basic concepts of structural design. In Probabilistic Methods for
Structural Design, Edited by C Guedes Soares, Kluwer, Dordrecht, 1997.
9 Thoft-Christensen P & Baker M J. Structural reliability theory and its
applications. Springer-Verlag, Berlin, 1982.
10 Comité Européen du Béton. Recommendations for an International Code of Practice
for reinforced concrete. Cement and Concrete Association, London, 1964.
11 Joint Committee on Structural Safety, CEB CECM CIB FIP IABSE IASS
RILEM. International System of Unified Standard Codes for Structures. Vol. 1,
Common Unified Rules for Different Types of Construction and Material. CEB/FIP,
1978.
12 ISO 2394. General Principles on Reliability for Structures, June 1998.
13 Ellingwood B, Galambos T V, MacGregor J G & Cornell C A. Development of a
probability based load criterion for American National Standard A58. National
Bureau of Standards, NBS 577, June 1980.
14 Ravindra M K & Galambos T V. Load and Resistance Factor Design for steel. J
Struct Div, ASCE, Vol. 104, No. ST9, September 1978.
Significant references are shown in bold.
15 NPD. Regulations for the Design of Fixed Structures on the Norwegian Continental
Shelf. Norwegian Petroleum Directorate, Stavanger, 1977.
16 CAN/CSA-S471. Code for the Design, Construction and Installation of Fixed
Offshore Structures: General Requirements, Design Criteria, the Environment, and
Loads. Canadian Standards Association. June 1992.
17 Moses F. Program notes prepared in cooperation with API PRAC-22 Project tutorial
RP2A-LRFD. March 1989
18 API RP2A-LRFD. Recommended Practice for planning, designing, and constructing
fixed offshore platforms Load and Resistance Factor Design. American Petroleum
Institute, Washington DC, 1st Edition, August 1993.
19 ISO Code 13819-1, Petroleum & Natural Gas Industry - Offshore Structures, Part 2,
Fixed Steel Structures, Draft, May 1999. (To be redesignated ISO 19902).
20 DNV CN 30.6. Structural reliability analysis of marine structures. Det Norske
Veritas. July 1992.
21 Joint Committee on Structural Safety. Probabilistic Model Code.
http://www.jcss.ethz.ch. March 2001.
22 Moan T & Holland I. Risk assessment of offshore structures experiences and
principles. Proc 3rd ICOSSAR, 1981.
23 NPD. Guidelines for Safety Evaluation of Platform Conceptual Design. Norwegian
Petroleum Directorate, Stavanger, 1981.
24 Cullen, The Hon. Lord. The Public Inquiry into the Piper Alpha Disaster. HMSO,
London, 1990.
25 Blockley D I. The nature of structural design and safety. Ellis Horwood.
Chichester, 1980.
26 DNV Rules for the Classification of Fixed Offshore Installations. Det Norske Veritas.
1998.
27 Joint Committee on Structural Safety, CEB CECM CIB FIP IABSE IASS
RILEM. First Order Reliability Concepts for Design Codes. CEB Bulletin No 112,
1976.
28 BS 5950 Part 1. Structural Use of Steelwork in Building. 1990.
29 ISO 2394. General Principles for the Verification of the Safety of Structures,
February 1983.
30 BS 5400 Part 1. Steel, Concrete and Composite Bridges. 1988.

95

31 Eurocode Pre-standard ENV 1991-1, Eurocode 1: Basis of design and actions on
structures, Part 1: Basis of Design, 1994 (CEN/TC 250).
32 DNV Rules for Submarine Pipeline Systems. Det Norske Veritas. December 1996;
with amendments and corrections, May 1998.
33 NORSOK Standard N-001. Structural Design. October 1997.
34 Draft Eurocode prEN 13445-3, Unfired Pressure Vessels, Part 3: Design. July 1999.
35 Oude Hengel J J M. Limit State design in Pipeline Codes. Proc. Conf. on Risk &
Reliability & Limit States in Pipeline Design & Operations, IBC Technical Services,
Aberdeen, May 1996.
36 Zimmerman, T. Limit State Design of Pipelines North American Developments.
Proc. Conf. on Risk Based & Limit State Design & Operation of Pipelines, IBC UK
Conferences Ltd., Aberdeen, 1997.
37 Kaye D. Optimisation of pipeline inspection using risk and reliability analysis.
Proc. Conf. on Risk & Reliability & Limit States in Pipeline Design & Operations,
IBC Technical Services, Aberdeen, 1996.
38 Thomas F G. Basic parameters and terminology in the consideration of structural
safety. CIB Bulletin No. 3, 1964.
39 Health and Safety at Work Act, 1974. HMSO, London, UK.
40 The Control of Major Accident Hazards Regulations, 1999. HMSO, London, UK.
41 The Pressure Systems Safety Regulations, 2000. SI 2000/128. HMSO, London, UK.
42 The Simple Pressure Vessels (Safety) Regulations, 1991. SI 1991/2749. HMSO,
London, UK.
43 The Simple Pressure Vessels (Safety) (Amendment) Regulations, 1994. SI
1994/3098. HMSO, London, UK.
44 The Pipelines Safety Regulations, 1996. SI 1996/825. HMSO, London, UK.
45 The Offshore Installations (Safety Case) Regulations, 1992. SI 1992/2885. HMSO,
London, UK.
46 The Offshore Installations (Prevention of Fire and Explosions and Emergency
Response) Regulations, 1995. SI 1995/743. HMSO, London, UK.
47 The Offshore Installations and Pipeline Works (Management and Administration)
Regulations, 1995. SI 1995/738. HMSO, London, UK.
48 The Offshore Installations and Wells (Design and Construction, etc) Regulations,
1996. SI 1996/913. HMSO, London, UK.
49 Matousek M. Outcome of a survey of 800 construction failures. Proc. IABSE
Colloq. on Inspection and Quality Control. Swiss Federal Inst. of Technology,
Zürich, 1977.
50 WOAD. Worldwide Offshore Accident Databank. Veritec, Oslo, 1990.
51 Beeby A W. Safety of structures, and a new approach to robustness. The Structural
Engineer, 77, No. 4, 1999.
52 Efron B. Bootstrap methods: another look at the jackknife. Annals of Statistics Vol
7, No. 1, 1979.
53 Hasofer A M & Lind N C. Exact and invariant second-moment code format. J Eng
Mech Div, ASCE, 100 (EM1), February 1974.
54 Ang A H S & Tang W H. Probability concepts in engineering planning and design.
Vol II, Decision, risk and reliability. John Wiley & Sons, New York, 1984.
55 Ditlevsen O. Uncertainty modelling. McGraw-Hill, New York, 1981.
56 Madsen H O, Krenk S & Lind N C. Methods of structural safety. Prentice-Hall,
Englewood Cliffs, NJ, 1986.
57 Melchers R E. Structural reliability analysis and prediction. Ellis Horwood,
Chichester, 2nd Edition, 1999.
58 Thoft-Christensen P & Murotsu Y. Application of structural systems reliability
theory. Springer-Verlag, Berlin, 1986.
59 CIRIA. Rationalisation of Safety and Serviceability Factors in Structural Codes.
Construction Industry Research and Information Association, Report 63, 1977.
60 Nordic Committee on Building Regulations. Recommendations for Loading and
Safety Regulations for Structural Design. NKB-Report No. 36, Nov 1978.
61 Sotberg T, Bruschi R & Mørk K. The SUPERB Project: Reliability-based Design
Guideline for Submarine Pipelines. Proc. 28th Offshore Technology Conf. (OTC),
Houston, 1996.
62 Kafka P. How safe is safe enough? An unresolved issue for all technologies.
Proc. of 10th ESREL Conf., Munich, September 1999.
63 API Draft RP580. Risk-based inspection. American Petroleum Institute,
Washington DC, Draft 1.1 Edition, October 1999.
64 Henderson P A. Engineering and managing a pipeline integrity programme. Proc
Conf on Risk & Reliability & Limit States in Pipeline Design & Operations, IBC
Technical Services, Aberdeen, 1996.
65 Benjamin J R, & Cornell C A. Probability, Statistics, and Decision for Civil
Engineers. McGraw-Hill, 1970.
66 Madsen H O, Skjong R K, Tallin A G, & Kirkemo F. Probabilistic fatigue crack
growth analysis of offshore structures with reliability updating through inspection.
Proc. of Marine Structural Reliability Symposium, SNAME, Arlington, Virginia,
1987.
67 Turner R C, Wicks P J, Bolt H M & Smith J K. Risk, reliability and cost
considerations in pipeline inspection a new approach. Proc. Conf on Risk Based &
Limit State Design & Operation of Pipelines, IBC UK Conferences Ltd., Aberdeen,
1997.
68 Palmer A. The limits of reliability theory and the reliability of limit state theory
applied to pipelines. Proc. 28th Offshore Technology Conf. (OTC), Houston, 1996.
69 Nishapati M. The acceptability and applicability of quantified risk assessment in the
next Millennium. Proc. 17th OMAE Conf., 1998.
70 SCOSS. Structural Safety 2000-01. The thirteenth report of SCOSS, The Standing
Committee on Structural Safety, The Institution of Structural Engineers, May 2001.
71 HSE. The collapse of NATM tunnels at Heathrow Airport. Health and Safety
Executive. HMSO, London, UK, 2000.
98
99
ANNEX A
CASE STUDY 1
PIPELINE DESIGN PRESSURE UPGRADE
CONTENTS
Page No.
1. INTRODUCTION AND OUTLINE OF SAFETY CASE 102
1.1 INTRODUCTION 102
1.2 OUTLINE OF THIS ANNEX 102
2. RELIABILITY ANALYSES 103
2.1 PREAMBLE 103
2.2 FAILURE MODES AND FUNCTIONS 104
2.2.1 Failure Modes 104
2.2.2 Critical Gouge Depth 105
2.2.3 Critical Gouge Length 106
2.3 INPUT PARAMETERS AND DISTRIBUTIONS 107
2.3.1 List of Parameters and Distribution Specification 107
2.3.2 Special Cases 107
2.4 ANALYSES 109
2.4.1 Software Used 109
2.4.2 Failure Surface 111
2.4.3 Analyses Performed 112
2.5 RESULTS OBTAINED 114
2.5.1 Baseline Analyses at Pressures of 7.0 and 9.0 MPa 114
2.5.2 Pressure variation 6.5 to 11.0 MPa 117
2.5.3 Changes in Characteristics of Probability Density Function for Gouge
Length 120
2.5.4 Changes in Characteristics of Probability Density Function for Gouge
Depth 121
2.5.5 Effects of Model Uncertainty 121
3. DISCUSSION AND CONCLUSIONS 155
3.1 INTRODUCTION 155
3.2 CORRESPONDENCE WITH GUIDELINES 155
3.2.1 Preamble 155
3.2.2 Problem Definition 155
3.2.3 Problem Analysis 156
3.2.4 Failure Function Modelling 156
3.2.5 Basic Variable Modelling 157
3.2.6 Reliability Analysis Methodology 157
3.2.7 Sensitivity Analyses 158
3.2.8 Analysis Outcome Validity 159
3.3 PROBLEM SPECIFICS 159
3.3.1 FORM versus SORM 159
3.3.2 Importance of Pressure Variation 160
3.3.3 Sensitivity and Robustness of Analysis Outcomes 160
3.4 CONCLUSIONS 162
4. REFERENCES 163
1. INTRODUCTION AND OUTLINE OF SAFETY CASE
1.1 INTRODUCTION
The purpose of this annex is to present the results of a case study into the reliability analysis of a
particular problem. The problem is the leak or rupture of an internally pressurised pipe
containing longitudinal gouges resulting from external interference. The external interference
may be impact from earth-moving equipment; the longitudinal gouges may or may not be
contained in a dent.
The particular question addressed (in reliability terms) is, given the possibility of this type of
hazard and the resulting damage, what are the changes in failure probability as internal pressure
is increased? In a sense, the particular probabilities of failure calculated are not the only
important outputs; other issues arise in relation to the guidelines for reliability analysis and
reliability-based risk analysis. The purposes of this case study, therefore, are not only to illustrate the use
of reliability techniques on the particular problem concerned, but also to bring out issues in
relation to the guidelines within a wider context.
1.2 OUTLINE OF THIS ANNEX
There are four sections to this annex, including this introduction. Tables and figures are
grouped at the end of each section.
Section 2 deals with the reliability analysis. The source problem is outlined in Subsection 2.1.
Subsection 2.2 then concentrates on the subset of the problem that is subject to the reliability
analysis in this case study. It describes the failure modes considered and their functions. The
input parameters and their respective data are described in Subsection 2.3; this includes the
specification of the variables assigned a statistical distribution. Special cases where non-
standard or "unusual" distributions are used are dealt with.
The actual analyses are covered in Subsection 2.4. This covers the proprietary reliability
analysis software used, along with a description and the results of a deterministic analysis of the
failure surface. These results are an extremely important aid to understanding the results of the
reliability analyses. A description of the set of reliability analyses performed is then given.
Finally in Subsection 2.5, the results of all the reliability analyses carried out are reported.
Discussion and conclusions are given in Section 3. This is split between a correspondence of
the activities in the case study with the guidelines, set out in Subsection 3.2; and issues related
to the specifics of the problem, in Subsection 3.3.
Section 4 contains the references.
2. RELIABILITY ANALYSES
2.1 PREAMBLE
The particular case study considered here is taken from a report produced by the then BG
Technology (now Advantica Technologies Limited) supplied to BOMEL by HSE [1]. The
document forms part of work related to a Joint Industry Project (JIP) to develop guidance for
limit state, reliability and risk-based design and assessment of onshore pipelines. For brevity,
throughout this report the BG Technology document will be referred to as the "source
document".
The source document describes an example of the application of limit state, reliability and risk-
based design techniques to the uprating of onshore pipelines. The work stemmed specifically
from a study of the feasibility of uprating approximately 400 km of high-pressure gas
transmission pipelines. The pipelines under consideration are of 914.4mm outside diameter x 12.7mm wall
thickness, API 5L grade X60 material. At the time of the study the pipelines operated at a
design pressure of 70 barg (7 MPa) and the proposal was to increase the design pressure to 85
barg (8.5 MPa). This would involve stepping outside of current design rules by increasing the
maximum design factor from the current allowable of 0.72 to a value of 0.78.
The approach taken in the source document involves six basic elements:
Establishment of the limit states to be considered
Identification of failure modes that could lead to the limit states
Construction of limit state functions
Data analysis and the construction of appropriate probability density functions
Evaluation of failure probabilities
Assessment of the results.
The limit state considered is stated to be the ultimate limit state as related to a failure involving
a loss of containment and release of gas. This is associated with safety consequences [ie. risk];
leaks and ruptures are considered.
In discussing failure modes, the source document refers to hazards that could credibly lead to
the limit state. Hazards such as stress corrosion cracking, hydrogen-induced cracking, internal
corrosion and construction defects are dismissed on qualitative and semi-qualitative
deterministic arguments not given in the source document. The hazards of external interference
and external corrosion are considered to be the most significant.
Failure probabilities associated with the loss of containment are computed by direct integration,
performed numerically. This is where the volume under the joint probability density function of
the characterising variables, that lies within the failure region defined by the limit state (or
failure) functions, is determined.
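This kind of numerical integration can be illustrated with a minimal Monte Carlo sketch. The two normal variables below are purely illustrative and are not taken from the source document; the sketch simply estimates the volume of the joint density lying in the failure region by sampling.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Illustrative basic variables (not the source document's data): a
# resistance r and a load s, with failure defined by g = r - s < 0.
r = rng.normal(10.0, 1.0, n)   # resistance: mean 10, standard deviation 1
s = rng.normal(6.0, 1.5, n)    # load: mean 6, standard deviation 1.5

# The failure probability is the volume under the joint probability
# density lying inside the failure region, estimated here by the
# fraction of sampled points that fall in that region.
pf = np.mean(r - s < 0.0)
print(f"estimated Pf = {pf:.3e}")   # exact value is Phi(-4/sqrt(3.25)), about 1.3e-2
```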
Assessing results in the particular instance in the source document does not involve comparing
calculated failure probabilities against acceptance values in order to determine acceptability of a
combination of design and operating conditions. Rather, assessment is based on broad
consideration of the information generated, taking into account the sensitivity of the failure
probability to particular parameters, and making comparisons with other similar situations. The
approach adopted in the uprating study is to demonstrate that the change in the total calculated
failure probability due to the pressure increase is not significant in safety terms. From the
regulator's viewpoint it must be shown that the change in risk does not increase to an
unacceptable level.
It must be clearly stated that the purpose of the case study here is not to provide a detailed
critical appraisal of the methodology in the source document, nor to perform an intensive check
of the results obtained. The objective is to take a subset of the problem in the case study, along
with the associated failure functions and information on the uncertainty of the input variables,
and apply reliability methodology independently in a manner that it might be reasonably
expected to be done. The purpose is to illustrate the use of techniques applied to the particular
problem considered, whilst in the wider context bringing out issues highlighted in the guidelines
in the main body of this report regarding use and abuse of probabilistic methods.
2.2 FAILURE MODES AND FUNCTIONS
2.2.1 Failure Modes
The failure modes considered in this case study stem from damage due to external interference.
Such external interference may result in either a puncture of the pipe wall or in a dent and / or
gouge in the pipe wall. The metal loss, and associated stress concentration and intensification,
corresponding to a gouge may result in failure of the pipe wall ligament under internal pressure
loading.
The case study concentrates on the gouging and / or denting failure mechanism. Gouge / dent
defects are characterised by gouge depth and length, and dent depth (which may be zero in a
situation where a gouge occurs without a dent). Three consequences may arise from the
presence of a gouge / dent defect:
The gouge depth may be of a sufficient magnitude to grow rapidly to a through
thickness defect, whereupon a leak will occur.
If, in addition to this, the length of the gouge exceeds a critical value then rupture will
occur.
Neither leak nor rupture will occur if the gouge depth and length are less than their
respective critical values (although there may be fatigue implications, not considered
here, resulting from pressure cycling).
It is evident that to cover the domain of the problem two failure functions are required, relating
to:
Critical gouge depth
Critical gouge length.
These are dealt with in the following two subsections.
2.2.2 Critical Gouge Depth
Gouges situated in dents are assessed using a fracture mechanics approach that assumes that a
gouge behaves as a crack. The failure function is given by:
$$K_r = S_r \left[ \frac{8}{\pi^2} \ln \sec\!\left( \frac{\pi S_r}{2} \right) \right]^{-1/2}$$
where K_r and S_r are dimensionless material toughness and stress ratio parameters, respectively.
Their compositions are explained below. Before that, it is more convenient to rearrange this
function and note that the depth of a gouge exceeds a critical value if:
$$\cos\!\left( \frac{\pi S_r}{2} \right) - \exp\!\left( -\frac{\pi^2 S_r^2}{8 K_r^2} \right) < 0$$
K_r and S_r are given by:
$$K_r = \frac{\left[ Y_m(a, w)\,\sigma_m + Y_b(a, w)\,\sigma_b \right] \sqrt{\pi a}}{K_{IC}}$$
$$S_r = \frac{\sigma_m}{\sigma_f} \cdot \frac{1 - \dfrac{a}{Mw}}{1 - \dfrac{a}{w}}$$
In these equations σ_m is a membrane stress given by:
$$\sigma_m = \sigma_h \left( 1 - 1.8\,\frac{D}{2R} \right)$$
σ_b is a bending stress, present due to a dent of depth D, given by:
$$\sigma_b = 10.2\,\sigma_h\,\frac{D}{2w}$$
and σ_h is the hoop stress in the pipe wall, given by:
$$\sigma_h = \frac{PR}{w}$$
The gouge depth is denoted by a, and R and w denote the pipe radius and wall thickness,
respectively. The quantities Y_m and Y_b are functions of gouge depth and pipe wall thickness,
and are normalised stress intensity factors for an edge-cracked strip in tension and bending,
respectively. They are given by:
$$Y_m = 1.12 - 0.23\left(\frac{a}{w}\right) + 10.6\left(\frac{a}{w}\right)^2 - 21.7\left(\frac{a}{w}\right)^3 + 30.4\left(\frac{a}{w}\right)^4$$
$$Y_b = 1.12 - 1.39\left(\frac{a}{w}\right) + 7.3\left(\frac{a}{w}\right)^2 - 13.0\left(\frac{a}{w}\right)^3 + 14.0\left(\frac{a}{w}\right)^4$$
The Folias factor M is given by:
$$M = \left[ 1 + 0.26\,\frac{L^2}{Rw} \right]^{1/2}$$
where L is the gouge length. The material flow stress σ_f is expressed in terms of the material
yield and ultimate strengths σ_y and σ_u as:
$$\sigma_f = \varphi\,(\sigma_y + \sigma_u)$$
where φ is a flow stress parameter.
The material fracture toughness, K_IC, is found from the Charpy energy using the following
correlation:
$$K_{IC} = \left( \frac{E\,C_{v0}}{A} \right)^{1/2} \left( \frac{C_v}{C_{v0}} \right)^{1/(2b)}$$
where E is Young's modulus of the material, A is the area of the Charpy test specimen, b is a
dimensionless parameter, and C_v0 is a reference Charpy energy.
2.2.3 Critical Gouge Length
If the depth of a gouge exceeds a critical value, then a through-wall defect results. If the length
of such a through-wall defect exceeds a critical value, then rupture will occur. Such a situation
obtains if:
$$\sqrt{ \frac{Rw}{0.4} \left[ \left( \frac{1.15\,\sigma_y}{\sigma_h} \right)^2 - 1 \right] } - L < 0$$
That is, the term within the square-root sign can be interpreted as representing a critical length
L_c.
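As a rough check on this expression as reconstructed here, it can be evaluated with all variables at their mean or deterministic values from Table 2.1 (an illustrative calculation, not one taken from the source document):
$$\sigma_h = \frac{PR}{w} = \frac{7.0 \times 457.2}{12.8} \approx 250\ \text{MPa}$$
$$L_c = \sqrt{ \frac{457.2 \times 12.8}{0.4} \left[ \left( \frac{1.15 \times 445.9}{250} \right)^2 - 1 \right] } \approx 217\ \text{mm}$$
This is consistent with the critical gouge length of 215.5mm quoted for 7 MPa in Subsection 2.5.1, the small difference reflecting the precise variable values at which that figure was computed.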
It is worthwhile noting that, whilst the failure function for gouge depth incorporates both plastic
collapse and fracture (i.e. incorporates both flow stress and fracture toughness), the failure
function for gouge length appears to only involve collapse. The source document does not
provide an explanation of why this is the case. Analysis in this case study has proceeded using
the failure functions as given.
2.3 INPUT PARAMETERS AND DISTRIBUTIONS
2.3.1 List of Parameters and Distribution Specification
The parameters to be used as inputs to the failure functions given above are summarised in
Table 2.1. All variables have been assumed to be independently distributed, ie. uncorrelated.
For the most part, the variables are treated as either deterministic, or described using standard
distributions (normal, lognormal or Weibull). The two notable exceptions to this are the gouge
length L and dent depth D.
The gouge length is described by an offset logistic distribution; this is not commonly used and
may not be available as a standard option in reliability analysis software. Its presence offers the
opportunity to illustrate, within this case study, how this situation is dealt with.
The dent depth has to be accommodated by a bespoke distribution that allows for the fact that
gouges can be observed without the presence of dents. This has to be treated as a special case,
along with gouge length, and this is done in the following subsection.
2.3.2 Special Cases
Modelling non-standard distributions may be achieved in a standard manner via a
transformation and involving a standard normal distribution in the following way.
For the cumulative form of the distribution concerned consider the following identity:
$$F(x) = \Phi(u)$$
where x is the variable concerned (gouge length or dent depth). The inverse of this is found:
$$x = F^{-1}(\Phi(u))$$
where u is a standard normally distributed variable, ie. with mean 0.0 and standard deviation 1.0.
This has the effect of forcing the variable x to adopt the required probability density function.
(a) Gouge Length – Offset Logistic Distribution
The cumulative distribution function for the offset logistic probability density function, F_OL(x),
is defined by the three parameters λ_L, x_L and σ_L given in Table 2.1, and its inverse,
$$x = F_{OL}^{-1}(X)$$
is available in closed form.
To test this algebraic manoeuvre, it is instructive to use a random number generator to
manufacture values of x by selecting from the cumulative standard normal distribution for X,
and plotting the resulting histogram of x against the required probability density function.
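A minimal sketch of this check is given below. For illustration a plain two-parameter logistic distribution stands in for the offset logistic, and the loc and scale values are placeholders rather than the Table 2.1 parameters; the mechanics of the transformation are the same.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Illustrative check of the transformation x = F^{-1}(Phi(u)), here using
# a plain two-parameter logistic distribution in place of the offset
# logistic; loc and scale are placeholders, not the Table 2.1 values.
loc, scale = 25.0, 30.0
u = rng.standard_normal(100_000)          # standard normal samples
X = stats.norm.cdf(u)                     # Phi(u) is uniform on (0, 1)
x = loc - scale * np.log((1.0 - X) / X)   # closed-form logistic inverse CDF

# Compare a histogram of the transformed samples with the target density;
# agreement confirms that x has adopted the required distribution.
hist, edges = np.histogram(x, bins=100, density=True)
mids = 0.5 * (edges[:-1] + edges[1:])
pdf = stats.logistic.pdf(mids, loc=loc, scale=scale)
print(f"max density discrepancy: {np.abs(hist - pdf).max():.4f}")
```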
The results of this exercise, for 100000 values of X, are given in Figure 2.1, where λ_L, x_L and
σ_L take the values given in Table 2.1. The equality between the probability density function and the
transformation given above is confirmed.
(b) Dent Depth – Bespoke Distribution
A bespoke distribution, given in the source document, is necessary to cater for the fact that in
external interference incidents not all cases result in a dent. The figures quoted in the source
document are that in 82% of cases plain gouges occur (ie. gouges with zero dent depth) and in
18% of cases gouges occur with a dent. This is expressed as a probability density function of
dent depth D as follows:
$$p(D) = 0.18\,p_{cond}(D) + 0.82\,\delta(D)$$
where p_cond(D) is the probability density function given that an incident has occurred, and
δ(D) is the Dirac delta function.
The conditional probability density function is taken as a Weibull distribution with shape and
scale parameters (α_D and β_D, respectively) of values appropriate to the magnitude of internal
pressure (see Table 2.1).
The Dirac delta function has the following properties:
$$\delta(x) = 0, \quad \text{if } x \neq 0$$
$$\int_{-\infty}^{\infty} \delta(x)\,dx = 1$$
It is seen that the use of this function facilitates a finite probability of 0.82 that zero dent depths
occur.
Adopting the previous notation regarding the transformation of coordinates, the cumulative
distribution function is written as:
$$X = 0.82 + 0.18\left[ 1 - \exp\!\left( -\left( \frac{x}{\beta_D} \right)^{\alpha_D} \right) \right]$$
It is clear that with x = 0 (zero dent depth) the cumulative distribution yields a value of 0.82, as
required. This lends a discontinuous nature to the inverse function, which is written as:
$$x = 0, \quad \text{for } X \le 0.82$$
$$x = \beta_D \left[ \ln\!\left( \frac{0.18}{1 - X} \right) \right]^{1/\alpha_D}, \quad \text{for } X > 0.82$$
In a similar manner to the distribution considered in (a), above, the continuous probability
distribution function was compared with the PDF generated from the transformation given
above, using a cumulative standard normal distribution for X. The results are given in Figure
2.2, for α_D = 0.9 and β_D = 4.49 mm, where it is seen that good agreement is obtained.
It is noted that, as an alternative to this, the problem could be solved by first calculating the
conditional probabilities of failure for zero and non-zero dent depths. The required probability
would be the sum of 0.82 times probability of failure with D = 0 plus 0.18 times the probability
of failure with D ≠ 0. This would, of course, double the number of analyses to be carried out for
each case considered.
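A minimal sketch of the bespoke dent depth transformation is given below, using the distribution as reconstructed above: a finite probability of 0.82 at D = 0, plus a Weibull tail conditional on a dent occurring, with the shape and scale values from Table 2.1 for the 85 barg case.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Sketch of the bespoke dent depth transformation: a finite probability
# of 0.82 at D = 0, plus a Weibull tail conditional on a dent occurring.
# Shape and scale follow Table 2.1 as reconstructed here (0.9 and
# 4.49 mm for the 85 barg case).
alpha_D, beta_D = 0.9, 4.49

DD = rng.standard_normal(100_000)    # the standard normal variable D_D
X = stats.norm.cdf(DD)               # Phi(D_D)

D = np.zeros_like(X)                 # X <= 0.82: plain gouge, zero dent depth
dent = X > 0.82
D[dent] = beta_D * np.log(0.18 / (1.0 - X[dent])) ** (1.0 / alpha_D)

print(f"fraction with zero dent depth: {np.mean(D == 0.0):.3f}")  # close to 0.82
```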
(c) Summary of Special Cases
To summarise the handling of the two non-standard probability density functions, it may be
stated that two new functions for gouge length L and dent depth D are introduced as follows:
$$L = F_{OL}^{-1}\big(\Phi(N)\big)$$
with the condition that L = 0 if N < 1.
$$D = 0, \quad \text{if } \Phi(D_D) \le 0.82$$
$$D = \beta_D \left[ \ln\!\left( \frac{0.18}{1 - \Phi(D_D)} \right) \right]^{1/\alpha_D}, \quad \text{if } \Phi(D_D) > 0.82$$
As indicated in Table 2.1, λ_L, x_L, σ_L, α_D and β_D are assigned deterministic values, whereas N
and D_D are standard normal variables. The function Φ is the standard normal cumulative
distribution function.
2.4 ANALYSES
2.4.1 Software Used
The software used is SYSREL, which is marketed by RCP GmbH who are based in Munich [2].
SYSREL is part of a general suite of programs named STRUREL, which is general purpose
software that covers the preparatory steps, all computational tasks and post-processing options
in technical reliability, decision making under uncertainty, and statistical analysis. The suite of
programs has general application in the fields of engineering, operations research, financial
planning and statistics.
System reliability evaluation with multiple failure criteria is covered by SYSREL. System
modelling includes parallel systems in series, along with conditional events (observations). The
time-invariant and time-variant component reliability analyses can deal with arbitrary
dependence structures in the stochastic model. A large number of stochastic models for basic
variables is provided as standard inputs through specification of statistical parameters.
The main steps in conducting reliability analysis using SYSREL are as follows:
1. Specify failure functions
Define the various failure functions in the form f(x) such that, in deterministic terms:
f(x) < 0
indicates failure.
2. Define stochastic model
Specify the distribution types and parameters (mean, standard deviation and so forth)
for each of the basic variables (those assumed to be stochastically characterised).
Specify values for other variables that are assumed to be deterministically
characterised.
3. Define correlations
If there are any statistical correlations between any of the basic variables, these have to
be defined, and the magnitude of correlation coefficients provided.
4. Define parameter studies
This facility allows the effects on calculated reliability of changes in values of the
deterministic parameters to be examined. Lower bounds, upper bounds and increments
between these are set and corresponding values of reliability computed for each
increment.
5. Define logical model
This deals with the system aspects of the reliability calculations. The logical model is
formed from the logical connection (into series / parallel system(s)) of the components
of the system which are represented by their individual failure criteria. The
intersections and unions of failure events are specified.
6. Variables in failure criteria
This facility allows basic variables in the failure functions to be activated or de-
activated. By default, all basic variables are activated. When de-activated, they are
treated as deterministic and their mean values used in the reliability calculations.
7. Computation options
The computation options allow command to be exercised over the algorithmic control
parameters and all "flags" controlling the output. Principal among the computation
options are:
Method of probability calculation – whether FORM, SORM, or "crude" FORM is
used.
Ditlevsen bounds on final union – switches on or off evaluation of the bounds of
calculation of unions of intersections.
Convergence criteria for the β-point search.
Computation of sensitivities – specifies whether the sensitivities of the computed
reliability to the constants and / or stochastic model distribution parameters are to
be calculated.
8. Solution strategies
As the name suggests, solution strategies allow changes to be made to the manner in
which the β-point search is made.
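The essence of steps 1, 2 and 7 can be mimicked outside any proprietary package. The sketch below is a minimal FORM calculation (not SYSREL, and not the case study's failure functions): it locates the β-point as the point of the failure surface nearest the origin of standard normal space, reusing the illustrative resistance / load limit state from the Monte Carlo sketch in Subsection 2.1.

```python
import numpy as np
from scipy import optimize, stats

# Limit state g(u) in standard normal space for r ~ N(10, 1), s ~ N(6, 1.5);
# failure when g < 0. Illustrative only, not the case study's functions.
def g(u):
    r = 10.0 + 1.0 * u[0]
    s = 6.0 + 1.5 * u[1]
    return r - s

# FORM: the reliability index beta is the distance from the origin to the
# nearest point on g(u) = 0 (the beta-point); Pf is approximated by Phi(-beta).
res = optimize.minimize(
    lambda u: float(np.dot(u, u)),            # squared distance to the origin
    x0=np.array([-1.0, 1.0]),
    constraints=[{"type": "eq", "fun": g}],   # constrain u to the failure surface
)
beta = float(np.sqrt(res.fun))
print(f"beta = {beta:.3f}, Pf = {stats.norm.cdf(-beta):.3e}")
# For this linear limit state FORM is exact and reproduces the Monte Carlo
# estimate of about 1.3e-2.
```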
2.4.2 Failure Surface
In any reliability analysis a full understanding of the failure surface is essential in order that the
correct logical model is analysed by the software. To this end it is instructive to first define
three dimensionless parameters that define the failure surface: K_r, S_r and L_r. These are the
brittle fracture, plastic collapse and critical length parameters, respectively (where L_r = L / L_c,
see Subsection 2.2.3).
Table 2.2 sets out the variables that these parameters are functions of, categorised according to
whether they are load, geometrical, material or defect variables, ie.
F = F[{load}; {geometrical}; {material}; {defect}]
It is noted that the length and dent defect variables, L and D, are functions of further variables as
discussed in Subsection 2.3.2, but for the purposes of this discussion it is important to focus on
L and D rather than their "generating" variables.
To facilitate a full understanding of the failure surface, the parameters S_r, K_r and L_r are
presumed to be the x, y and z-axes in a three-dimensional space, and only positive values of each
of these are of any relevance. The failure surface in this space may be thought of as a uniform
cylinder oriented such that its generators are parallel to the L_r axis, with its cross-section defined
by the failure relationship between K_r and S_r (see Subsection 2.2.2). The ends of the cylinder
are closed by the planes L_r = 0 and L_r = 1.0. Any combination of {S_r, K_r, L_r} within this
cylinder is "safe" and any outwith it corresponds to failure. Given this geometry, it is
appropriate to examine response by viewing the failure surface along the L_r axis.
Figures 2.4, 2.5 and 2.6 do this for values of dent depth of 0, 5 and 10mm. In all cases, the
geometrical and material variables have been set to their respective mean or deterministic
values. The purpose of the graphs is to illustrate the response to varying pressure, P, and defect
dimensions D, L and a. Each figure is similar insofar as three values of pressure have been
taken, and gouge depth a has been varied for a series of fixed values of gouge length L that lie
below and above the critical length L_c (including a length of zero). Values of a have been taken
so as to "pierce" the failure surface when viewed along the L_r axis: the increments are 0.5mm in
each case, and the maximum value of a is indicated on the K_r axis of each figure.
What results, generically speaking, is a set of nonlinear rays emanating from a generating point
on the S_r axis. Points on the rays correspond to different values of a for fixed L, the highest
values of a being furthest from the generating point. Increasing the value of L has the effect of
increasing the slope of the ray with respect to the K_r axis (the ray corresponding to L = 0 being
parallel to it). Thus in general there will be a ray that corresponds to the critical length.
The principal effects of increasing the pressure are to:
Increase the value of S_r for the generating point
Reduce the slope of rays with respect to the K_r axis
Allow the failure surface to be pierced at smaller values of a.
The effects of increasing D are to:
Reduce the value of S_r for the generating point
Reduce the slope of the rays with respect to the K_r axis
Allow the failure surface to be pierced at smaller values of a.
The generating point is, in fact, the plastic collapse parameter for a zero gouge defect and is
given by:
$$S_r\big|_{a=0} = \frac{ \dfrac{PR}{w} \left( 1 - 1.8\,\dfrac{D}{2R} \right) }{ \varphi\,(\sigma_y + \sigma_u) }$$
From the above discussion, a generic failure diagram is derived and is shown schematically in
Figure 2.7. Through-thickness defects lie outside of the S_r – K_r failure diagram, and the
particular rays corresponding with L_r = 0 and L_r = 1 subdivide the failure region into leak and
rupture subregions.
Figure 2.3 shows plots of the failure function:
$$G(S_r, K_r) = \cos\!\left( \frac{\pi S_r}{2} \right) - \exp\!\left( -\frac{\pi^2 S_r^2}{8 K_r^2} \right)$$
corresponding to Figure 2.4 (ie. a dent depth of zero and pressure of 7 MPa). This confirms that
the failure region relates to the situation in which G becomes negative.
To bring this discussion to a conclusion, the logical model must be defined such that:
1. Leak occurs if: G ≤ 0 and L_r − 1 ≤ 0
2. Rupture occurs if: G ≤ 0 and 1 − L_r ≤ 0.
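A deterministic sketch of this logical model is given below, built from the failure functions as reconstructed in Subsection 2.2 with mean or deterministic values from Table 2.1 (units MPa, mm, mJ). The function name and the example defect values are illustrative only; the sketch simply classifies a given pressure / defect combination as safe, leak or rupture.

```python
import numpy as np

# Mean or deterministic values from Table 2.1 (MPa, mm, mJ).
R, w, phi = 457.2, 12.8, 0.5
sy, su, E = 445.9, 593.4, 207.0e3
Cv, Cv0, A, b = 55200.0, 112300.0, 53.55, 0.495

def classify(P, a, L, D=0.0):
    """Classify a gouge of depth a, length L in a dent of depth D at pressure P."""
    sh = P * R / w                             # hoop stress
    sm = sh * (1.0 - 1.8 * D / (2.0 * R))      # membrane stress
    sb = 10.2 * sh * D / (2.0 * w)             # dent bending stress
    Ym = 1.12 - 0.23*(a/w) + 10.6*(a/w)**2 - 21.7*(a/w)**3 + 30.4*(a/w)**4
    Yb = 1.12 - 1.39*(a/w) + 7.3*(a/w)**2 - 13.0*(a/w)**3 + 14.0*(a/w)**4
    M = np.sqrt(1.0 + 0.26 * L**2 / (R * w))   # Folias factor
    sf = phi * (sy + su)                       # flow stress
    Kic = np.sqrt(E * Cv0 / A) * (Cv / Cv0) ** (1.0 / (2.0 * b))
    Kr = (Ym * sm + Yb * sb) * np.sqrt(np.pi * a) / Kic
    Sr = (sm / sf) * (1.0 - a / (M * w)) / (1.0 - a / w)
    G = np.cos(np.pi * Sr / 2.0) - np.exp(-np.pi**2 * Sr**2 / (8.0 * Kr**2))
    if G > 0.0:
        return "no failure"                    # gouge depth below critical
    Lc = np.sqrt((R * w / 0.4) * ((1.15 * sy / sh) ** 2 - 1.0))
    return "rupture" if L >= Lc else "leak"

print(classify(P=7.0, a=2.0, L=50.0))     # shallow, short gouge: no failure
print(classify(P=7.0, a=11.0, L=250.0))   # deep gouge, longer than critical: rupture
```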
2.4.3 Analyses Performed
(a) Baseline analyses at pressures of 7.0 and 9.0 MPa
Initially, two baseline analyses are performed for internal pressures of 7.0 and 9.0 MPa (these
bracket the values taken in the source document of 1.06 times 70 barg and 85 barg, which are
equivalent to 7.42 MPa and 9.01 MPa, respectively). Despite the difference between the
pressure values used here and those in the source document, the values of α_D and β_D
corresponding to the 70 and 85 barg cases (see Table 2.1) are used. For reasons that become
apparent in the discussion of the results in Subsection 2.5.1, below, the values of α_D and β_D are
irrelevant. For each pressure, two SYSREL analyses are carried out: for leak and for rupture (see
Subsection 2.4.2, above). Second-order reliability methods (SORM) are used in each case
(because this method is usually more accurate than first-order linear methods).
(b) Pressure variation
The effects on calculated failure probabilities are also investigated for a continuous variation of
pressure between 6.5 and 11.0 MPa. Two sets of analyses are performed using SYSREL: for
leak and rupture. Both first- and second-order reliability methods are used (FORM and SORM,
respectively). However, as discussed in Section 3 of this annex, some of the results obtained
using SORM have been subsequently shown to be erroneous.
(c) Changes in characteristics of probability density function for gouge length
The effects of changes in the characteristics of the probability density function for gouge length
are investigated in the following way.
It will be remembered that the probability density function for the gouge length is taken as an
offset logistic distribution, and is handled in the present analyses in an indirect manner as set out
in Subsection 2.3.2(a), above. This is referred to as the "baseline" distribution.
Perturbations of this baseline distribution are produced by first generating a histogram of values
at 10mm intervals in the manner described in 2.3.2(a), and with the results summarised in
Figure 2.1. Two differing Weibull distributions are then fitted to this discrete data over the
following gouge length ranges:
0 to 400mm
10 to 400mm.
The fitting may be done via linear regression of Weibull plots. A Weibull plot is obtained from
taking the following for the abscissae and ordinates:
x = ln(L)
y = ln(−ln(1 − F_OL(L)))
where L is the gouge length and F_OL(L) is the cumulative distribution for the gouge length.
Data that follow a Weibull distribution should appear as a straight line when plotted in this way.
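A minimal sketch of such a fit is given below. It uses a synthetic Weibull sample rather than the histogram of Figure 2.8, and an empirical cumulative distribution in place of F_OL; the parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# Sketch of a Weibull-plot fit: for F(L) = 1 - exp(-(L/beta)**alpha) the
# ordinate y = ln(-ln(1 - F)) is linear in x = ln(L), with slope alpha
# and intercept -alpha*ln(beta). Synthetic sample, not the case study data.
L = 140.75 * rng.weibull(0.813, 5000)            # Weibull #1-like sample
L.sort()
F = (np.arange(1, L.size + 1) - 0.5) / L.size    # empirical CDF (midpoints)

x = np.log(L)
y = np.log(-np.log(1.0 - F))
slope, intercept = np.polyfit(x, y, 1)           # linear regression
alpha, beta = slope, np.exp(-intercept / slope)
print(f"fitted alpha ~ {alpha:.3f}, beta ~ {beta:.1f} mm")
```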
The data from the histogram are plotted as a continuous line on a Weibull plot in Figure 2.8.
This is evidently not a straight line; however an abscissa of 6 on this scale corresponds to a
gouge length of 403.4mm, whereas a value of 8 corresponds to 2981mm. Baseline SYSREL
analyses at internal pressures of 7 and 9 MPa are found to give gouge lengths within 400mm,
and so it was judged that Weibull fits up to 400mm are valid ways to perturb the baseline
distribution.
To this end, Figure 2.9 shows (in the upper part) Weibull plots of the linear fits to the data in
Figure 2.8 between 0 and 400mm, and 10 and 400mm gouge length. The lower part of the
figure shows the cumulative distributions for the offset logistic and the two fitted Weibull
distributions. There is little discernible difference between the three distributions. However
Table 2.3, which shows the statistical parameters associated with each of the three distributions
taken, illustrates the differences between them in terms of mean and standard deviation.
For each of the two fitted Weibull distributions, failure probabilities corresponding to leak and
rupture are calculated for a continuous variation of pressure between 6.5 and 11.0 MPa. In the
case of Weibull distribution #1 (see Table 2.3), both FORM and SORM are used. For Weibull
distribution #2, only SORM analyses are performed.
(d) Changes in characteristics of probability density function of gouge depth
The effects of changes in the characteristics of the probability density function for gouge depth
are investigated in a similar manner to gouge length described above.
First a discrete set of data is generated from the continuous baseline Weibull distribution to
form a histogram of values at 0.5mm intervals. A differing Weibull distribution is then formed
by fitting a straight line to the first 11.75mm of gouge depth on a Weibull plot. This is Weibull
#3 in Table 2.4, and the reasoning behind this choice is to limit gouge depth to be less than the
wall thickness (12mm) of the pipe. Table 2.4 also shows the statistical parameters of a further
Weibull distribution #4. The values of α and β for this are obtained by adding to the baseline
values the differences between the Weibull #3 and baseline values of α and β.
As above, for each of the Weibull distributions for gouge depth, failure probabilities
corresponding to leak and rupture are calculated for a continuous variation of internal pressure
between 6.5 and 11.0 MPa. In all cases SORM analyses are performed.
(e) Effects of model uncertainty
As a final investigation of perturbation of the baseline system, the effects of model uncertainty
are introduced. This is achieved by adding a basic variable X_m into the failure function G,
defined in Subsection 2.4.2, in the following way:
$$G(S_r, K_r) = \cos\!\left( \frac{\pi S_r}{2 X_m} \right) - \exp\!\left( -\frac{\pi^2 S_r^2}{8 X_m^2 K_r^2} \right)$$
The model uncertainty X
m
is assigned a normal distribution with a mean and standard deviation
that reflects the degree of accuracy to which the limit state function actually predicts failure.
The values taken for the means and standard deviations of the model uncertainty are
summarised in Table 2.5.
As above, for each of the normal distributions for model uncertainty, SORM reliability analyses
are performed for leak and rupture over a range of internal pressures between 6.5 and 11.0 MPa.
As can be seen, the model uncertainty has been applied principally to the through-thickness
failure function. There will be an implicit model uncertainty associated with the gouge length
failure function, which is not investigated here. Moreover, in the light of the discussion given in
Subsection 2.2.3, above, there may be further uncertainty (the effects of which cannot be
quantified without a modified or alternative failure function) associated with the apparent
absence of fracture modelling in the gouge length failure function.
2.5 RESULTS OBTAINED
2.5.1 Baseline Analyses at Pressures of 7.0 and 9.0 MPa
(a) Table 2.6
The first set of results from the baseline analyses to be discussed is given in Table 2.6. This
gives:
The failure probabilities
Beta-point values
Constraint gradients
for both pressures and for the failure domains of leak and rupture.
With regard to the failure probabilities it is seen that for a pressure of 7 MPa, the leak failure
probability is about 3 times the rupture failure probability. At a pressure of 9 MPa, however,
the rupture failure probability exceeds the leak failure probability by a factor of 2. The sum of
the two probabilities at 7 and 9 MPa is 0.010979 and 0.012699, respectively. (A calculation of
the probabilities associated with the union of leak and rupture events at each pressure gives
values of 0.009171 and 0.01068 for pressures of 7 and 9 MPa, respectively.) Thus, an increase
in pressure leads to an increased probability that a through-thickness defect will occur.
The beta-point values are given in Table 2.6, where the variables D_D and N (see Table 2.1)
have been transformed to their respective values of dent depth (D) and gouge length (L). The
first point to note is that the dent depth does not contribute to the failure probabilities at either
pressure, for leak or rupture situations; ie. dent-less gouges dominate.
To facilitate understanding of the significance of the beta-points, failure surface plots for
pressures of 7 and 9 MPa are given in Figures 2.10 and 2.11, respectively. In each case separate
plots for leak and rupture are given, and gouge depth rays are provided for the critical length
and the length corresponding to the beta-point (if different). Points on the rays are shown in
0.5mm increments and rays are carried beyond the failure surface for illustration purposes.
Large circles mark the beta-points in each case, and the values of gouge depth and length are
indicated on each figure. In all cases, as expected, the beta point is located precisely on the
failure surface.
Considering the pressure of 7 MPa first, (Figure 2.10), it is seen that, in the case of a leak, the
gouge length of the beta point (114.7mm) is less than that of the critical ray (215.5mm). The
beta point therefore, as expected, lies to the left of the ray associated with the critical length.
In the case of rupture, however, the gouge length of the beta point coincides with the critical
length (215.3mm compared with 215.5mm, respectively). The reasons for this are that in
computing the maximum probability density associated with failure, the shortest ray is sought in
the standard normal space (see the Review of Theory and Practice, Subsection 5.8.2) by the
reliability calculations. It is apparent that the shorter rays tend to lie in an orientation more
parallel to the K
r
axis (this is evidenced by the fact that the leak probability of failure exceeds
the rupture probability of failure). In the rupture situation the shortest ray that lies just within
the domain where L
r
1, is the critical ray itself.
Turning to the pressure of 9 MPa (Figure 2.11) it is seen that the converse of all of the above is
true:
The higher probabilities of failure correspond to rays that are disposed closer to the S_r
axis (rupture failure probability exceeds leak failure probability)
The beta point for leak coincides with the critical length (149.4mm)
In the case of rupture, the gouge length of the beta point exceeds the critical value
(300.0mm versus 149.4mm).
In conclusion to this set of results, the lower pressure is associated with failure with deeper and
shorter gouges, whereas the higher pressure is associated with shallower and longer gouges.
Constraint gradients are also provided in Table 2.6; constraint 1 is that the gouge depth is >
critical, whereas constraints 3 and 4 are that the gouge length is greater and less than critical,
respectively (thus 1 and 4 relate to leak, and 1 and 3 relate to rupture). Constraint gradients are
the direction cosines of the variables (statistically defined) of the vector from the origin of U-
space to the beta-point. These are a measure (in each constraint) of the sensitivity or
participation of each of the variables to the beta-point. Thus, if a constraint gradient is zero then
the vector is at right angles to the axis corresponding to the variable concerned, and that variable
does not participate. On the other hand, if the gradient is unity, then the vector and axis
coincide, and the variable concerned fully participates.
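This relationship can be shown in a few lines; the β-point below is hypothetical, chosen only to illustrate the normalisation of the direction cosines.

```python
import numpy as np

# Sketch of constraint gradients as direction cosines: the unit vector
# from the origin of U-space to an (illustrative) beta-point gives, per
# component, the participation of each standard normal variable.
u_star = np.array([0.5, -2.1, 0.0, 1.2])      # hypothetical beta-point
grad = u_star / np.linalg.norm(u_star)        # direction cosines

print(grad)                   # a zero entry marks a non-participating variable
print(np.sum(grad ** 2))      # normalised: the squares sum to 1.0
```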
With reference to Table 2.2, all seven of the constraint gradient variables appear in the formulae
for either K_r or S_r, and by virtue of this in constraint 1. In the case of constraints 3 and 4 (which
only involve L and the formula for L_r), only w, σ_y and N appear.
Returning to Table 2.6, the lack of participation of dent depth is re-emphasised by the
constraint gradients. It is seen that participations stem mainly from:
Gouge depth a
Wall thickness w
N, as related to gouge length L
Charpy energy C_v.
Little should be read into the relative values of the constraint gradients between the 1 and 4, or 1
and 3, constraints, as their relative importances are affected by the sensitivities of the constraints
as a whole, discussed below. The fact that in constraints 4 and 3 N (and hence L) is the
dominating variable does not mean that it dominates to that extent with respect to the beta-point
as a whole. It is interesting to note in constraint 1 that in moving from a pressure of 7 MPa to
9 MPa, the constraint gradient for a reduces in value, whereas that for N (and hence L) increases
in value. Moreover, the similarity in values between rupture at 7 MPa and leak at 9 MPa should
be noted. This, and the changes in a and L tend to complement the description of the failure
surface plots in Figures 2.10 and 2.11, given above.
(b) Table 2.7
The second set of results from the baseline analyses to be discussed is given in Table 2.7. This
gives:
Constraint sensitivities
Sensitivities and elasticities of means and standard deviations of basic variables,
for both pressures and for the failure domains of leak and rupture.
The constraint sensitivities (often referred to as α values) are measures of the importance of
constraints. They are normalised so as to have a sum of unity. As can be seen, constraint 1 is
predominant across pressures, and leak and rupture failure modes. Also, the similarity in the
disposition of values between constraints 1 and 3, or 1 and 4, between leak at 7 MPa and rupture
at 9 MPa, as well as between rupture at 7 MPa and leak at 9 MPa, should be noted.
The various sensitivities and elasticities for the variables are given in Table 2.7. Values are
given for both the means and standard deviations of the basic variables. Essentially they
measure the rate of change of the reliability index (and therefore also of the failure probability)
with respect to the mean or standard deviation of the variable concerned. They represent,
therefore, different measures of the sensitivity of the probability of failure to the statistical
characteristics (mean and standard deviation) of the basic variables. Elasticities for the standard
deviation of a variable are almost always negative because an increase in standard deviation
usually decreases the reliability index (increases the probability of failure).
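Such sensitivities can be approximated by finite differences, as in the sketch below, which reuses the illustrative FORM example from the sketch in Subsection 2.4.1 (all values and variable names are illustrative, not the case study's).

```python
import numpy as np
from scipy import optimize

# Finite-difference sketch of a mean-value sensitivity and elasticity:
# perturb the mean of the resistance, recompute beta, and difference.
def beta_form(mu_r):
    g = lambda u: (mu_r + 1.0 * u[0]) - (6.0 + 1.5 * u[1])
    res = optimize.minimize(lambda u: float(np.dot(u, u)),
                            x0=np.array([-1.0, 1.0]),
                            constraints=[{"type": "eq", "fun": g}])
    return float(np.sqrt(res.fun))

d = 1.0e-2
sens = (beta_form(10.0 + d) - beta_form(10.0 - d)) / (2.0 * d)  # d(beta)/d(mean)
elas = sens * 10.0 / beta_form(10.0)     # elasticity: relative change in beta
print(f"sensitivity = {sens:.3f}, elasticity = {elas:.3f}")
```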
To aid appreciation of this multitude of sensitivity norms, they are plotted on a variable-by-
variable basis for the two pressures, in Figure 2.12. Based on the heights (whether positive or
negative) and the "density" of the bars, the measures of relative importance of the statistical
parameters of the various variables given in Table 2.8 are derived. The measures are largely the
same for both pressures.
With the exception of D_D, which can quantitatively be judged to have no importance, the
importance measures have been qualitatively assigned as "high", "medium" or "low". Those
designated medium or high are judged to be candidates for a "robustness" study. In the case of
N, this must be interpreted as changes to the statistical parameters defining the offset logistic
distribution for gouge length L. Clearly, the high importances for the mean of wall thickness
(w) and the standard deviation of gouge depth (a) mean that they are potentially a priority.
(c) Table 2.9
The third set of results from baseline analyses to be discussed is given in Table 2.9. This gives:
Sensitivities
Elasticities
for the deterministic variables, for both pressures and for the failure domains of leak and rupture.
The variable φ is set to one half and is used in the definition of flow stress (see Subsection 2.2.2,
above). For a better appreciation of their significance, they are plotted in Figure 2.13. The plots
in this figure are separated into sensitivity and elasticity results.
Adopting the same policy as for the basic variables given in Subsection 2.5.1(b), above, the
deterministic variables have been assigned the importances summarised in Table 2.10. It is seen
that the variables α_D and β_D (which are related to the dent depth) quantitatively have zero effect,
and confirm the thread emerging from these baseline analyses that dent depth makes no
contribution.
As before, those classified as medium or high importance are judged to be candidates for a
robustness study. By far and away (on the evidence of Figure 2.13) the variables λ_L, x_L and σ_L
are the most significant. They define the offset logistic distribution used for gouge length; the
basic variable N (also associated with L) is judged to be of medium importance in Subsection
2.5.1(b), above. Taken together, these emphasise the overall importance of the gouge length
defect to the computation of the failure probabilities.
2.5.2 Pressure variation 6.5 to 11.0 MPa
The results from the SYSREL FORM analyses of pressure variation are shown in Figure 2.14.
This shows plots of the probability of failure (to log scale), corresponding with leak and rupture.
Three very important features are evident in these plots:
The switchover of the failure probabilities: at low pressures the higher probability is
associated with leak and the lower with rupture; the converse is true at high pressures.
The steps in the graphs where the leak and rupture probabilities jump down and up,
respectively.
The fact that the upper probability curves (ie the probability of a through-thickness
defect) can be viewed as a single curve.
The switchover has been discussed in the previous subsection and relates to the closeness of the
constraint surfaces (and their intersections) to the origin in standard variable space.
Jumps may occur in the probability curves obtained from parameter studies of this sort, and are
caused by the activation or de-activation of constraints [2]. To take, for example, the case of a
leak, at low pressures the associated failure probability corresponds only with criterion 1 (gouge
depth critical, see Figure 2.10). As pressure is increased to the "step" value, criterion 4
(gouge length critical) is activated. At pressures in excess of this, both criteria 1 and 4 remain
activated (see Figure 2.11) and the beta point continues to be associated with the critical gouge
length. The converse of this is true in the case of rupture, where at low pressures both criteria 1
and 3 (gouge length critical) are activated, and the beta-point is associated with the critical
length (see Figure 2.10). As pressure increases, criterion 3 becomes de-activated (see Figure
2.11).
The results from the SYSREL SORM analyses of pressure variation are shown in Figure 2.15.
This shows plots of the probability of failure (to log scale) corresponding with leak and rupture.
The first feature to note is the spikes in the probability versus pressure curves. Values in this
spike have been obtained by taking very closely spaced values of pressure within the range 8.2
to 8.4 MPa. The general upward trends in the curves are judged to be sound, but the precise
values at the peaks are found to be very sensitive to the value of pressure concerned and the
SYSREL algorithm control parameters. Nevertheless, values of failure probabilities at or below
around 0.1 are considered to be robust second-order values. The tops of the peaks have not
been determined accurately or stably.
In Figure 2.16 both the FORM and SORM analyses results are plotted together for comparison
purposes. The following points can be noted from this comparison.
There is little difference between the lower failure probabilities for the FORM and SORM
analyses; that is to say, rupture at the lower pressures and leak at the higher pressures.
It will be noticed on Figures 2.15 and 2.16 that around the step value of between 8.2 and 8.4
MPa there are a number of apparently spurious failure probabilities. In Figure 2.15, they appear
between the upper ('spiked') and lower probability curves. In Figure 2.16, it is seen that, despite
being calculated via SORM, they lie on the curve corresponding with FORM. The reason for
this is that when SYSREL encounters an inadmissible value of the curvature correction factor
that converts a FORM result to a SORM, it reverts to the FORM value in the SORM analysis.
The spike in the probability is evidently causing this correction factor to become inadmissible.
For the higher probabilities (ie leak and rupture at the lower and higher pressures, respectively),
the differences between results from the FORM and SORM analyses are significant.
Firstly, there are the spikes themselves, which are not picked up by the FORM analyses. These
appear to occur at the same value of pressure (between about 8.2 and 8.4 MPa) as the steps in
the curves where switchover between failure modes occurs. To illustrate this "coming together"
of the leak and rupture curves at their respective spikes, the values of the basic variables in
standard space of various beta-points are given in Table 2.11. Two of these sets of values
correspond to the beta-points in Table 2.6 (leak at 7 MPa and rupture at 9 MPa); values from
two further internal pressures are also tabulated: leak and rupture at 8.27 MPa and 8.35 MPa,
respectively (values just outwith the singular value of pressure in Figure 2.15). The lengths of
the vectors joining the origin to each beta-point are also shown in Table 2.11. Strictly, the
lengths of such vectors for different pressures should not be compared, but for the pressures of
8.27 MPa (leak) and 8.25 MPa (rupture) which are similar in value, the beta-point lengths are
very close in magnitude, indicating a similar failure probability level.
Secondly, with further reference to Figure 2.16, it is seen that, outwith the region of pressure
containing the spikes other numerical differences between the failure probabilities computed
using FORM and SORM occur. At low pressures FORM underestimates the failure probability
corresponding to leak compared with SORM. At the higher pressures, for rupture the converse
is true.
Given these differences between the FORM and SORM results, it is judged necessary to
reconsider the use of some of the default analysis options used in SYSREL. The results from
two further sets of pressure variation analyses are given in Figures 2.17 and 2.18. These
compare, for FORM and SORM analyses respectively, the results obtained from the analyses
reported above (referred to in the graphs as "inactive constraints off"), with analyses obtained
from SYSREL with inactive constraints switched on.
When a number of constraints are specified in a reliability analysis (as in this case, where two
are used in each of leak and rupture) and a calculated beta-point only involves one of the
constraints, SYSREL calculates the failure probability based on the active constraint only and
ignores the contribution the inactive constraint makes to the failure probability. With reference
to Figures 2.10 and 2.11 (along with Table 2.7) it will be remembered that at low pressures (7
MPa) in the leak failure probability, the critical length constraint (designated as number 4) is
inactive. At high pressure (9 MPa) in the rupture failure probability, the critical length
constraint (designated as number 3) is inactive.
Turning to Figure 2.17, where FORM results are compared, it is seen that, for the sets of results
that involve an intersection (rupture at low pressure and leak at high pressure), the probabilities
from analyses with inactive constraints turned off or on are identical. For the sets of results that
involve an inactive constraint (the ones described above) it is seen that significant differences
between calculated probabilities can occur. The main features are as follows:
the "steps" in the curve are much less pronounced when the inactive constraint is
switched on
as a consequence of this, the calculated failure probabilities are less in and around the
step region
at pressures remote from the step region, the trend is for the failure probabilities
calculated with the inactive constraint off and on to equalise.
Considering Figure 2.18, where SORM results are compared, it is seen that again for rupture at
low pressure and leak at high pressure, the probabilities calculated from analyses with inactive
constraints turned off or on are identical.
For leak at low pressure and rupture at high pressure, it is seen that the failure probabilities from
the analysis with inactive constraints turned off or on display identical features: the acute spike,
and a more significant step than in the FORM analyses. The calculated probabilities from the
analyses involving the inactive constraint switched on are less than those when it is switched
off, but the differences are much less marked than in the FORM analyses.
In many other cases of reliability analyses, the differences between failure probabilities
determined from FORM and SORM are small. Such differences that do occur are due to the
curvatures of the failure surfaces local to the beta-points. Significant differences may occur if
the radii of curvature of the surfaces at the beta-point are small indicating sharply curved
surfaces as opposed to shallow ones. Similarly, the difference in calculated failure probabilities
from switching inactive constraints off or on would normally be expected to be within the
general uncertainties associated with the problem.
Given that, in performing this pressure variation analysis, the main features of the probability of
failure versus internal pressure curves are preserved whether inactive constraints are switched
off or on, the remaining analyses described in Subsections 2.5.3 to 2.5.5 have been carried out
with the inactive constraint switched off.
2.5.3 Changes in Characteristics of Probability Density Function for Gouge
Length
The results from the SYSREL analyses of pressure variation are shown in Figure 2.19 and
Figure 2.20, with some numerical results given in Table 2.12.
Figure 2.19 shows plots of the failure probability (to log scale) corresponding to leak and
rupture for the baseline offset logistic distribution for gouge length (SORM), along with the
results from FORM and SORM calculations using the Weibull #1 distribution (see Table 2.3).
Regarding the FORM calculations, there appears to be very little difference between the
probabilities corresponding with the offset logistic and Weibull distributions (compare Figure
2.19 with Figures 2.14 and 2.16). This is confirmed numerically for pressures of 7 and 9 MPa
in Table 2.12 (values in braces in the table).
Turning to the SORM results in Figure 2.19, it is seen that in comparing the plots for the offset
logistic and Weibull #1 distributions:
The jump in the curves for leak and rupture probabilities occurs within the same
pressure range (8.2 to 8.4 MPa).
At low and high pressures outwith the jump in the higher probabilities (and across the
whole pressure range for the lower probabilities), the probabilities of leak and rupture
are very similar in value (this is confirmed by the numerical values given in Table
2.12).
The peak values in the spike have been attenuated in changing from the offset logistic
to the Weibull distribution; moreover, the curve of higher probabilities appears to be
smoother.
Figure 2.20 shows results from the original offset logistic distribution, as well as Weibull #1 and
Weibull #2 distributions (see Table 2.3). All are SORM calculations. Here it is seen that the
principal effects of the introduction of Weibull #2 are as follows:
The jump shifts to a lower pressure (between 8.0 and 8.2 MPa).
The spike is attenuated in appearance and values relative to the offset logistic distribution,
but is more "peaky" than the one corresponding to the Weibull #1 distribution.
At higher pressures outwith the jump in the upper probabilities, and across the whole
pressure range for the lower probabilities, the probabilities of leak or rupture are very
similar in value for all three distributions (this is confirmed by the numerical values
given in Table 2.12).
2.5.4 Changes in Characteristics of Probability Density Function for Gouge
Depth
The results corresponding to this case are summarised graphically in Figure 2.21, with some
numerical values given in Table 2.12. All are SORM results. With reference to Table 2.4,
Weibull #3 represents a decrease and Weibull #4 an increase in both the mean and standard
deviation with respect to the values for the baseline distribution.
It is seen that the jumps in the curves are retained with the two replacement distributions, as are
the discontinuous spikes. In the cases of Weibull #3 and Weibull #4, no attempt has been made
in the analyses to obtain values of probabilities at a finer subdivision of pressure within the
jump / spike regions, as is the case for the baseline distribution (see Subsection 2.5.2, above).
Hence, comparison between peak values in the spikes of each of the curves is not appropriate.
The locations of the jumps and spikes (with respect to pressure) depend on the characteristics of
the gouge length distribution. For Weibull #3, they occur at lower values of pressure than for
the baseline distribution, whereas for Weibull #4 they occur at higher values.
Further noticeable effects relate to the probability values. Across the pressure range considered,
the probability values for Weibull #3 tend to be lower than those for the baseline distribution,
whereas the converse is true for Weibull #4. The numerical values for pressures of 7 and 9 MPa
given in Table 2.12 confirm this finding, although the differences are not significantly large.
With reference to Table 2.4, however, the changes in standard deviation in Weibull #4 and #3
amount to only ±4% of that of the baseline distribution.
2.5.5 Effects of Model Uncertainty
The results corresponding to this case are summarised graphically in Figure 2.22, with some
numerical values given in Table 2.12. With reference to Table 2.5, model uncertainties #1 and
#2 have similar coefficients of variation (standard deviation divided by mean), but means which
differ by about 10%. The baseline model uncertainty has, of course, a mean of
unity and a standard deviation of zero. All the results in Figure 2.22 and Table 2.12 correspond
with SORM.
It is seen that the jumps in the curves are retained as model uncertainty is changed, as are the
discontinuous spikes in the SORM results. In the cases of model uncertainties #1 and #2, no
attempt has been made in the analyses to obtain values of probabilities at a finer subdivision of
pressure within the jump / spike regions as is done for the baseline case. Hence, care must be
taken in comparing peak values of failure probabilities between curves in the spike region.
The locations (with respect to pressure) of the jumps / spikes depend on the characteristics of
the distribution used for model uncertainty. For model uncertainty #1, the shift is significant in
value: from between 8.2 and 8.4 MPa to between 7.4 and 7.6 MPa. The change is less
pronounced for model uncertainty #2, suggesting that the change in mean may be the principal
mechanism for this shift.
It is judged that these shifts are also primarily responsible for any changes in values of failure
probabilities corresponding to leak and rupture. It is seen that the introduction of model
uncertainty #2 has very little effect on the lower probabilities over the whole range of pressure
considered. Outwith the jump / spike region it is seen that increases in failure probability for
the higher probabilities occur for pressures in excess of about 9 MPa.
Similar types of changes take place, but are more pronounced, for model uncertainty #1. This is
because of the larger shift in the jump / spike region.
Variable | Description | Units | Type | Value / Parameters
R | Pipe outside radius | mm | Deterministic | 457.2
w | Pipe wall thickness | mm | Normal distribution | μ = 12.8, σ = 0.3
φ | Flow stress parameter | - | Deterministic | 0.5
σ_y | Pipe material yield strength | MPa | Lognormal distribution | μ = 445.9, σ = 12.8
σ_u | Pipe material ultimate strength | MPa | Normal distribution | μ = 593.4, σ = 14.5
E | Pipe material Young's modulus | MPa | Deterministic | 207 x 10^3
C_v | Charpy energy | mJ | Lognormal distribution | μ = 55200, σ = 11100
C_v0 | Reference Charpy energy | mJ | Deterministic | 112300
A | Charpy test specimen cross-section area | mm^2 | Deterministic | 53.55
b | Charpy energy correlation parameter | - | Deterministic | 0.4950
a | Gouge depth | mm | Weibull distribution | α = 0.73, β = 0.98
L | Gouge length | mm | Offset logistic distribution | λ_L, x_L, σ_L - see below
D | Dent depth | mm | Bespoke distribution | α_D, β_D, D_D - see below
λ_L | Parameter used in statistical distribution for gouge length L (see Subsection 2.3.2) | - | Deterministic | 0.043
x_L | (as λ_L) | mm | Deterministic | 24.84
σ_L | (as λ_L) | mm | Deterministic | 30.13
N | (as λ_L) | - | Normal distribution | μ = 0, σ = 1.0
α_D | Parameter used in statistical distribution for dent depth D (see Subsection 2.3.2) | - | Deterministic | 0.9 (70 barg); 0.9 (85 barg)
β_D | (as α_D) | mm | Deterministic | 4.91 (70 barg); 4.49 (85 barg)
D_D | (as α_D) | - | Normal distribution | μ = 0, σ = 1.0
P | Operating pressure | MPa | Deterministic | 7.0 (70 barg); 9.0 (90 barg)

Table 2.1 Parameters used as Inputs to Failure Functions (baseline analyses)
124

[Table 2.2 maps the variables of Table 2.1 onto the parameter axes of the failure
surface, indicating with check marks which variables enter the fracture ratio Kr,
the stress ratio Sr and the load ratio Lr. The columns are: P* (MPa), R* (mm),
w (mm), E* (MPa), α* (-), σy (MPa), σu (MPa), Cv (mJ), Cv0* (mJ), A* (mm2),
b* (-), a (mm), (N), L (mm), (DD) and D (mm), where * denotes a deterministic
variable. The quoted values are: P varies, 457.2, 12.8, 210000, 0.5, 445.9,
593.4, 55200, 112300, 53.55, 0.495, 1.194, 89.1 and 0.0. The original
check-mark matrix is not recoverable from the source text.]

Table 2.2 Variables in Parameter Axes of Failure Surface
125
                          Baseline           Fitted Weibull Distributions
                          Offset Logistic    Weibull #1         Weibull #2
                          Distribution       (0 - 400 mm)       (10 - 400 mm)
α* (-)                    -                  0.813              0.692
β* (mm)                   -                  140.75             127.73
Mean (mm)                 241.78             157.71 (-35%)      163.51 (-32%)
Standard Deviation (mm)   1604.0             195.43 (-88%)      242.34 (-85%)

* parameters of the fitted Weibull distribution, W(L) = 1 - exp[-(L/β)^α]
Percentages are changes from the baseline distribution.

Table 2.3 Statistical Parameters Associated with Offset Logistic Distribution
for Gouge Length and Fitted Weibull Distributions
126

                          Weibull Distributions
                          Baseline           Weibull #3            Weibull #4
                          (see Table 2.1)    (fitted to first
                                             11.75 mm of
                                             baseline data)
α (-)                     0.73               0.687                 0.773
β (mm)                    0.98               0.826                 1.134
Mean (mm)                 1.191              1.065 (-11%)*         1.318 (+11%)*
Standard Deviation (mm)   1.655              1.592 (-4%)*          1.724 (+4%)*

* percentage change from baseline distribution

Table 2.4 Statistical Parameters Associated with Distributions for Gouge Depth



                      Normal Distributions
                      Baseline    Model Uncertainty #1    Model Uncertainty #2
Mean                  1.0         0.92                    1.0
Standard Deviation    0.0         0.092                   0.1

Table 2.5 Statistical Parameters Associated with Normal Distribution
for Model Uncertainty







127

Table 2.6 Baseline Analyses at 7 and 9 MPa Internal Pressures. Leak and Rupture
Failure Probabilities, Beta-Point Values and Constraint Gradients

                    7.0 MPa                 9.0 MPa
                    LEAK       RUPTURE      LEAK       RUPTURE
Pf                  0.008414   0.002565     0.003956   0.008743

Beta-point values
w                   12.77      12.77        12.77      12.76
a                   9.050      8.598        7.89       6.671
σy                  445.5      445.7        445.6      445.2
σu                  593.2      593.3        593.2      592.8
Cv                  52880      52310        52640      53980
DD                  0          0            0          0
(D)                 0          0            0          0
N                   0.2054     0.7133       0.4207     0.9703
(L)                 114.7      215.3        149.4      300.0

Constraint gradients
Constraint          1      4        1      3        1      4        1      3
w                   0.039  -0.035   0.043  0.034    0.045  -0.041   0.060  0.038
a                   -0.994 0        -0.979 0        -0.972 0        -0.907 0
σy                  0.002  -0.031   0.004  0.030    0.006  -0.038   0.016  0.036
σu                  0.002  0        0.005  0        0.007  0        0.018  0
Cv                  0.068  0        0.047  0        0.058  0        0.005  0
DD                  0      0        0      0        0      0        0      0
N                   -0.082 0.999    -0.193 -0.999   -0.224 0.998    -0.417 -0.999

Constraints:  1  Gouge depth > critical
              3  Gouge length > critical
              4  Gouge length < critical

128

Table 2.7 Baseline Analyses at 7 and 9 MPa Internal Pressures. Sensitivities of
Constraints and Sensitivities and Elasticities of Means and Standard Deviations

Sensitivities and elasticities of MEAN VALUES
(columns: 7.0 MPa LEAK, 7.0 MPa RUPTURE, 9.0 MPa LEAK, 9.0 MPa RUPTURE;
each pair is Sensitivity, Elasticity)

w    0.1289       0.6904      0.1497       0.6847      0.1458       0.7025      0.2          1.077
a    0.2208       0.1103      0.1622       0.06918     0.08282      0.03723     -0.06311     -0.03171
σy   0.0001352    0.02521     0.0005555    0.08851     0.0003105    0.05214     0.001248     0.2341
σu   0.0001351    0.03354     0.0003408    0.07225     0.0004545    0.1015      0.001247     0.3115
Cv   0.000006615  0.1528      0.000004415  0.08708     0.000005681  0.1181      5.103E-07    0.01185
DD   0            0           0            0           0            0           0            0
N    -0.08176     0           -0.2813      0           -0.178       0           -0.4165      0

Sensitivities and elasticities of STANDARD DEVIATIONS

w    -0.01252     -0.001572   -0.01705     -0.001828   -0.01506     -0.001701   -0.02795     -0.003529
a    -0.9177      -0.6391     -0.8373      -0.498      -0.7803      -0.4891     -0.5898      -0.4132
σy   -0.000004484 -0.00002401 -0.000026    -0.0001189  -0.00001186  -0.00005718 -0.00008217  -0.0004426
σu   -6.647E-07   -0.000004032 -0.000004207 -0.00002212 -0.000007076 -0.00003863 0.00005255  -0.0003207
Cv   -0.000002221 -0.01031    -0.000001277 -0.005066   -0.000001753 -0.007329   1.013E-07    -0.000473
DD   0            0           0            0           0            0           0            0
N    -0.01679     -0.007024   -0.2006      -0.07169    -0.07489     -0.0282     -0.4041      -0.1701

[A further row of values at the foot of this table is not recoverable from the
source text.]

129



                     Geometrical   Material                      Defect
                     w             σy      σu      Cv            a       N (L)   DD (D)
Mean                 high          medium  medium  medium        medium  medium  none
Standard Deviation   low           low     low     low           high    medium  none

Table 2.8 Relative Importances of Means and Standard Deviations of Basic Variables

130

Table 2.9 Baseline Analyses at 7 and 9 MPa Internal Pressures. Sensitivities and
Elasticities of Deterministic Variables

(columns: 7.0 MPa LEAK, 7.0 MPa RUPTURE, 9.0 MPa LEAK, 9.0 MPa RUPTURE;
each pair is Sensitivity, Elasticity; α is the flow stress parameter of Table 2.1)

α    0.2806        0.05869   0.7076        0.1264    0.9439        0.1777    2.589        0.5447
E    8.149E-07     0.07057   5.492E-07     0.04062   7.038E-07     0.05486   6.468E-08    0.005634
Cv0  -0.000001532  -0.07199  -0.000001033  -0.04145  -0.000001324  -0.05598  -1.217E-07   -0.005735
A    -0.00315      -0.07057  -0.002123     -0.04063  -0.002721     -0.05487  -0.0002502   -0.005639
b    0.5259        0.1089    0.3494        0.0618    0.4504        0.08395   0.04001      0.008335
P    -0.06826      -0.1999   -0.09671      -0.2419   -0.07771      -0.2633   -0.1468      -0.5561
R    -0.000972     -0.1859   -0.001238     -0.2023   -0.001372     -0.2362   -0.002545    -0.4896
αL   7.349         0.1322    27.69         0.4255    16.78         0.2717    41.89        0.7581
βL   0.06213       0.6456    0.2072        1.8939    0.1345        1.258     0.2959       3.093
γL   -0.05121      -0.6454   -0.1651       -1.778    -0.1094       -1.241    -0.2312      -2.931
αD   0             0         0             0         0             0         0            0
βD   0             0         0             0         0             0         0            0
131
Variable                  Importance
Load          P           medium
Geometrical   R           medium
Material      E           low
              α           medium
              Cv0         low
              A           low
              b           low
Defect        αL          high
              βL          high
              γL          high
              αD          none
              βD          none

Table 2.10 Relative Importances of Deterministic Variables



          Leak                     Rupture
          7.0 MPa    8.27 MPa      8.35 MPa   9.0 MPa
w         -0.097     -0.104        -0.119     -0.140
a          2.495      2.372         2.287      2.113
σy        -0.004     -0.011        -0.021     -0.037
σu        -0.005     -0.012        -0.024     -0.042
Cv        -0.170     -0.142        -0.078     -0.013
DD         0          0             0          0
N          0.205      0.455         0.762      0.970
Length     2.511      2.422         2.415      2.330

Table 2.11 Variable Values in U Space for Beta-Point Values Corresponding
to Different Internal Pressures
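The U-space values in Table 2.11 are obtained by mapping each basic variable through its own
distribution onto an equivalent standard normal variate, u = Φ^-1(F_X(x)). A minimal sketch of
this mapping, using the wall thickness and gouge depth distributions of Table 2.1 (SciPy is
assumed to be available):

    from scipy.stats import norm, weibull_min

    # Map a physical value x to standard normal (U) space: u = Phi^-1(F_X(x)).
    def to_u(x, dist):
        return norm.ppf(dist.cdf(x))

    # Wall thickness w ~ Normal(mean 12.8, sd 0.3), from Table 2.1:
    w = norm(loc=12.8, scale=0.3)
    print(to_u(12.77, w))   # about -0.10; compare -0.097 for w in Table 2.11

    # Gouge depth a ~ Weibull(shape 0.73, scale 0.98), from Table 2.1:
    a = weibull_min(c=0.73, scale=0.98)
    print(to_u(9.05, a))    # about 2.5; compare 2.495 for a in Table 2.11

The large positive u for gouge depth shows the beta point sitting far out in the upper tail of
the gouge depth distribution, consistent with the high importance attached to that variable.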
132

                   Gouge Length            Gouge Depth             Model Uncertainty
                   (Weibull Distribution)  (Weibull Distribution)  (Normal Distribution)
Pressure  Failure
(MPa)     Mode     Baseline    #1          #2        #3        #4        #1        #2
7.0       Leak     0.00841     0.00862     0.00966   0.00735   0.00944   0.01230   0.00873
                   (0.00602)   (0.00607)
          Rupture  0.00257     0.00252     0.00260   0.00225   0.00286   0.00343   0.00269
                   (0.00251)   (0.00247)
9.0       Leak     0.00396     0.00390     0.00370   0.00353   0.00432   0.00412   0.00409
                   (0.00388)   (0.00383)
          Rupture  0.00874     0.01050     0.00825   0.00780   0.00959   0.01130   0.00917
                   (0.00991)   (0.00976)

All values SORM except ( ) which are FORM

Table 2.12 Probabilities of Failure at 7 and 9 MPa Internal Pressures. Effects of
Changes to Gouge Length, Gouge Depth and Model Uncertainty Distributions
133


Figure 2.1 Illustrating the Transformation for a Non-Standard Probability
Density Function: Gouge Length
[Probability density (0 to 0.008) against gouge length (0 to 500 mm): randomly
generated histogram, continuous function and offset logistic PDF.]
134


Figure 2.2 Illustrating the Transformation for a Non-Standard Probability
Density Function: Dent Depth
[Probability density (0 to 0.08) against dent depth (0 to 100 mm): randomly
generated histogram, continuous distribution and bespoke dent depth PDF.]
135

Figure 2.3 Plots of Failure Function G for 7 MPa Pressure and Zero Dent Depth
[Failure function plotted against gouge depth a (mm, 0 to 10) for gouge lengths
L = 0, 120, 140, 160, 180, 200 and 220 mm; pressure = 7.0 MPa, Lc = 216.6 mm.]
136


Figure 2.4 Failure Assessment Diagrams, Dent Depth = 0
[Three FAD panels of fracture ratio Kr against stress ratio Sr, each with
curves for a range of gouge lengths L: pressure = 7.0 MPa (Lc = 216.6 mm,
max a is 10 mm), pressure = 8.0 MPa (Lc = 180.2 mm, max a is 9.5 mm) and
pressure = 11.0 MPa (Lc = 101.4 mm, max a is 8.5 mm).]
137

Figure 2.5 Failure Assessment Diagrams, Dent Depth = 5mm
[Three FAD panels of fracture ratio Kr against stress ratio Sr, each with
curves for a range of gouge lengths L: pressure = 7.0 MPa (Lc = 216.6 mm,
max a is 8 mm), pressure = 8.0 MPa (Lc = 180.2 mm, max a is 7.5 mm) and
pressure = 11.0 MPa (Lc = 101.4 mm, max a is 6.5 mm).]
138

Figure 2.6 Failure Assessment Diagrams, Dent Depth = 10mm
[Three FAD panels of fracture ratio Kr against stress ratio Sr, each with
curves for a range of gouge lengths L: pressure = 7.0 MPa (Lc = 216.6 mm,
max a is 7 mm), pressure = 8.0 MPa (Lc = 180.2 mm, max a is 6 mm) and
pressure = 11.0 MPa (Lc = 101.4 mm, max a is 5 mm).]
139


Figure 2.7 Generic Failure Diagram
[Sketch in (Sr, Kr) space: failure assessment contours from Lr = 0 to Lr = 1,
with arrows indicating increasing Lr and increasing gouge depth a, and regions
marked LEAK and RUPTURE.]
140


Figure 2.8 Weibull Plot of Offset Logistic Distribution Data for Gouge Length
[Weibull plot (ordinate -2 to 3) against Ln(gouge length) from 4 to 12.]
141
Figure 2.9 Weibull Fits to Parts of the Offset Logistic Distribution Data for Gouge
Length. Upper: Weibull plots; lower: cumulative distributions
[Randomly generated offset logistic data for gouge length (0 to 1000 mm) with
a Weibull fit to the first 400 mm and a Weibull fit between 10 and 400 mm.]
142
Figure 2.10 Failure Surface and Beta-Point Plots for Pressure of 7 MPa
[Two FAD panels of fracture ratio Kr against stress ratio Sr (max a = 9.5 mm).
Leak: beta-point gouge length L = 114.7 mm, critical length L = 215.5 mm,
beta-point a = 9.05 mm. Rupture: beta-point length L = 215.3 mm (critical),
beta-point a = 8.598 mm.]
143
Figure 2.11 Failure Surface and Beta-Point Plots for Pressure of 9 MPa
[Two FAD panels of fracture ratio Kr against stress ratio Sr. Rupture:
beta-point gouge length L = 300.04 mm, critical length L = 148.9 mm,
beta-point a = 6.671 mm (max a = 8 mm). Leak: beta-point length L = 149.4 mm
(critical), beta-point a = 7.89 mm (max a = 9 mm).]
144

Figure 2.12 Baseline Analyses of 7 and 9 MPa Internal Pressures. Sensitivities and
Elasticities of Means and Standard Deviations of Basic Variables
[Bar charts for pressures of 7.0 and 9.0 MPa showing LEAK and RUPTURE
sensitivities and elasticities (-1 to 1.5) of the means and standard deviations
of the basic variables w, a, sy, su, Cv, Dd and N.]
145

Figure 2.13 Baseline Analyses of 7 and 9 MPa Internal Pressure. Sensitivities and
Elasticities of Deterministic Variables
[Bar charts of sensitivities (-5 to 45) and elasticities (-4 to 4) of the
deterministic variables (α, E, Cv0, A, b, P, R, αL, βL, γL, αD, βD) for the
7.0 and 9.0 MPa LEAK and RUPTURE cases.]

146

Figure 2.14 Variations of Leak and Rupture Failure Probabilities with Internal
Pressure (FORM analyses)
[Probability of failure (1.00E-03 to 1.00E+00, log scale) against pressure
(6 to 12 MPa); curves: Leak - FORM - Offset logistic and
Rupture - FORM - Offset logistic.]

147

Figure 2.15 Variations of Leak and Rupture Failure Probabilities with Internal
Pressure (SORM analyses)
[Probability of failure (log scale) against pressure (6 to 12 MPa); curves:
Leak - SORM - Offset logistic and Rupture - SORM - Offset logistic.]

148

Figure 2.16 Variations of Leak and Rupture Failure Probabilities with Internal
Pressure (FORM and SORM analyses)
[Probability of failure (log scale) against pressure (6 to 12 MPa); curves:
Leak and Rupture, each for SORM - Offset logistic and FORM - Offset logistic.]

149

Figure 2.17 Variations of Leak and Rupture Failure Probabilities with Internal
Pressure: Effects of Switching Inactive Constraints Off or On; FORM
[Probability of failure (log scale) against pressure (6 to 12 MPa); curves:
Leak and Rupture, FORM - Offset logistic, each with inactive constraints (IC)
OFF and ON.]

150

Figure 2.18 Variations of Leak and Rupture Failure Probabilities with Internal
Pressure: Effects of Switching Inactive Constraints Off or On; SORM
[Probability of failure (log scale) against pressure (6 to 12 MPa); curves:
Leak and Rupture, SORM - Offset logistic, each with inactive constraints (IC)
OFF and ON.]

151

Figure 2.19 Variations of Leak and Rupture Failure Probabilities with Internal
Pressure: Effects of Changes to Distribution for Gouge Length
[Probability of failure (log scale) against pressure (6 to 12 MPa); curves:
Leak and Rupture for SORM - Offset logistic, FORM - Weibull #1 and
SORM - Weibull #1.]

152

Figure 2.20 Variations of Leak and Rupture Failure Probabilities with Internal
Pressure: Effects of Changes to Distribution for Gouge Length
[Probability of failure (log scale) against pressure (6 to 12 MPa); curves:
Leak and Rupture for SORM - Offset logistic, SORM - Weibull #2 and
SORM - Weibull #1.]

153

Figure 2.21 Variations of Leak and Rupture Failure Probabilities with Internal
Pressure: Effect of Changes to Distribution for Gouge Depth
[Probability of failure (log scale) against pressure (6 to 12 MPa); curves:
Leak and Rupture for SORM - Weibull original, SORM - Weibull #3 and
SORM - Weibull #4.]

154

Figure 2.22 Variations of Leak and Rupture Failure Probabilities with Internal
Pressure: Effects of Changes to Model Uncertainty
[Probability of failure (log scale) against pressure (6 to 12 MPa); curves:
Leak and Rupture for SORM - Offset logistic, SORM - Model uncertainty #1 and
SORM - Model uncertainty #2.]
155
3. DISCUSSION AND CONCLUSIONS
3.1 INTRODUCTION
The purpose of this section of the annex is two-fold:
To cross-reference aspects of the guidelines for reliability analysis given in Section 9
of the main report to areas that have been addressed in the specific example in this case
study.
To draw out discussion points and conclusions specific to the problem considered.
These two aspects are covered in the next two subsections.
3.2 CORRESPONDENCE WITH GUIDELINES
3.2.1 Preamble
Section 9 of the main body of this report gives nine Level 1 mandatory guidelines. These are
the most important requirements that should be present in any reliability analysis, or reliability-
based risk analysis.
The focus of this case study has been on the reliability analysis of leak and rupture of pipes
under internal pressure and containing dents and / or gouges stemming from external
interference. The probabilities calculated are conditional on the external interference having
already occurred; thus events leading to the damage have not been considered. Furthermore, no
consideration has been given to the consequences of the failure and, hence, risk.
The purpose of this subsection is to correlate aspects of work done in the case study to the
guidelines. However, given the limits on the scope of the case study mentioned above, the
following two guidelines are omitted from this correlation:
Have all the consequences of failure been adequately considered?
Does the stated acceptance criterion represent a reasonable and responsible level of
safety, and, bearing in mind the confidence in the reliability analysis, is it adequately
satisfied?
Each of the remaining seven guidelines is discussed in relation to the case study in separate
subsections below.
3.2.2 Problem Definition
The failure events that have been considered are clearly defined as stemming from external
interference resulting in a gouge or a gouge with a dent. These then lead to a through-thickness
defect that, if of insufficient length, results in a leak or, if the length is large enough, a rupture.
It is clear that the events form part of a sequence initiated by external interference and resulting
in failure. The last event in the failure sequence is covered by the case study.
Two types of "failure" are considered: leak and rupture. A rupture, without question, would be
considered as an ultimate limit state. A leak is harder to classify as this may depend on
consequences and hence on risk; volume of lost inventory, or rate of loss, may be governing
156
factors. Many of the definitions of limit state given in Subsections 3.4.2.1 to 3.4.2.7 of the main
report tend to point to leak as being a serviceability limit state. The source document classifies
leak as an ultimate limit state.
Reference period does not enter into the reliability analysis considered here. The probability
that the external interference occurs should include reference period / pipeline length
information. The document quotes a probability for external interference leading to a dent /
gouge defect of 1.86 x 10^-3 per km-year; the probabilities of leak or rupture calculated in this
case study would be multiplied by this figure.
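If the damage occurrence and the conditional failure event are treated as independent (an
assumption made here for illustration, not a statement from the source document), the
combination is a simple product, as in the sketch below using the baseline 7 MPa leak value
from Table 2.12:

    # Unconditional leak frequency = (damage rate) x P(leak | damage occurred).
    damage_rate = 1.86e-3          # dent/gouge defects per km-year (source document)
    p_leak_given_damage = 0.00841  # baseline SORM value at 7.0 MPa (Table 2.12)

    leak_freq = damage_rate * p_leak_given_damage
    print(f"{leak_freq:.2e} leaks per km-year")   # about 1.6e-05 per km-year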
3.2.3 Problem Analysis
It is not suggested that the case study represents a complete solution to the totality of the
problem. The objective has been to consider in detail a small subset involving a single hazard:
that of external interference, along with the potential resulting defect. The aim was to compare
the relative risk as a result of the change in operating pressure.
The view taken in the source document (and justified by qualitative and semi-qualitative
deterministic arguments reported elsewhere) is that the two most significant hazards are
considered: damage due to external interference and damage due to external corrosion.
Moreover, the most significant failure modes for these events have been incorporated, including
time-varying effects associated with corrosion.
3.2.4 Failure Function Modelling
Two failure functions underpin the reliability analysis performed in the case study and are taken
from the source document. As set out in Subsection 2.2 of this annex, these correspond to
treatment of a defect in a:
Through-thickness sense
Lengthwise sense.
The expressions for treatment of a defect in a through-thickness sense are based on widely
accepted and sound fracture mechanics principles, that are also used elsewhere in this subject
area by other practitioners [3].
Similarly, a raft of sound technical background from the authors of the source report and other
researchers underpins the expression for treatment of defects in a lengthwise sense. Therefore
this may also be accepted as industry practice.
However, a note of caution perhaps needs to be sounded, given the discussion of this failure
function concerning the apparent absence of any account of longitudinal fracture effects. In
terms of adherence to the guidelines, this may raise a question mark against the adequacy of the
function for predicting failure, which would need to be resolved in a wider situation than this
case study.
All the terms in the failure functions are clearly defined in Subsections 2.2 and 2.3, above.
Tables 2.1 and 2.2 support this, the latter being useful insofar as it identifies which variables
occur in which failure functions, enabling commonalities to be readily identified.
The accuracy of the failure functions has not been addressed in this case study and has been
taken as-given for the purposes of the work performed here. This question could presumably be
157
dealt with by means of the reference material supporting the source report. The question of
accuracy is also related to model uncertainty, a topic discussed in the next subsection.
The validity of the failure functions used, and any physical limitations thereof are implicit
within the source report and are taken as given from the point of view of the case study
performed here. The reference material for the source report would presumably cover such
issues. An example of an issue might be that if the failure functions are empirically derived
using data from tests performed, say, thirty years ago, would the failure functions be applicable
to modern materials, fabrication and production techniques, and so forth?
A principal part of the case study has been to provide a complete understanding of the behaviour
and interaction of the failure functions. This is particularly with respect to the effects of
changes in values to the defect variables: dent depth, gouge depth and gouge length.
3.2.5 Basic Variable Modelling
Basic and deterministic variable modelling is justified in the source document. In the case
study, the reliability analyses carried out have allowed the relative importances of deterministic
and basic variables to be identified via Figures 2.12 and 2.13, along with Tables 2.7 to 2.10.
The results have justified a posteriori that some variables may be taken as deterministic, and
have identified that dent depth could be legitimately omitted from the analysis without changing
the probabilities calculated.
The types of distribution used for the basic variables and their bases have been justified and
adequately defined in the source document. Correlations between variables have not been
considered.
No modelling uncertainty is considered in the source document. It is presumed that the
reference material cited there could be used to develop and justify accurate model uncertainties.
For the purposes of illustration in the case study, a model uncertainty on the through-thickness
behaviour failure function has been postulated and used to test the robustness of the
probabilities calculated.
3.2.6 Reliability Analysis Methodology
In the case of the source document, direct integration is used to compute probabilities of failure.
For the computations in the case study, depending on the analysis performed, both FORM and
SORM have been used; Monte Carlo simulation techniques have also been used to investigate
some specific areas in more detail.
The calculations have highlighted, in the particular instances of the analyses in the case study,
that significant differences in calculated probabilities may occur between FORM and SORM.
Because SORM analysis accounts for the curvature of the failure surface, it usually gives a more
accurate answer than first-order linear methods. However, in this particular case SORM
analysis has been shown to be far less accurate than FORM.
Monte Carlo analyses have been undertaken using two widely accepted variance reduction
techniques to increase the efficiency of the method. Enough samples have been taken such that
the results tend to the true failure probability (as defined by the mathematics of the problem).
The results for the total probability of failure from leak or rupture are shown in Figure 3.1. The
figure shows that the FORM results correspond reasonably well with the results of the Monte
Carlo analyses, but the SORM results are erroneous in this case.
158
[Log-scale plot of probability of failure (0.001 to 1) against pressure (6 to
11 MPa), comparing FORM, SORM, importance sampling and adaptive sampling
results.]
Figure 3.1 Comparison of different reliability analysis methods
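For readers unfamiliar with the variance reduction techniques referred to above, the sketch
below illustrates beta-point-centred importance sampling in standard normal (U) space. The
failure function is a deliberately simple linear toy with a known answer; it is neither the
SYSREL implementation nor the case-study leak / rupture model.

    import numpy as np
    from scipy.stats import norm

    # Toy linear failure function in U space; failure where g(u) < 0.
    def g(u):
        return 3.0 - u[:, 0] - 0.5 * u[:, 1]

    rng = np.random.default_rng(0)
    n = 100_000

    # For this linear g the beta point is known analytically; centre the
    # sampling density on it so that roughly half the samples fail.
    beta = 3.0 / np.sqrt(1.25)
    u_star = beta * np.array([1.0, 0.5]) / np.sqrt(1.25)
    u = rng.standard_normal((n, 2)) + u_star

    # Importance-sampling weight = true density / sampling density
    # (the normalising constants cancel for two unit-variance normals).
    weights = (np.exp(-0.5 * (u ** 2).sum(1))
               / np.exp(-0.5 * ((u - u_star) ** 2).sum(1)))
    pf = np.mean((g(u) < 0.0) * weights)

    print(pf, norm.cdf(-beta))   # estimate versus the exact result for linear g

For a linear failure function FORM is exact, so the estimate can be checked directly; for the
curved surfaces discussed above, the weighted estimate converges to the true probability as the
sample size grows, which is what makes it a useful arbiter between FORM and SORM.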
A commercial reliability analysis program, SYSREL [2], has been used for the case study. It is
important to emphasise that this can perform component (single constraint) and / or system
(multiple constraints, unions, intersections, and so forth) reliability calculations. Rigorous
analysis of the problem requires system-type calculations because it has two constraints: related
to gouge depth and gouge length. The two constraints are needed to evaluate the probability of
a leak and the probability of a rupture.
However, the total probability of failure by either a leak or a rupture can be evaluated by
considering the one constraint based on gouge depth.
Independent checks of failure probabilities have been performed for this constraint using an in-
house, spreadsheet-based program; good agreement is obtained with the SYSREL FORM
calculations of failure probabilities associated with the governing failure mode.
The robustness of solution in the case study has been investigated via parameter and sensitivity
studies. These aspects are discussed in relation to the next guideline in the following
subsection.
3.2.7 Sensitivity Analyses
In the case study, values of the variables at the beta-points are presented in, for example, Table
2.7. These are for internal pressures of 7 and 9 MPa, and have been verified as physically
feasible by plotting them on the failure surface, and establishing their relationship with critical
gouge length (Figures 2.10 and 2.11). This also helps to check and understand any intersections
that occur as solutions to the problem.
159
Deterministic and basic variable sensitivities are presented for the case study (Figures 2.13 and
2.12, and Table 2.6, 2.9 and 2.7). The relative importance (ranking) of variables is much as
expected, with gouge depth and length being paramount.
Parametric (on pressure) and sensitivity studies (on distribution types and characterising
parameters of basic variables identified as of importance) have been performed in the case
study. A full discussion in relation to this is given in Subsection 3.3, below. There is sufficient
confidence in the modelling of most variables, with the exception of modelling uncertainty and
gouge length (see full discussion). These latter two variables may have a significant bearing on
robustness of the solutions.
3.2.8 Analysis Outcome Validity
The guideline in this particular instance states:
Are there still [having addressed all the preceding guidelines and obtained satisfactory
outcomes] doubts or misgivings about the validity of the outcome of the analysis?
and asks the assessor to:
Consider the question: is it likely that a competent engineer with knowledge of
reliability analysis would have achieved a different outcome?
Initially, the case study selected was considered to involve a relatively straightforward exercise,
and the investigators put themselves in the position of implementing a public-domain
methodology. Thus it is judged to have clearly defined failure functions, basic variable
distributions and so forth. In taking this approach, the case study is envisaged as a matter of
implementing the methodology in the selected reliability analysis software; running the
particular required analyses; and obtaining, processing and discussing the results.
In the event, the case study (as a result of a more than average amount of scrutiny and inquiry)
has thrown up a number of unresolved issues. These are discussed in more detail in Subsection
3.3, below. In short, the answer to the question posed in the guidelines, is that there still are
some doubts and misgivings about the validity of the outcome of the analyses. However, it is
important to point out that some of these doubts and misgivings would not have surfaced had,
for example, analyses been confined to two fixed values of internal pressure of 7 and 9 MPa.
3.3 PROBLEM SPECIFICS
3.3.1 FORM versus SORM
In a great many cases of reliability analyses the differences between failure probabilities
predicted using FORM and SORM will be small. Nevertheless, it is advisable in any problem
to confirm this or otherwise by performing both types of analyses. For practical cases
FORM analysis usually provides sufficiently accurate estimates of failure probability.
The present case study has highlighted this area. Both FORM and SORM analyses were carried
out for the two pressure values of 7 and 9 MPa. When the intriguing results for pressure
variation were obtained, it was judged to be of value to investigate further. Whilst this exercise
has not completely resolved the difficulties, it has helped to identify one of the possible causes
of the unusual results.
160
Regardless of whether just FORM, SORM, or both analysis types are carried out, the case study
has identified the possible inadequacies stemming from simply considering two isolated values
of pressure. The fact that the order of probabilities switch (leak highest for a pressure of 7 MPa
and rupture lowest; with rupture highest at 9 MPa), ie. the governing failure mode changes with
pressure, alerts one to the benefit of carrying out a parameter study with respect to pressure.
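For reference, the first-order method referred to throughout is commonly implemented as the
Hasofer-Lind / Rackwitz-Fiessler iteration sketched below. The limit state used is a mildly
curved toy function, chosen only to show the mechanics; it is not the pipeline model.

    import numpy as np
    from scipy.stats import norm

    def g(u):
        return u[0] ** 2 / 10.0 + u[1] + 2.5   # toy limit state in U space

    def grad_g(u, h=1e-6):
        # Central-difference gradient of g.
        return np.array([(g(u + h * e) - g(u - h * e)) / (2 * h)
                         for e in np.eye(len(u))])

    u = np.zeros(2)
    for _ in range(50):
        gv, gr = g(u), grad_g(u)
        # HL-RF update: move to the foot of the perpendicular on the
        # linearised limit state through the current point.
        u_new = (gr @ u - gv) * gr / (gr @ gr)
        if np.linalg.norm(u_new - u) < 1e-8:
            u = u_new
            break
        u = u_new

    beta = np.linalg.norm(u)
    print("beta =", beta, " FORM Pf =", norm.cdf(-beta))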
3.3.2 Importance of Pressure Variation
The parameter study on pressure variation has highlighted several important features of the
problem, as set out below.
(a) Switching of governing failure mode
The governing failure mode (the one, out of leak and rupture, that has the highest failure
probability) changes with internal pressure. At low pressures leak governs, at high pressures
rupture; knowledge of this fact may be important because the consequences of these failures
may be different, with different associated risks. The switchover takes place at a discontinuity,
rather than as a result of two curves crossing over one another.
Jumps of this sort occur when constraints, or failure criteria, activate or de-activate as the
parameter undergoing investigation changes. Meticulous and painstaking plotting of beta-points
onto failure surfaces has enabled this mechanism to be confirmed for this case study.
(b) Sensitivity of governing failure mode to SORM
If SORM analysis is used, in conjunction with the as-given distributions for basic variables,
significant changes to the probabilities of failure for the governing mode occur, as compared
with the values obtained from FORM analyses. The SORM results show that there appears to
be a "hotspot" where probabilities of failure rise extremely rapidly, possibly leading to
unacceptably high values within a "spike" of probability. The size of the spike, its character
("peakiness") and the pressure at which it occurs, appear to depend on the types of distributions
used for basic variables and their means, standard deviations and so forth. Intense peakiness
seems to be associated with the offset logistic distribution used for gouge length.
The reasons for the presence of this hotspot in the SORM results are not clear; it is possible that
there may be some highly curved region of the failure surface, that sits close to the origin in
basic variable space and exists under certain combinations of variables. Alternatively, there
may be some aberration, or mathematical singularity, in one or more of the failure functions.
This is, probably, the least likely possibility because such a singularity is likely to have been
picked up via some of the deterministic analyses, or through other investigators' use of the same
failure function. Nevertheless, if this is the case, then it may be that there are restricted ranges
of variables for which the failure functions are valid.
The Monte Carlo results do not exhibit a spike in probability, and are similar to the FORM
results. This is comforting and confirms that the problem is with the numerics of SORM, rather
than the behaviour of the mathematical functions used to describe the mechanism, or worse
some physical attribute or instability of real pipes that occurs at specific pressures.
3.3.3 Sensitivity and Robustness of Analysis Outcomes
From the specific analyses at 7 and 9 MPa, the relative importances of variables in respect of
calculated failure probabilities have been identified. These sorts of results justify a posteriori
the fact that some as-given variables are defined to be deterministic.
161
A high importance is attached to the mean of the normal distribution used for pipe wall
thickness. However, a low priority can be assigned to this over other important variables
because, with the availability of a wealth of manufacturing data along with good quality control
in production, little contentiousness is likely to be attached to the statistics of wall thickness.
Otherwise, it should come as no surprise that the most important variables are those associated
with the statistics of the defects: gouge length and gouge depth. Dent depth assumes no
importance whatsoever owing to the nature of its distribution; this is constructed so that 82% of
gouges are associated with zero dent depth, and only 18% of incidents of damage involve a
gouge in combination with a dent. Hence, it is not unexpected that all the reliability analyses
performed lead to failure probabilities associated with dentless gouges.
With regard to gouge length, replacement of the offset logistic distribution with what might be
judged as a reasonable Weibull substitute leads to no significant changes to failure probabilities
predicted by FORM. SORM results for the governing failure mode are more profoundly
affected however. As indicated above, the peakiness of the spike, and the pressure at which it
occurs, are affected by the type of distribution used for gouge length and the values ascribed to
its statistical parameters. Any such changes do not seem to affect calculated failure
probabilities at pressures sufficiently removed from the values at which the spike occurs.
The principal effects of changes to the distribution for gouge depth relate to changes in the value
of pressures at which the change in dominant failure mode occurs. Changes to the calculated
probabilities at pressures other than these may also occur.
There may also be a question mark against the SYSREL analysis options: whether the switching
off (the default) or on of an inactive constraint has a bearing on the calculated probabilities. In
FORM terms, the sensitivity may be significant; in SORM terms less so, although in both cases
the differences are attributable to the pressure "hotspot".
Model uncertainty is an important and significant effect. Usual practice is to assign a normal or
lognormal distribution to this, with a mean and standard deviation determined from
experimental measurements, and coupled to the efficacy of the theoretical predictions. The
absence of such data in the source document presents a problem, and the omission of model
uncertainty a serious difficulty in relation to confirming the robustness of analysis outcomes.
The data used for model uncertainty in this case study is postulated, but comes from a similar
methodology used to predict the bursting pressure of gouged and dented pipes [3]. The effects
of model uncertainty, and the changes in the magnitude of its characterising variables, appear to
be more serious than changes to the types of variables described above. In this case study,
effects are felt both in terms of the pressure at which the spike in the pressure-failure probability
curve for the governing mode occurs, and in terms of values of failure probability for pressure
outwith the jump / spike region.
Finally for this case study, and considering only the governing failure mode, issues of
robustness of analysis outcomes relate to:
Whether, at particular pressure(s) of interest, calculated failure probabilities are likely
to be significantly affected by reasonable changes to the values of parameters
characterising key input variables.
162
"Pressures of Interest" may include those within and those outwith the jump / spike region.
Thus, whether the analysis outcomes can be judged as robust in this case study is very strongly
dependent on verifying the spike as a true phenomenon, and on the veracity of model
uncertainty data.
3.4 CONCLUSIONS
The design pressure for pipelines has been developed historically on the basis of past
performance. It is very difficult to amass evidence to rationally argue and defend a change in
historical safety levels, and it is a very difficult and time-consuming process to change design
practice. However, if probabilistic arguments can be used to show that an increase in operating
pressure does not significantly increase the overall risk from the pipeline, then this is clearly a
valuable use of probabilistic methods.
However, the report on which this case study is based only looked at two pipeline operating
pressures: the existing pressure and the proposed uprated pressure. More detailed analysis in
this case study identified a change in governing failure mode from leak to rupture which
influences the consequence of failure. Some unexpected behaviour in failure probability at
intervening pressures was found using SORM, and this has subsequently been shown to be
erroneous by Monte Carlo simulation; this is despite the fact that SORM analysis usually gives
a more accurate estimate of the failure probability than FORM because the curvature of the
failure surface is accounted for.
Further work is being undertaken to understand the reasons for this unusual behaviour.
The case study has therefore been very useful in highlighting these areas of the problem and
demonstrating the need for careful investigation when such approaches are relied upon, as set
out in the Guidelines in Section 9 of the main report.
163
4. REFERENCES
1. Edwards A M, Espiner R J & Francis A. 'Example of the application of limit state,
reliability and risk based design to the uprating of an onshore pipeline'. BG
Technology Report No. R3125, Issue No 2, 6 September 1999.
2. R C P Consult. 'COMREL and SYSREL Users Manual'. RCP GmbH, Barer Strasse
48/111, 80799 München, 1996.
3. Yong Bai & Ruxin Song. 'Fracture assessment of dented pipes with cracks and
reliability based calibration of safety factor'. Draft paper submitted to the International
Journal of Pressure Vessels and Piping. Private Communication.

164
165
ANNEX B
CASE STUDY 2
OVERVIEW OF DRAFT EUROCODE prEN 13445-3 -
Unfired Pressure Vessels - Part 3: Design
166
CONTENTS

Page No.
1. INTRODUCTION AND OUTLINE 169
1.1 INTRODUCTION 169
1.2 OUTLINE OF THIS ANNEX 169
2. MAIN PHILOSOPHY OF DOCUMENT 171
2.1 PREAMBLE 171
2.2 DESIGN BY FORMULA (DBF) 171
2.3 DESIGN BY ANALYSIS (DBA) 172
2.3.1 Routes Available 172
2.3.2 Direct Route for DBA 172
2.3.3 Stress Categorisation route for DBA 174
3. BASIC DESIGN CRITERIA AND PARAMETERS 175
3.1 PREAMBLE 175
3.2 LOADINGS 175
3.2.1 Pressure 175
3.2.2 Temperature 175
3.2.3 Pressure-Temperature Combinations 175
3.2.4 Implied Partial Safety Factor on Pressure/Temperature in DBF 175
3.3 PRESSURE VESSEL THICKNESS 175
3.3.1 Overall Thickness and Possible Source of Uncertainty 175
3.3.2 Corrosion Allowance 176
3.3.3 Nominal Thickness 176
3.3.4 Analysis Thickness 176
3.4 NOMINAL DESIGN STRESS 176
3.5 WELDS 177
3.5.1 Governing Welded Joints 177
3.5.2 Weld Joint Coefficient 178
4. SHELLS UNDER INTERNAL PRESSURE 179
4.1 PREAMBLE 179
4.2 CYLINDRICAL & SPHERICAL SHELLS 179
4.2.1 General 179
4.2.2 Cylindrical Shells 179
4.2.3 Spherical Shells 179
4.3 PRESSURE VESSELS WITH RECTANGULAR SECTION 180
4.3.1 Types 180
4.3.2 Unreinforced Vessels 180
4.3.3 Reinforced Vessels 181
5. SHELL ENDS 182
5.1 PREAMBLE 182
5.2 DISHED ENDS 182
167
5.2.1 Hemispherical Ends 182
5.2.2 Torispherical Ends 182
5.3 CONES & CONICAL ENDS 183
5.3.1 Conical Shells 183
5.3.2 Cone/shell Junctions 183
5.4 FLAT ENDS 185
5.4.1 Types 185
5.4.2 Unpierced Circular Flat Ends Welded to Cylindrical Shells 185
5.4.3 Unpierced Circular Flat Ends Bolted to Cylindrical Shells 186
5.4.4 Pierced Circular Flat Ends 186
5.5 BOLTED DOMED ENDS 186
5.5.1 Types 186
5.5.2 Dome Thickness 186
6. OPENINGS 188
6.1 PREAMBLE 188
6.2 NOZZLES WHICH ENCROACH INTO THE KNUCKLE REGION 188
6.3 OPENINGS IN SHELLS 189
6.3.1 General 189
6.3.2 Isolated Openings 189
6.3.3 Multiple Openings 190
6.3.4 Openings Close To A Shell Discontinuity 190
6.4 OPENINGS IN PRESSURE VESSELS WITH RECTANGULAR
SECTION 190
7. FATIGUE 191
7.1 PREAMBLE 191
7.2 SIMPLIFIED FATIGUE ASSESSMENT 191
7.2.1 Overall Procedure 191
7.2.2 Stress Ranges 191
7.2.3 Fatigue Design Curves and Joint Classification 192
7.2.4 Assessment Rule 194
7.3 DETAILED FATIGUE ASSESSMENT 194
8. DIRECT ROUTE FOR DBA 195
8.1 PREAMBLE 195
8.2 CHARACTERISTIC VALUES 195
8.2.1 Actions 195
8.2.2 Resistance 196
8.3 PARTIAL SAFETY FACTORS 197
8.3.1 Actions 197
8.3.2 Resistance 197
8.4 FAILURE MODES/LIMIT STATES 199
8.4.1 General 199
8.4.2 Ductile Rupture / Gross Plastic Deformation 200
8.4.3 Progressive Plastic Deformation 201
8.4.4 Instability 202
8.4.5 Fatigue Failure 202
8.4.6 Static Equilibrium 202
168
9. STRESS CATEGORIZATION ROUTE FOR DBA 203
9.1 PREAMBLE 203
9.2 REPRESENTATIVE STRESSES 203
9.2.1 Elementary Stresses 203
9.2.2 Equivalent Stress 203
9.2.3 Equivalent Stress Range 203
9.3 STRESS DECOMPOSITION 204
9.3.1 Preamble 204
9.3.2 Membrane 204
9.3.3 Bending 204
9.3.4 Linearised 204
9.3.5 Non-Linearised 204
9.4 STRESS CLASSIFICATION 204
9.5 PROCEDURE 205
9.5.1 Preamble 205
9.5.2 Determination of Equivalent Stresses and Stress Ranges 205
9.5.3 Assessment of Equivalent Stresses and Stress Ranges 206
10. RELIABILITY IMPLICATIONS 209
10.1 PREAMBLE 209
10.2 DESIGN BY FORMULA (DBF) 209
10.2.1 DBF Process 209
10.2.2 Reliability Implications of DBF 210
10.3 DESIGN BY ANALYSIS (DBA) 211
10.3.1 DBA Process 211
10.3.2 Reliability implications of DBA 212
10.4 CLOSURE 214

169
1. INTRODUCTION AND OUTLINE
1.1 INTRODUCTION
The purpose of this Annex is to present a summary overview of the draft British
Standard/European Standard for the design of pressure vessels:
prEN 13445-3, July 1999. Unfired Pressure Vessels - Part 3: Design.
The reason for considering this code in the context of the present Project is that alternative
methods for pressure vessel design are being proposed. Traditionally, pressure vessels have
been designed using working stress or allowable stress methods; with the introduction of this
European code, limit state methods are being introduced, as is the direct use of experimental
techniques.
These alternative design methods are based on different philosophies. Clearly, there are likely
to be differences that feed through to the final pressure vessel design if these alternative
methodologies are followed, and there may be the potential for a significant change in safety
levels.
1.2 OUTLINE OF THIS ANNEX
In order to identify the potential differences, the code is first reviewed in detail. The
implications are then discussed in Section 10.
For the purposes of this overview, and to facilitate the requirements of this project, the contents
have been formed and ordered in a manner that is different to the way that the contents of the
draft standard are laid out. This has meant combining some areas of different Clauses under the
same heading in this overview. In addition, since the main thrust of the present project is in
pressure containment and for other practical reasons, certain parts of the standard are omitted
from this overview. These include the following:
Clause 8: shells under external pressure
Clause 11: flanges
Clause 13: heat exchanger tube sheets
Clause 14: expansion bellows
Clause 16: non-pressure loads
Annexes A and D to N, inclusive.
Thus the focus is on the structural aspects of pressure containment for pressure vessels.
The main philosophical thrust of the draft standard is dealt with first, followed by reviews of the
clauses that deal with basic design criteria and parameters, shells under internal pressure, shell
ends, openings, and fatigue. The two sections following these are concerned with the design by
analysis (DBA) annexes of the draft standard, those dealing with the direct route and the stress
categorisation route for DBA.
170
The breakdown of these sections (in terms of Clauses of the Draft Standard) is as shown in
Table 1.1.
Section of this overview                          Clauses in Draft Standard
2  Main Philosophy of Document                    All, in outline
3  Basic Design Criteria and Parameters           Clauses 5 and 6
4  Shells Under Internal Pressure                 Clauses 7 [part] and 15 [part]
5  Shell Ends                                     Clauses 7 [part], 10 and 12
6  Openings                                       Clauses 7 [part], 9 and 15 [part]
7  Fatigue                                        Clauses 17 and 18
8  Direct Route for Design by Analysis            Annex B (informative)
9  Stress Categorisation Route for Design
   by Analysis                                    Annex C (informative)

Table 1.1 Coverage of the Clauses of prEN 13445-3:1999 by this Overview

171
2. MAIN PHILOSOPHY OF DOCUMENT
2.1 PREAMBLE
Broadly, the draft Standard allows the design of pressure vessels to be carried out in three
alternative ways:
design by formula (DBF)
design by analysis (DBA)
experimental techniques.
DBA or experimental techniques can be used to supplement or replace DBF; however,
experimental techniques will not figure in this overview.
The majority of the standard (Clauses 6 to 18) concerns itself with DBF. The rules given in
Clauses 6 to 16 will provide satisfactory designs for pressure loading of a predominantly non-
cyclic nature. For response to cyclic loading, i.e. fatigue assessment, the calculations must
be performed according to the requirements of Clauses 17 and 18. Clause 17 is confined to
pressure fluctuations; cyclic loads other than pressure must be dealt with under the provisions of
Clause 18.
The rules for DBA are given in two informative Annexes B and C. Annex B deals with the
design route for DBA proper, whereas Annex C covers the use of the stress categorisation route
within DBA (i.e. a less general situation).
In situations where a component is subjected to a loading other than pressure, or no rules are
supplied in Clauses 6 to 18, then the designer is obliged to follow the provisions of DBA. This
may be summarised as follows:
Loading Type                  Design Options Available
Non-cyclic pressure           DBF & DBA
Cyclic pressure               DBF
Other non-cyclic loadings     DBA
Other cyclic loadings         DBF

Table 2.1 Options for Design According to prEN 13445-3:1999
DBF and DBA are subjected to outline review in the next two subsections. The more detailed
reviews are given in further sections related to particular components, or loadings as set out in
Table 1.1.
2.2 DESIGN BY FORMULA (DBF)
Within the Clauses that relate to DBF in the Draft Standard, simple (and, occasionally, not-so-
simple) formulae are given to compute the quantities of interest in the design, e.g.
172
thickness of a cylindrical shell under internal pressure
end thicknesses
fatigue life.
In general, for calculations that involve considerations of strength, the known parameters used
are the applied pressure and a material nominal design stress. The required dimension(s) of the
component are the unknown parameters. Provisions may also be given (essentially as
rearrangements of the formulae for dimension(s)) for rating components: i.e. determining the
maximum permissible pressure, P_max, given fixed dimensions. This latter mode of formulation
can be interpreted as the resistance of the component in normal structural terminology.
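As an illustration of the DBF pattern (known pressure and nominal design stress in, required
dimension out, with the rating formula as a rearrangement), the sketch below uses a generic
thin-cylinder formula of the kind found in such codes. The formula and all the numbers are
illustrative; they are not quoted from prEN 13445-3.

    # Generic DBF-style sizing of a cylindrical shell under internal pressure.
    # e = P*Di / (2*f*z - P) is an illustrative thin-shell formula, NOT a
    # quotation from prEN 13445-3.
    P = 1.5      # design pressure, MPa
    Di = 1200.0  # inside diameter, mm
    f = 150.0    # nominal design stress, MPa
    z = 0.85     # weld joint coefficient

    e = P * Di / (2.0 * f * z - P)       # required thickness, mm
    P_max = 2.0 * f * z * e / (Di + e)   # rating: max pressure for a fixed e

    print(f"required thickness e = {e:.2f} mm, P_max = {P_max:.2f} MPa")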
Fatigue assessment is based around combining stress ranges, standard details and corresponding
S-N curves, along with the Palmgren-Miner cumulative damage rule to determine the fatigue
life of a given component.
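A minimal sketch of the Palmgren-Miner summation follows; the S-N curve constants are
illustrative placeholders, not values taken from the draft standard.

    # Palmgren-Miner cumulative damage: D = sum(n_i / N_i), with D >= 1
    # denoting exhaustion of the fatigue life.
    # Illustrative S-N curve N = C / S^m (C and m are NOT from prEN 13445-3).
    C, m = 1.0e12, 3.0

    def cycles_to_failure(stress_range_mpa):
        return C / stress_range_mpa ** m

    # (stress range in MPa, applied cycles) for each band of the load spectrum
    spectrum = [(120.0, 2.0e4), (80.0, 1.0e5), (40.0, 5.0e5)]

    damage = sum(n / cycles_to_failure(s) for s, n in spectrum)
    print(f"Miner damage sum D = {damage:.3f}")   # acceptable while D < 1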
2.3 DESIGN BY ANALYSIS (DBA)
2.3.1 Routes Available
Within the Draft Standard, two routes through DBA are offered:
direct route
stress categorisation route.
2.3.2 Direct Route for DBA
The direct route through DBA is dealt with in Annex B of the draft Standard. DBA is intended
to be perfectly general, providing rules for the design of any component under any action. The
detailed overview is given in Section 8 of this document; what follows here is an outline.
The Annex is written in a very philosophical manner, but in essence appears to be taking a limit
state, partial safety factor approach to design. The method comprises the following stages:
(a) specify the relevant failure modes and limit states, taking into account the loading
types
(b) specify the principle
(c) select an appropriate application rule
(d) carry out a design check using the principle(s) and the application rule(s)
(e) if the principle is not satisfied, repeat the design check using amended loading,
geometry or material.
The main failure modes and associated limit states are listed in the Annex. Limit states are
classified as either ultimate or serviceability:
an ultimate limit state is defined as a structural condition (of the component or vessel)
beyond which the safety of personnel could be endangered
a serviceability limit state is defined as a structural condition (of the component or
vessel) beyond which the service criteria specified for the component are no longer
met.
173
A principle is defined as a general statement, definition, or requirement for a given failure
mode for which there is no alternative (unless specifically stated otherwise). Generally, the
principle is stated to ensure that limit states are not exceeded, by requiring that (for example)
the:
combination of the design actions does not exceed the design resistance
design effect of the design actions does not exceed the design resistance.
The application rule enables the principle to be effected quantitatively by relating the design
actions, or effects, to the design resistances. The design check is simply the assessment of a
component for a load case by means of an application rule.
An action is a physical influence that causes stress and/or strain in a component; it connotes
application. An effect is the response of a component to an action, i.e. a manifestation of an
action. Design actions and effects are built up from a combination of characteristic values of
actions and partial safety factors. Generally, design effects are a function of design actions
and dimensions. Design resistances are comprised of characteristic values of resistances and
partial safety factors. Characteristic values are stated to be upper or lower fractiles of statistical
distributions of the parameter concerned, or reasonably foreseeable values that envelope those
parameters.
To put this symbolically:

$A_D = \gamma_A A$

where $A_D$ = the design action
$\gamma_A$ = the partial safety factor applicable to the particular action
$A$ = the characteristic value of the action.

$E_D = E(A_D, a_D, \ldots)$

where $E_D$ = the design effect
$a_D$ = dimension(s).

$R_D = \frac{R}{\gamma_R}$

where $R$ = the characteristic resistance
$\gamma_R$ = the partial safety factor corresponding to the resistance.

In general the principle is effected in such a way that:

$A_D \le R_D$ and/or $E_D \le R_D$
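As an illustration of how these quantities combine in a design check, the following minimal Python sketch (all names and numbers are illustrative assumptions, not values from the Draft Standard) applies the relations above to a single action and resistance, with an identity effect function so that the check reduces to A_D <= R_D:

    def design_check(A, gamma_A, R, gamma_R, effect=lambda a_d: a_d):
        """Return True if the principle E_D <= R_D is satisfied."""
        A_D = gamma_A * A      # design action from characteristic value
        E_D = effect(A_D)      # design effect E(A_D, a_D, ...)
        R_D = R / gamma_R      # design resistance from characteristic value
        return E_D <= R_D

    # Illustrative numbers: pressure action 1.0 MPa with gamma = 1.2,
    # characteristic resistance 1.5 MPa with gamma_R = 1.25.
    print(design_check(1.0, 1.2, 1.5, 1.25))  # True, since 1.2 <= 1.2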
2.3.3 Stress Categorisation route for DBA
The stress categorisation route represents a specific application of the DBA route as it may be
applied to stress analysis. The method involves the interpretation of stresses calculated at any
point in any part of a vessel or component, followed by verification of their admissibility by
means of appropriate assessment criteria.
What are termed elementary stresses are determined first. They are computed using linear-
elastic stress analysis methods; these stresses are then decomposed into membrane and bending
components and classified into a number of different categories. Once categorised, sums for
simultaneous load cases are obtained, and turned into equivalent stresses for static strength
calculation purposes. Ranges of equivalent stresses are determined for fatigue calculation
purposes.
In cases of static stress, the principle is that each of the categories of equivalent stress is less
than or equal to a factor times the nominal design stress of the material. In this context, the
stress categories can be interpreted as effects, and the factored nominal design stress as
resistance under the DBA philosophy. Simple inequalities are used for the application rule in
the design checks. Symbolically, this route may be expressed as:
$(\sigma_{eq})_{\mathrm{CATEGORY}} \le F\, f$

where $(\sigma_{eq})_{\mathrm{CATEGORY}}$ = the category of equivalent stress concerned
$F$ = the factor on nominal design stress corresponding to the stress category
$f$ = the nominal [material] design stress.
It is important to emphasise that the factor F is not a partial safety factor on resistance. This
factor takes values between unity and three, and the intention in using it appears to be to allow
for non-linear elasto-plastic behaviour in the results of a linear-elastic analysis. A partial safety
factor on resistance is allowed for in the definition of the nominal design stress, which would
have the same value as would be used in the DBF approach (see above).
3. BASIC DESIGN CRITERIA AND PARAMETERS
3.1 PREAMBLE
This section of the overview considers the basic design criteria and parameters set out in the
Draft Standard. Items covered include loadings (pressure, temperature and combinations
thereof), pressure vessel thickness, the nominal design stress of the pressure vessel material, and
weld aspects.
3.2 LOADINGS
3.2.1 Pressure
The maximum allowable pressure (not to be confused with the maximum permissible pressure $P_{max}$, see Subsection 2.2, above) is denoted by PS and is defined at a specified location within the vessel, or compartment of the vessel, under design. This is at the location of connection of protective and/or limiting devices (pressure relief valves). For internal pressure, PS (corresponding to normal operating conditions) is not to be less than:
the pressure at that location when the pressure release device starts to actuate
the maximum pressure that can be attained in service at the same specified location where this pressure is not limited by a relieving device.
The design pressure is denoted by $P_d$ and is taken to be not less than PS.
3.2.2 Temperature
The design temperature $t_d$ is to be taken as not less than the maximum fluid temperature corresponding to $P_d$.
3.2.3 Pressure-Temperature Combinations
There may be more than one set of coincident design pressures and temperatures. The
calculation pressure P is to be based on the most severe condition of coincident pressure and
temperature. The calculation temperature t is to be not less than the actual metal temperature
expected in service, or the mean wall temperature where the through-thickness variation in
temperature is known. The calculation temperature is to include an adequate margin (which is
not specified in the Draft Standard) to cover uncertainties in temperature prediction.
3.2.4 Implied Partial Safety Factor on Pressure/Temperature in DBF
As will be shown in Sections 4, 5 and 6, below, DBF uses the calculation pressure P in
determining thickness and other dimensions. The Draft Standard, therefore, does not specify
any partial safety factor on pressure/temperature for DBF and hence a partial safety factor of 1.0
is implied.
3.3 PRESSURE VESSEL THICKNESS
3.3.1 Overall Thickness and Possible Source of Uncertainty
With regard to corrosion, erosion and protection, the Draft Standard gives a number of member
thickness definitions:
$e$ = the required thickness, as may be calculated from a pressure containment criterion via DBF
$c$ = the corrosion/erosion allowance (see Subsection 3.3.2, below)
$\delta_e$ = the absolute value of the negative tolerance on the thickness (taken from the material standard)
$\delta_m$ = the allowance for possible thinning during the manufacturing process (e.g. after plate is rolled).
These basic thicknesses (including the corrosion allowance) combine to make up the nominal
and analysis thicknesses as set out below.
3.3.2 Corrosion Allowance
In all cases where reduction of the wall thickness is possible as a result of surface corrosion or
erosion (of either or both surfaces), additional thickness sufficient for the design life of the
vessel components is to be provided. This thickness is to be not less than 1mm, except in
special cases.
A corrosion allowance is not required when the pressure vessel walls can be inspected
adequately from both sides, or erosion can be excluded, or the materials used for the pressure
vessel walls are corrosion resistant relative to the content, or are reliably protected.
3.3.3 Nominal Thickness
The nominal wall thickness of the pressure vessel, denoted by $e_n$, is made up from various thicknesses as follows:

$e_n = e + c + \delta_e + x$

where $x$ = the extra thickness needed to make up to the nominal thickness, i.e. to a standard manufacturer's product thickness.
3.3.4 Analysis Thickness
The analysis thickness, denoted by $e_a$, is made up from various thicknesses, as follows:

$e_a = e + x$

Thus, the analysis thickness is that assumed to be available to resist the loadings (primarily internal pressure) after manufacture. It is to be noted that it does not include the corrosion allowance or the negative tolerance on thickness. The analysis thickness is used in rating
calculations via the DBF route (see Subsection 2.2, above).
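The two relations are trivial to mirror in code; a minimal sketch follows, with all values in mm and purely illustrative, and with x denoting the extra thickness as reconstructed above:

    # Thickness build-up per Subsections 3.3.3 and 3.3.4 (illustrative values, mm).
    e = 8.2          # required thickness from the DBF pressure criterion
    c = 1.0          # corrosion/erosion allowance
    delta_e = 0.3    # absolute value of the negative thickness tolerance
    x = 0.5          # extra thickness up to a standard product thickness

    e_n = e + c + delta_e + x   # nominal thickness
    e_a = e + x                 # analysis thickness (excludes c and delta_e)
    print(e_n, e_a)             # 10.0 8.7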
3.4 NOMINAL DESIGN STRESS
The nominal design stress is the term used to denote the steel material strength f to be used in
DBF. In the Draft Standard it is taken to depend on a number of factors, as follows:
ambient temperature of the steel at the conditions that are the subject of the design
177
whether design or testing conditions are the subject of the design
steel type and/or properties, including minimum rupture elongation A.
The nominal design strength is determined from a characteristic property of the stress-strain
curve of the steel (as specified in the material standard):
minimum upper yield strength $R_{eH}$
0.2% proof strength $R_{p0.2}$
1.0% proof strength $R_{p1.0}$
minimum tensile strength $R_m$
divided by a factor of safety. A further multiplying factor of 0.9 is applied to steels under the
design conditions for testing Category 4 vessels. The values to be adopted for nominal design
stress are summarised in Table 3.1.
Steel Type | Minimum Rupture Elongation A | Design Conditions (1)(2)(3) | Testing Conditions (2)(3)
Steels other than austenitic | - | min( R_p0.2/t / 1.5 , R_m/20 / 2.4 ) | R_p0.2/t_test / 1.05
Austenitic steels | A >= 30% | R_p1.0/t / 1.5 | R_p1.0/t_test / 1.05
Austenitic steels | A >= 35% | R_p1.0/t / 1.5, or min( R_p1.0/t / 1.2 , R_m/t / 3.0 ) | max( R_p1.0/t_test / 1.05 , R_m/t_test / 2.0 )
Steel castings | - | min( R_p0.2/t / 1.9 , R_m/20 / 3.0 ) | R_p0.2/t_test / 1.33

(1) For testing category 4, the nominal design stress is multiplied by 0.9.
(2) The minimum upper yield strength R_eH may be used instead of R_p0.2.
(3) The subscript .../t refers to the material property at temperature t.

Table 3.1 Nominal Design Stress f for use in DBF
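To show how the table's min/max rules combine, here is a small Python sketch of the design-conditions column; the category labels and the material values used in the example are illustrative assumptions, not part of the Standard:

    def f_design(steel, Rp02_t=None, Rp10_t=None, Rm_20=None, Rm_t=None):
        """Nominal design stress f per the design-conditions column of Table 3.1 (MPa)."""
        if steel == "non-austenitic":
            return min(Rp02_t / 1.5, Rm_20 / 2.4)
        if steel == "austenitic_A30":            # austenitic, A >= 30%
            return Rp10_t / 1.5
        if steel == "austenitic_A35":            # austenitic, A >= 35%
            return max(Rp10_t / 1.5, min(Rm_t / 3.0, Rp10_t / 1.2))
        if steel == "casting":
            return min(Rp02_t / 1.9, Rm_20 / 3.0)
        raise ValueError(steel)

    print(f_design("non-austenitic", Rp02_t=235.0, Rm_20=360.0))  # 150.0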
3.5 WELDS
3.5.1 Governing Welded Joints
In designing for the thickness of certain welded components, a correction to the nominal design
stress is made for what are termed governing welded joints. This correction is referred to as a
weld joint coefficient, denoted as z, and its values are given in the next subsection.
Table 3.2 gives examples of joints that are classified as governing welded and non-governing
welded.
Governing Welded Joints:
longitudinal or helical welds in a cylindrical shell
longitudinal welds in a conical shell
any main weld in a spherical shell/end
main welds in a dished end fabricated from two or more plates

Non-governing Welded Joints:
circumferential welds between a cylindrical or conical shell and a cylinder, cone, flange, or ends other than hemispherical
welds attaching nozzles to shells
welds subjected exclusively to compressive stress

Table 3.2 Examples of Governing and Non-governing Welded Joints
3.5.2 Weld Joint Coefficient
The weld joint coefficient acts as a multiplier on nominal design strength (see Subsection 3.4,
above) as used in the DBF calculations for wall thickness or rating. The value of the coefficient
depends on the testing group of the pressure vessel, i.e. the stringency of testing of the vessel,
from high (group 1) to low (group 4). The values for the weld joint coefficient are set out in
Table 3.3, below.
Testing Group 1 & 2 3 4
Weld Joint
Coefficient z
1.0 0.85 0.70
Table 3.3 Weld Joint Coefficient and Corresponding Test Group
In parent metal, away from principal welded joints, a value of z = 1.0 should be used. For
designing for exceptional and testing conditions, a value of unity should be used irrespective of
the testing group.
4. SHELLS UNDER INTERNAL PRESSURE
4.1 PREAMBLE
This section deals with the principal types of vessel used for pressure containment as they are
covered by the Draft Standard:
cylindrical and spherical shells
rectangular section pressure vessels.
The geometry of cylindrical and spherical shells is as their name suggests. Rectangular section
pressure vessels are of box-type construction from stiffened plating.
4.2 CYLINDRICAL & SPHERICAL SHELLS
4.2.1 General
Simple membrane theory formulae are used for the design of the bulk thickness of the
pressure vessel, that is to say the part of the vessel away from openings etc. where
reinforcement would be required.
The ratio of thickness to outside diameter of shells designed by this method is limited to 0.16. In the
cases of both cylindrical and spherical shells, formulae are given for determining the minimum
required thickness (given other dimensions and nominal design stress), or for rating the vessel
(determining the maximum permissible pressure P
max
given all the dimensions and nominal
design stress).
4.2.2 Cylindrical Shells
The required thickness is given by:

$e = \mathrm{MAX}\left[ \frac{P D_i}{2 f z - P},\ \frac{P D_e}{2 f z + P} \right]$

where $D_i$ and $D_e$ are the internal and external diameters of the vessel, respectively.

The maximum permissible pressure is given by:

$P_{max} = \frac{4 f z e_a}{D_i + D_e}$

The limitation on shell slenderness given in Subsection 4.2.1, above, limits the formulae to the following pressures:

$P,\ P_{max} \le 0.381 f z$
4.2.3 Spherical Shells
The required thickness is given by:

$e = \mathrm{MAX}\left[ \frac{P D_i}{4 f z - P},\ \frac{P D_e}{4 f z + P} \right]$

The maximum permissible pressure is given by:

$P_{max} = \frac{8 f z e_a}{D_i + D_e}$

The limitation on shell slenderness given in Subsection 4.2.1, above, limits the formulae to the following pressures:

$P,\ P_{max} \le 0.762 f z$
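These DBF formulae translate directly into code. The sketch below (Python; units MPa and mm; all numbers illustrative) computes the required thickness and the corresponding rating for both shell forms:

    def e_cylinder(P, D_i, D_e, f, z):
        """Required thickness of a cylindrical shell under internal pressure."""
        return max(P * D_i / (2 * f * z - P), P * D_e / (2 * f * z + P))

    def pmax_cylinder(e_a, D_i, D_e, f, z):
        """Maximum permissible pressure for a given analysis thickness."""
        return 4 * f * z * e_a / (D_i + D_e)

    def e_sphere(P, D_i, D_e, f, z):
        return max(P * D_i / (4 * f * z - P), P * D_e / (4 * f * z + P))

    def pmax_sphere(e_a, D_i, D_e, f, z):
        return 8 * f * z * e_a / (D_i + D_e)

    f, z, P = 150.0, 1.0, 1.2                  # design stress, weld coefficient, pressure
    D_i, D_e = 992.0, 1000.0
    assert P <= 0.381 * f * z                  # applicability limit for the cylinder
    print(e_cylinder(P, D_i, D_e, f, z))       # ~3.98 mm
    print(pmax_cylinder(4.0, D_i, D_e, f, z))  # ~1.20 MPa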
4.3 PRESSURE VESSELS WITH RECTANGULAR SECTION
4.3.1 Types
Pressure vessels with rectangular cross-sections have rounded corners. They may be divided
into two main types:
unreinforced vessels, and
reinforced vessels.
Unreinforced vessels, in turn, may have a central stay within the interior connecting the longer
sides of the vessel. This is present to reduce the bending stresses in the longer sides of the
cross-section.
Reinforced vessels have a continuous frame that may either follow the contour of the vessel or
form a closed rectangle. The reinforcing members are fitted to the outside of the vessel in a
plane perpendicular to the vessel's long axis, so as to circumscribe the cross-section. The reinforcing members themselves can take a variety of forms, from U-sections, angles and tees, to plate assemblies.
4.3.2 Unreinforced Vessels
For vessels without a stay, and where the thicknesses of the long and short sides of the cross-section are the same, the Draft Standard gives straightforward formulae for the membrane and bending stresses ($\sigma_m$ and $\sigma_b$, respectively) at five essential locations. These are at the:
mid-point of the short side
junction between the short side and the corner quadrant transition between the long and
short sides
junction between the long side and the corner quadrant transition between the long and
short sides
mid-point of the long side, and
within the quadrant corner.
The membrane stresses are the equivalent of the circumferential stresses in the case of circular
cylindrical vessels.
Where the vessel has a stay, in the form of a partition plate, the formulae relate to the following
locations:
mid-point of the short side (membrane and bending)
junction between the short side and the corner transition between the long and short
sides (membrane and bending)
junction between the long side and the corner quadrant transition between the long and
short sides (membrane and bending)
junction between the mid-point of the long side and the partition plate (membrane and
bending in the long side, membrane in the partition plate).
Allowable stresses for an unreinforced vessel are set by limiting the membrane stress and the
sum of the membrane and bending stresses in the following manner:
$\sigma_m \le f z$
$\sigma_m + \sigma_b \le 1.5 f z$

Generally, the stresses can be expressed as:

$\sigma_m = \frac{P}{e}\, F_m(\text{other geometry of vessel})$

$\sigma_b = \frac{P}{e^2}\, F_b(\text{other geometry of vessel})$
Hence, these formulae can be manipulated to compute required thickness, or to rate the pressure
vessel given an analysis thickness.
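As a sketch of how the rectangular-vessel check operates, the fragment below evaluates the two inequalities; the shape functions F_m and F_b are hypothetical stand-ins for the geometry terms tabulated in the Draft Standard, and all numbers are illustrative:

    def rect_stresses(P, e, F_m, F_b):
        """Membrane and bending stresses for an unreinforced rectangular vessel."""
        return (P / e) * F_m, (P / e**2) * F_b

    def rect_acceptable(sigma_m, sigma_b, f, z):
        """Limits on membrane and membrane-plus-bending stress."""
        return sigma_m <= f * z and sigma_m + sigma_b <= 1.5 * f * z

    s_m, s_b = rect_stresses(P=0.5, e=10.0, F_m=400.0, F_b=2.0e4)
    print(rect_acceptable(s_m, s_b, f=150.0, z=0.85))  # True for these numbers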
4.3.3 Reinforced Vessels
The design process for reinforced vessels is more complicated than unreinforced ones. Stress
checks have to be performed on:
shear stresses at the junctions between reinforcing members and the walls of the vessel in order
to size welds,
cross-section-wise membrane, bending and shear stresses in reinforcing member elements, and
longitudinal membrane and bending stresses in vessel walls between or inside reinforcing
elements.
Limitations on membrane and bending stresses are set as in Subsection 4.3.2, above. Shear stresses are restricted to the following:

$\tau \le 0.5 f$
5. SHELL ENDS
5.1 PREAMBLE
This section relates to the types of end closures used for cylindrical pressure vessels. The Draft
Standard deals with four main types:
dished ends (which may be hemispherical or torispherical)
cones or conical ends
flat ends (which are generally flat plates, welded to the adjoining cylindrical shell or
bolted via a flange), and
bolted domed ends (comprising a spherical portion, with a peripheral ring-beam to
effect the connection to the adjoining cylindrical shell via bolting to a flange).
5.2 DISHED ENDS
5.2.1 Hemispherical Ends
The required thickness for a hemispherical end is determined using the same equations as used to design a spherical pressure vessel (see Subsection 4.2.3, above). The wall thickness of the adjoining cylindrical shell must be kept to the minimum requirement set out in Subsection 4.2.2, above, up to the junction between the cylinder and the end. Given that, for a given pressure and
outside diameter, the thickness required for a spherical shell will be less than that for a
cylindrical shell, some tapering of the wall thickness in the end will be necessary.
5.2.2 Torispherical Ends
A torispherical arrangement allows the outside diameter of the spherical end to be larger than
that of the adjoining cylinder. The part-toroidal portion of the end allows the transition between
the cylindrical shell and the part-spherical end to be made, and is referred to as the knuckle.
Formulae are given to determine the required thickness of the toroidal and spherical parts, but
these are only applicable if certain geometrical limitations are observed:
$R \le D_e$
$0.001 D_e \le e \le 0.08 D_e$
$r \ge 2e$
$0.06 D_i \le r \le 0.2 D_i$

where $r$ = inside radius of the toroidal part
$e$ = required thickness of the toroidal part.

In addition, if $e < 0.003 D_i$, the formulae are limited in application to carbon steel and austenitic stainless steel ends with a design temperature not exceeding 100 °C.
The required knuckle thickness, e, is given by:
$e = \mathrm{MAX}\left[ \frac{P R}{2 f z - 0.5 P},\ \ \frac{\beta P (0.75 R + 0.2 D_i)}{f},\ \ (0.75 R + 0.2 D_i) \left( \frac{P}{111 f_b} \left( \frac{D_i}{r} \right)^{0.825} \right)^{1/1.5} \right]$
The first term in this expression is the thickness requirement for the spherical part of the end,
and generally the required wall thickness of the toroidal part will be between this and that of the
adjoining cylindrical shell. The parameter $\beta$ is obtained graphically in the code, and depends on the relative geometries of the toroidal and the spherical parts.

The term $f_b$ differs from the nominal design stress: it accounts for the use of stainless steel ends which, if they are not cold spun, will have $f_b$ less than $f$. If the stainless steel ends are cold spun, the value of $f_b$ is enhanced by a multiplying factor of 1.6 to allow for the strain-hardening property of the material. It is permissible to reduce the thickness of the spherical part to its minimum required value over an area of the end, as long as the periphery of this area comes no closer than $\sqrt{R e}$ to the knuckle.
The formulae given above for thickness can be re-arranged to give the corresponding rating
formulae.
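The three-term maximum above lends itself to a simple function; since β must be read from the graphs in the code, it is supplied here as an input. A minimal sketch, with illustrative values:

    def knuckle_thickness(P, R, D_i, r, f, f_b, z, beta):
        """Required knuckle thickness as the largest of the three criteria above."""
        e_s = P * R / (2 * f * z - 0.5 * P)              # spherical part
        e_y = beta * P * (0.75 * R + 0.2 * D_i) / f      # knuckle (beta from graphs)
        e_b = (0.75 * R + 0.2 * D_i) * (
            (P / (111 * f_b)) * (D_i / r) ** 0.825) ** (1 / 1.5)
        return max(e_s, e_y, e_b)

    print(knuckle_thickness(P=1.0, R=1000.0, D_i=1000.0, r=100.0,
                            f=150.0, f_b=150.0, z=1.0, beta=0.9))  # ~5.7 mm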
5.3 CONES & CONICAL ENDS
5.3.1 Conical Shells
Simple formulae for conical shells are given to either size the thickness, or rate the shell for a
given geometry. These formulae are as follows:
$e = \mathrm{MAX}\left[ \frac{P D_i}{2 f z - P} \cdot \frac{1}{\cos\alpha},\ \frac{P D_e}{2 f z + P} \cdot \frac{1}{\cos\alpha} \right]$

where, in this equation, $\alpha$ is the semi-angle at the apex of the cone, which must not be greater than 60° for the formulae to apply. It should be noted that the required thickness will vary with location within the cone, because the inside and outside diameters change along the cone's axis. Also:

$\frac{e \cos\alpha}{D_c} \ge 0.001$

where $D_c$ is the mean diameter of the cylinder at the cylinder/cone junction. Such junctions may be direct, in the sense of having no transition between the cone and the adjoining cylindrical shell, or may have a toroidal knuckle.

The rating formula is:

$P_{max} = \frac{4 f z e_a \cos\alpha}{D_i + D_e}$
5.3.2 Cone/shell Junctions
Consideration of junctions involves three thicknesses:
the cylinder thickness
the cone thickness
the required junction thickness.
The required junction thickness is computed from formulae provided in the Draft Standard. In the event that the junction thicknesses $e_1$ and $e_2$ are greater than the body thicknesses of the cylinder and cone, respectively, then the larger thickness must be maintained for specified distances along the cylinder and cone.
Considering junctions that are not offset with respect to their axes of rotation, three types are
possible:
junction between the large end of a cone and a cylinder without a knuckle
junction between the large end of a cone and a cylinder with a knuckle
junction between the small end of a [truncated] cone and a cylinder.
The first of these only is described in detail below.
The required thickness of the cylinder adjacent to the junction between the large end of a cone and the cylinder without a knuckle, $e_1$, is determined from:

$e_1 = \mathrm{MAX}\left[ e_c,\ \frac{\beta P D_c}{2 f} \right]$

where $e_c$ = the thickness of the main body of the cylindrical shell

$\beta = \frac{1}{3} \sqrt{\frac{D_c}{e_1}}\ \frac{\tan\alpha}{1 + 1/\sqrt{\cos\alpha}} - 0.15$

It is evident that the computation involving $e_1$ is iterative.
The required thickness of the cone adjacent to the junction, $e_2$, is determined from:

$e_2 = \mathrm{MAX}\left[ e,\ \frac{\beta P D_c}{2 f} \right]$

where $e$ is the required cone thickness (see Subsection 5.3.1, above).
When a knuckle is used, the inside radius of the toroid is limited to not greater than $0.3 D_c$.
Determination of the required knuckle thicknesses in this case involves a quite complex iterative
calculation.
The computation for the junction between the small end of a cone and a cylinder is similarly
complicated.
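Because β depends on e_1, the computation is a natural fixed-point iteration. A minimal sketch, assuming the reconstructed formulae above and illustrative inputs:

    import math

    def junction_thickness_e1(P, D_c, f, e_c, alpha_deg, tol=1e-6, max_iter=200):
        """Iterate e_1 = MAX(e_c, beta*P*D_c/(2f)) with beta depending on e_1."""
        alpha = math.radians(alpha_deg)
        e_1 = e_c                                          # starting estimate
        for _ in range(max_iter):
            beta = (math.sqrt(D_c / e_1) / 3.0
                    * math.tan(alpha) / (1.0 + 1.0 / math.sqrt(math.cos(alpha)))
                    - 0.15)
            e_new = max(e_c, beta * P * D_c / (2.0 * f))
            if abs(e_new - e_1) < tol:
                return e_new
            e_1 = e_new
        raise RuntimeError("fixed-point iteration did not converge")

    print(junction_thickness_e1(P=1.0, D_c=1000.0, f=150.0, e_c=4.0, alpha_deg=30.0))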
5.4 FLAT ENDS
5.4.1 Types
There are three main categories of flat ends:
unpierced circular, welded to the adjoining cylindrical shell
unpierced circular, bolted to the adjoining cylindrical shell
pierced flat ends, either welded or bolted to the adjoining cylindrical shell.
5.4.2 Unpierced Circular Flat Ends Welded to Cylindrical Shells
This category can be further subdivided into ends with or without a hub. A hub is a toroidal
segment that rounds-off an otherwise sharp corner and hence allows butt welds between the
cylinder, hub and flat end. Without a hub, fillet welds are necessary to connect the cylindrical
shell directly to the flat end, unless a relief groove is used.
The minimum required thickness for a uniform flat end with a hub is given by:
$e = C_1 (D_i - r) \sqrt{\frac{P}{f}}$

where $r$ = inside radius of the hub
$D_i$ = inside diameter of the adjoining cylindrical shell
$C_1$ = a coefficient that is a function of $P/f$ and $e_s/D_i$, obtainable graphically
$e_s$ = the wall thickness of the adjoining circular cylinder.
For a fillet welded flat end without a relief groove, the minimum required thickness depends on
whether a normal operating, exceptional operating, or hydrostatic testing case is being
considered.
For the exceptional operating and hydrostatic testing cases the required thickness is computed
from the formula given above, with r set to zero, and the test pressure and nominal design stress
used.
For a normal operating case the following applies:
$e = \mathrm{MAX}\left[ C_1 D_i \sqrt{\frac{P}{f}},\ C_2 D_i \sqrt{\frac{P}{f_{min}}} \right]$

where $C_2$ = a coefficient that is a function of $P/f_{min}$ and $e_s/D_i$, obtainable graphically
$f_{min}$ = the minimum value of the nominal design stress of the flat end, or the nominal design stress of the cylindrical shell at the calculation temperature of the shell, i.e. $\mathrm{MIN}[f, f_s]$.
Where the flat end has a relief groove, the minimum required thickness is determined at the
bottom of the groove according to:
$e = \mathrm{MAX}\left[ e_s,\ e_s \frac{f_s}{f} \right]$
5.4.3 Unpierced Circular Flat Ends Bolted to Cylindrical Shells
This category can be further subdivided into ends with a narrow-faced gasket or with a full-
faced gasket. Narrow-faced gaskets lie inboard of the bolts connecting the end to the cylindrical
shell. Full-face gaskets, on the other hand, have bolt holes penetrating them and thus provide a
seal on both sides of the bolts.
The required thicknesses are derived from formulae similar to those cited above for welded ends, insofar as they involve the square root of P/f. However, they also involve the various geometries associated with bolt pitch circle diameter and gasket reaction diameter.
For example, in the case of a flat end with a full-face gasket, the required thickness is
determined from:
$e = 0.41 C \sqrt{\frac{P}{f}}$

where $C$ is the bolt pitch circle diameter.
5.4.4 Pierced Circular Flat Ends
Openings and nozzles in flat ends are dealt with using modification factors applied to the
thicknesses computed using the formulations for unpierced ends, given above. These
modification factors increase the required thickness. They are functions of the diameters of the
openings (or the equivalent diameters in the case of nozzles) and the closeness of the
openings/nozzles either to each other or to the periphery of the flat end.
5.5 BOLTED DOMED ENDS
5.5.1 Types
Bolted domed ends comprise a part-spherical end shell welded to a circumferential ring beam at
its periphery. The ring beam, in turn, is bolted to the adjoining cylindrical shell via a flange.
Such ends may have narrow-faced or full-faced gaskets (see Subsection 5.4.3, above) and the dome component can be oriented to be either concave or convex to pressure.
The design of bolted dome ends involves determination of the required thickness of the
spherical dome part, and determining the various connecting forces transferred through the ring-
beam, including the design bolt loads.
5.5.2 Dome Thickness
The dome thickness is independent of end type, and is given by the following simple formula in
cases where the dome is concave to pressure:
$e = \frac{5 P R}{6 f_d}$

where $R$ = inside radius of the dome
$f_d$ = nominal design stress of the dome.
Where the dome is convex to pressure, the potential exists for instability and the dome has to be
treated according to the clause that relates to shells under external pressure.
6. OPENINGS
6.1 PREAMBLE
Openings are provided in pressure vessels to enable access to the interior of the vessel, or for
ingress and egress of contained fluids or gases via nozzles or branches. Nozzles may be either
set-in (where they penetrate the wall of the shell and are fillet welded both to the inside and
outside of the shell) or set-on (where they are welded to the outside of the shell only).
A shell containing an opening must be adequately reinforced in the area adjacent to the opening.
This is in order to compensate for the reduction of pressure-bearing section because of the
opening. The reinforcement can be effected by:
thickening the shell around the opening
using a reinforcing plate
using a reinforcing ring
using a suitably sized wall thickness for the nozzle
using a combination of these methods.
A reinforcing plate is fillet welded to the shell, whereas a reinforcing ring is set-in in a similar
manner to a set-in nozzle.
The location of an opening is also an important factor, whether it:
encroaches within a knuckle region in an end (see Section 5, above)
lies within the general body of a shell
lies within the general body of a rectangular section pressure vessel.
Openings within flat ends have been discussed briefly in Subsection 5.4.4, above, where the
predominant concern is bending action in the end plate. The discussion below relates mainly to
dealing with openings where membrane action predominates.
6.2 NOZZLES WHICH ENCROACH INTO THE KNUCKLE REGION
Rules are provided for increasing the thickness of a dished end to compensate for nozzles that
are not entirely within the central area of the head, and which are not covered by the
requirements for openings in shells dealt with in Subsection 6.3, below.
In essence, the rules are restricted to nozzles that are placed within the knuckle region of two
specific types of torispherical ends:
Kloepper, and
Korbbogen
and subject to specific geometric restrictions in respect of these two types.
The clauses in the Standard provide complex iterative formulae, supported by graphs, to determine alternative values of $\beta$ to be used in the thickness, or rating, formulae for knuckle thickness given in Subsection 5.2.2, above.
6.3 OPENINGS IN SHELLS
6.3.1 General
In the Standard, openings are treated as either:
isolated
multiple, or according to whether they are
close to a shell discontinuity, such as a junction with a flange, or with a conical
reducer.
6.3.2 Isolated Openings
Simple detailing rules, involving hole sizes, proximity distances between adjacent openings, and
openings and boundaries are used to determine if an opening can be classified as isolated.
Rules are used to determine the necessary areas of reinforcement. These are simple inequalities
where resistance must exceed load; resistance being the sum of products of resistance cross-
sectional areas and nominal design strengths, and load is the sum of products of pressure loaded
cross-sectional areas and pressure. Distinction is made between openings reinforced by a ring
and all other types. In general terms, this can be expressed as:
$(Af_s + Af_w)(f_s - 0.5P) + Af_p(\mathrm{MIN}[f_s, f_p] - 0.5P) + Af_r(\mathrm{MIN}[f_s, f_r] - 0.5P) + Af_b(\mathrm{MIN}[f_s, f_b] - 0.5P) \ \ge\ P (Ap_s + Ap_b + Ap_r + 0.5 Ap_\varphi)$
If appropriate, and rules are provided for determining this, contributions to resistance can be made by the following components:
shell $Af_s$, nominal design strength $f_s$
welds $Af_w$, nominal design strength $f_s$
reinforcing plate $Af_p$, nominal design strength $f_p$
reinforcing ring $Af_r$, nominal design strength $f_r$
reinforcing effects of a nozzle $Af_b$, nominal design strength $f_b$.
The contributions to the pressure loading stem from the opening created by combinations of:
shell $Ap_s$
reinforcing ring $Ap_r$
nozzle $Ap_b$
oblique nozzle $Ap_\varphi$.
Not all of the resistance and pressure components will appear in a particular situation; plate and
ring components will not be present together in the same equation. A rating equation can be
derived from the above, generalised, equation for specific cases.
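In code, the generalised inequality reduces to a comparison of two sums; the sketch below treats absent components simply as zero areas, and all values are illustrative:

    def opening_reinforcement_ok(P, Af_s, Af_w, Af_p, Af_r, Af_b,
                                 f_s, f_p, f_r, f_b,
                                 Ap_s, Ap_r, Ap_b, Ap_phi):
        """Resistance (area x strength) must exceed load (area x pressure)."""
        resistance = ((Af_s + Af_w) * (f_s - 0.5 * P)
                      + Af_p * (min(f_s, f_p) - 0.5 * P)
                      + Af_r * (min(f_s, f_r) - 0.5 * P)
                      + Af_b * (min(f_s, f_b) - 0.5 * P))
        load = P * (Ap_s + Ap_b + Ap_r + 0.5 * Ap_phi)
        return resistance >= load

    print(opening_reinforcement_ok(P=1.0, Af_s=2000.0, Af_w=100.0, Af_p=0.0,
                                   Af_r=0.0, Af_b=500.0, f_s=150.0, f_p=150.0,
                                   f_r=150.0, f_b=140.0, Ap_s=90000.0, Ap_r=0.0,
                                   Ap_b=15000.0, Ap_phi=0.0))  # True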
6.3.3 Multiple Openings
Multiple openings are dealt with via a ligament check and an overall check.
A ligament check is not required under certain conditions of geometry involving the nozzle diameters and thicknesses. If required, a ligament check is carried out on pairs of openings. It involves the material between the centrelines of the two openings in a calculation of resistance and pressure cross-sectional areas of the type described above for isolated openings.
In the event that the ligament check is not satisfied, an overall check is made. This is again
applied to pairs of openings; it involves extending the areas available for the design calculation
to material that lies outwith the centrelines of the two openings. The Standard provides
formulae by means of which the dimensions of the reinforcement included in the calculation are
to be determined.
6.3.4 Openings Close To A Shell Discontinuity
Openings close to a shell discontinuity are dealt with by detailing rules that specify the minimum distance $w_{min}$ required between the opening and the discontinuity. The rules cover openings in cylindrical shells, conical shells, domed and bolted ends, and in elliptical and torispherical ends.

Moreover, if the distance of an opening from a discontinuity is lower than a further value, $w_p$, determined from simple formulae, then limits are set on the lengths of reinforcement available for the types of calculation described in Subsections 6.3.2 and 6.3.3, above.
6.4 OPENINGS IN PRESSURE VESSELS WITH RECTANGULAR SECTION
These types of vessels are generally stiffened plate types of construction. The standard sets
geometrical limitations to the types of openings that the simplified equations provided are
applicable to:
rounded corners
aspect ratio not greater than 2
diameter of opening not greater than 0.8 times the clear spacing between reinforcing elements.
Sufficient width of ligament between the edge of an opening and the side of the vessel must be
provided. Criteria involving shear stresses, areas of plating without and with opening between
stiffening elements, and nominal design stress are used to determine whether reinforcement of
the opening is required.
If it is the case that reinforcement is required, then a simple formula, involving opening
geometry, membrane and bending stresses, and nominal design stress, is provided to determine
the area of reinforcement required.
7. FATIGUE
7.1 PREAMBLE
The Draft Standard provides two major clauses on fatigue assessment. These are referred to as:
simplified assessment of fatigue life
detailed assessment of fatigue life.
The simplified assessment relates to pressure fluctuations only, and is stated to be based on
conservative assumptions. More precise, less conservative results will be obtained in pressure
cases via the use of the detailed assessment. Moreover, the simplified fatigue assessment only
applies to components designed according to the provisions of the standard that relate to DBF.
Detailed assessment is intended to deal with other cyclic loads (due, for example, to temperature variations during operation), or with components designed according to the provisions of DBA.
7.2 SIMPLIFIED FATIGUE ASSESSMENT
7.2.1 Overall Procedure
Simplified fatigue assessment follows a fairly standard procedure. A spectrum of stress ranges
is used in conjunction with S-N curves for standardised details, along with the Palmgren-Miner
cumulative damage rule to determine the damage index over the life of the stress range
spectrum.
7.2.2 Stress Ranges
For an individual component $\Delta P$ of the pressure fluctuation spectrum, the pseudo-elastic stress range is calculated first, from the following formula:

$\Delta\sigma_P = \eta\, \frac{\Delta P}{P_{max}}\, f$

$\eta$ is a factor that is obtained from tables, and depends on the component type (shell, opening, flat end, and so forth), $z$, and weld type. The value of nominal design stress to be used is taken as that at the calculation temperature.
Corrections to the pseudo-elastic stress range are made to account for the following factors:
thickness
temperature
notch effect
elastic-plastic cycle conditions.
The thickness correction factor $f_e$ takes account of wall thicknesses which lie in the range 25-150 mm, and is given by:

$f_e = \left( \frac{25}{e} \right)^{0.25}$
The temperature correction factor $f_t^*$ is applied in situations where the temperature is greater than or equal to 100 °C. It depends on the material type, ferritic or austenitic steel, and is a function of the mean cycle temperature.
To account for notch effects in unwelded regions, only where these are significant, an effective stress concentration factor $K_f$ is used. This must be determined from an implicit equation, as follows:

$K_f = 1 + \frac{1.5 (K_t - 1)}{1 + 0.5 K_t \Delta\sigma^*/\Delta\sigma_D}$

where $K_t$ = the theoretical stress concentration factor
$\Delta\sigma_D$ = the endurance limit of Class UW (see below).
Where the stress range $\Delta\sigma$ is greater than $3f$, it is increased by a factor $K_e$ to account for elastic-plastic conditions, as follows:

$K_e = 1 + A_0 \left( \frac{\Delta\sigma}{2 R_{p0.2/t^*}} - 1 \right)$

where $A_0$ = 0.5 for ferritic steels with 800 ≤ $R_m$ ≤ 1000 MPa
$A_0 = 0.4 + \frac{R_m - 500}{3000}$ for ferritic steels with 500 ≤ $R_m$ ≤ 800 MPa
$A_0$ = 0.4 for ferritic steels with $R_m$ ≤ 500 MPa and for all austenitic steels.

For austenitic steels, $R_{p0.2/t^*}$ is replaced with $R_{p1.0/t^*}$.
The fictitious stress range $\Delta\sigma^*$ depends on whether an unwelded region or a welded connection is being considered, as follows:

unwelded region: $\Delta\sigma^* = \frac{K_f K_e}{f_e f_t^*} \Delta\sigma$

welded connection: $\Delta\sigma^* = \frac{K_e}{f_e f_t^*} \Delta\sigma$
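The chain of corrections can be sketched in code; this mirrors the reconstruction above (thickness and temperature factors in the denominator, K_f and K_e as multipliers), with K_f, A_0 and all numbers treated as illustrative inputs rather than values taken from the Standard:

    def f_thickness(e):
        """Thickness correction f_e for 25 mm <= e <= 150 mm (1.0 below 25 mm)."""
        return (25.0 / e) ** 0.25 if e > 25.0 else 1.0

    def k_e_factor(delta_sigma, Rp02_t_star, A0):
        """Plasticity correction, applied only when the range exceeds 3f."""
        return 1.0 + A0 * (delta_sigma / (2.0 * Rp02_t_star) - 1.0)

    def fictitious_range(delta_sigma, e, f_t_star=1.0, K_f=1.0,
                         exceeds_3f=False, Rp02_t_star=235.0, A0=0.4):
        ke = k_e_factor(delta_sigma, Rp02_t_star, A0) if exceeds_3f else 1.0
        return delta_sigma * K_f * ke / (f_thickness(e) * f_t_star)

    print(fictitious_range(200.0, e=40.0, K_f=1.3))  # unwelded region example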
7.2.3 Fatigue Design Curves and Joint Classification
The fatigue design curves provided in the standard take the normal form for welded
components:
$\Delta\sigma^* = \left( \frac{C}{N} \right)^{1/m}$
Composite log-log curves are used for constant and variable amplitude loading:
linear plus endurance limit $\Delta\sigma_D$ for constant amplitude
bi-linear plus cut-off limit $\Delta\sigma_{Cut}$ for variable amplitude.
The parameters of the fatigue design curves for welded connections depend on the class as set
out in Table 7.1.
Class | Endurance Limit Δσ_D (MPa) | Cut-off Limit Δσ_Cut (MPa) | C (N <= 5 x 10^6) | m (N <= 5 x 10^6) | C (N > 5 x 10^6) | m (N > 5 x 10^6)
90 | 66.3 | 36.4 | 1.46 x 10^12 | 3 | 6.14 x 10^15 | 5
80 | 58.9 | 32.4 | 1.02 x 10^12 | 3 | 3.56 x 10^15 | 5
71 | 52.3 | 28.7 | 7.16 x 10^11 | 3 | 1.96 x 10^15 | 5
63 | 46.4 | 25.5 | 5.00 x 10^11 | 3 | 1.08 x 10^15 | 5
56 | 41.3 | 22.7 | 3.51 x 10^11 | 3 | 5.98 x 10^14 | 5
40 | 29.5 | 16.2 | 1.28 x 10^11 | 3 | 1.11 x 10^14 | 5
32 | 23.6 | 12.9 | 6.55 x 10^10 | 3 | 3.64 x 10^13 | 5

Table 7.1 Parameters to be Used in Fatigue Design Curves for Welded Connections
In treating unwelded regions (Class UW), the equations and parameters to be used are as shown
in Table 7.2.
Class | Endurance Limit Δσ_D (MPa) | Cut-off Limit Δσ_Cut (MPa) | Curve for N <= 2 x 10^6 | C (N > 2 x 10^6) | m (N > 2 x 10^6)
UW | 172.5 | 116.7 | Δσ* = 46000/√N + 140 | 4.67 x 10^28 | 10

Table 7.2 Parameters to be Used in Fatigue Design Curves for Unwelded Regions
Standard welded joints are classified according to a table provided in the Standard and assigned
to a class number in Table 7.1. This then indicates which fatigue curve needs to be used for the
joint under design. The fatigue design curve is used to determine the number of cycles to failure
for the fictitious stress range under consideration.
7.2.4 Assessment Rule
Each element of the spectrum of stress ranges is treated in the manner outlined above, and the
cumulative damage is computed. The design is acceptable if the total fatigue damage index D is
as follows:
$D = \sum_i \frac{n_i}{N_i}, \qquad D \le 1.0$

where $n_i$ = the number of cycles at stress range $\Delta\sigma^*_i$
$N_i$ = the number of cycles to failure from the fatigue design curve at stress range $\Delta\sigma^*_i$.
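As an illustration, the sketch below evaluates the damage index for a small spectrum using the two-slope class 80 curve of Table 7.1, with the cut-off appropriate to variable amplitude loading; the spectrum itself is illustrative:

    # Class 80 curve of Table 7.1, variable amplitude: bi-linear with cut-off.
    CUT_OFF = 32.4                      # MPa; below this, no damage
    C1, M1 = 1.02e12, 3                 # branch for N <= 5e6
    C2, M2 = 3.56e15, 5                 # branch for N > 5e6

    def cycles_to_failure(rng):
        if rng <= CUT_OFF:
            return float("inf")
        N = C1 / rng ** M1
        if N > 5e6:                     # range falls on the shallow branch
            N = C2 / rng ** M2
        return N

    # Spectrum of (fictitious stress range in MPa, applied cycles); illustrative.
    spectrum = [(100.0, 2.0e4), (60.0, 2.0e5), (40.0, 1.0e6)]
    D = sum(n / cycles_to_failure(rng) for rng, n in spectrum)
    print(f"D = {D:.3f}, acceptable: {D <= 1.0}")   # D ~ 0.09, acceptable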
7.3 DETAILED FATIGUE ASSESSMENT
The detailed fatigue assessment route is similar to the simplified assessment procedure in broad
outline. The main differences lie in the fact that consideration is given to principal stresses, and
dealing with situations in which principal stress directions remain constant or vary during the
course of a loading cycle. What are termed equivalent stress ranges are used in the first
instance (based on Tresca maximum principal stress difference), and principal stress ranges
are used in the second. The latter require careful consideration in their computation, and in
dealing with varying direction within the component under design. Fatigue classes are assigned
to details for separate cases of equivalent stress range or principal stress range.
A number of correction factors to stress range can be applied in principle, as follows:
thickness (welded connections and unwelded regions)
temperature (welded connections and unwelded regions)
notch effect (unwelded regions)
elastic-plastic cycle conditions (unwelded regions)
surface finish (unwelded regions)
mean stress effect (unwelded regions).
8. DIRECT ROUTE FOR DBA
8.1 PREAMBLE
As indicated above, the direct route for DBA appears to be a limit state, partial safety factor
approach. The general principles have been dealt with in Subsection 2.3, above, and the
purpose of this section is to concentrate on some of the details relevant to those principles.
As stated above, an application rule is used to verify a principle within a design check. Central
within this process is the part of DBA wherein design actions (or effects) are analysed and the
results compared against design resistances. The latter would be chosen as appropriate to the
analysis carried out, or the type of analysis chosen would be appropriate to the required design
check. Thus having carried out Steps (a) to (c) as set out in Subsection 2.3.2, above, this
process of the design check is a series of sub-steps within Step (d) that amounts to:
1. Define the load case and specify the actions.
2. Determine the characteristic value of each action and calculate the design value of each
action by multiplying the characteristic value by the appropriate partial safety factor.
3. Calculate the design effect of the design actions via an analysis. This is rather abstruse terminology, but it is judged that an effect can be interpreted as some representative response characteristic of the subject of the analysis (load-deflection curve, distribution of stresses or stress resultants, and so on), determined from the analysis performed (structural, stress, collapse, and so on).
4. Calculate the design resistance of the component, which will be a function (or
functions) of characteristic resistance and partial safety factors. The choice of design
resistance will be in keeping with the analysis carried out in order to facilitate the
design check.
5. Determine whether or not the principle is satisfied.
Annex B of the Draft Standard does not specify analysis methods to be used, but concentrates
on provisions for characteristic values, partial safety factors, limit states and some of the design
checks. These are discussed in the further subsections, below.
8.2 CHARACTERISTIC VALUES
8.2.1 Actions
Actions are classified into the following four types:
permanent
variable (other than temperature and pressure, and any actions related to them
deterministically, i.e. not probabilistically)
exceptional
temperature and pressure, and any actions related to them deterministically, i.e. not
probabilistically
Exceptional actions correspond in general to a non-mandatory design condition that relates to
events of very low occurrence probability requiring the safe shut-down and inspection of vessel
or plant. Examples may be consideration of secondary containment due to primary containment
failure, explosion or other accidental scenarios, and earthquake.
It is to be noted that pressure/temperature are not considered under the category of variable
actions. They should be considered to act simultaneously, because they are strongly correlated,
and their interdependence defined appropriately.
Characteristic values of the actions are set according to the rules summarised in Table 8.1.
Action | Coefficient of Variation (CoV) (1) | Symbol (2) | Characteristic Value
Permanent | <= 0.1 | G_k | Mean of extreme values
Permanent | > 0.1 (3) | G_k,sup | Upper limit with 95% probability of not being exceeded (4)
Permanent | > 0.1 (3) | G_k,inf | Lower limit with 95% probability of being exceeded (4)
Variable | <= 0.1 | Q_k | Mean of extreme values
Variable | > 0.1 | | 97% percentile of extreme value in given period (5)
Exceptional | | | Specified individually
Pressure & Temperature | | P_sup | Reasonably foreseeable highest pressure (6)
Pressure & Temperature | | T_sup | Reasonably foreseeable highest temperature
Pressure & Temperature | | P_inf | Reasonably foreseeable lowest pressure (7)
Pressure & Temperature | | T_inf | Reasonably foreseeable lowest temperature

(1) The mean of extreme values may also be used when the difference between the reasonably foreseeable highest and lowest values is not greater than 20% of the mean.
(2) Subscript k indicates that there may be several actions in a load case.
(3) Also applies where actions are likely to vary during the life of the vessel.
(4) Highest and lowest credible values may be used in the absence of statistics.
(5) For bounded variable actions, limit values may be used.
(6) May be the set pressure of the relief valve.
(7) This value is usually zero or 1.0 (for vacuum conditions).

Table 8.1 Characteristic Values for Different Categories of Action
8.2.2 Resistance
For the calculation of resistance, the following may be used for characteristic values:
the nominal values of geometric data, with the exception of wall thickness for which
the nominal values minus the allowances are to be used [this would tend to suggest the
use of the analysis thickness, see Subsection 3.3.4, above]
the minimum guaranteed material strength data, i.e. $R_{eH}$, $R_{p0.2/t}$, $R_{p1.0/t}$, $R_{m/t}$
for other properties, e.g. modulus of elasticity, coefficient of linear thermal expansion, etc., nominal or mean values may be used.
8.3 PARTIAL SAFETY FACTORS
8.3.1 Actions
Partial safety factors are set as individual values separately for load cases in operation and load
cases in hydraulic test, along with combination rules and reduction factors to account for the
low probability of extreme values of actions occurring together. The partial safety factor values
are summarised in Table 8.2.
Load Cases in Operation:
Permanent actions with unfavourable effect: $\gamma_G$ = 1.35
Permanent actions with favourable effect: $\gamma_G$ = 1.00
Variable actions, unbounded: $\gamma_Q$ = 1.50
Variable actions, bounded, and limit values: $\gamma_Q$ = 1.00
Pressure without natural limit: $\gamma_P$ = 1.20
Pressure with natural limit: $\gamma_P$ = 1.00

Load Cases in Hydraulic Test:
Permanent actions with unfavourable effect: $\gamma_G$ = 1.35
Permanent actions with favourable effect: $\gamma_G$ = 1.00
Pressure: $\gamma_P$ = 1.00

Table 8.2 Partial Safety Factors on Actions
The combination rules for load cases in operation are as follows:
1. All permanent design actions are to be included in each load case.
2. Each pressure design action is to be combined with the most unfavourable variable
design action.
3. Each pressure design action is to be combined with the corresponding sum of the variable design actions; the design values of stochastic actions (see Table 8.1) may be multiplied by the combination factor $\psi$ = 0.9.
The combination rules for load cases in hydraulic test are as follows:
1. All permanent design actions are to be included in each load case.
2. In cases where more than one test is applied, each pressure case is to be included.
8.3.2 Resistance
Resistance is generally taken as a function of geometric properties and a suitable characteristic material strength parameter, denoted RM. Partial safety factors are set as individual values separately for load cases in operation and load cases in hydraulic test, and depend on the characteristics of the stress-strain curve. The RM and partial safety factor values are summarised in Tables 8.3 and 8.4.
Steel Type | RM | $\gamma_R$ | Conditions on Steel Characteristics
Ferritic | R_eH or R_p0.2/t | 1.25 | if R_p0.2/t / R_m/20 <= 0.8
Ferritic | R_eH or R_p0.2/t | 1.5625 (R_p0.2/t / R_m/20) | otherwise
Austenitic (30% < A5 < 35%) | R_p1.0/t | 1.25 | -
Austenitic (A5 >= 35%) | R_p1.0/t | 1.25 | if R_m/t / R_p1.0/t <= 2.0
Austenitic (A5 >= 35%) | R_p1.0/t | 2.5 (R_p1.0/t / R_m/t) | if 2.0 < R_m/t / R_p1.0/t < 2.5
Austenitic (A5 >= 35%) | R_p1.0/t | 1.00 | if R_m/t / R_p1.0/t >= 2.5
Steel castings | R_p0.2/t | 1.58 | if R_m/20 / R_p0.2/t >= 1.58
Steel castings | R_p0.2/t | 2.5 (R_p0.2/t / R_m/20) | otherwise

Table 8.3 Resistance Parameters and Partial Safety Factors on Resistance - Load Cases in Operation (not including the design checks for progressive plastic deformation or instability)

Steel Type | RM | $\gamma_R$ | Conditions on Steel Characteristics
Ferritic | R_eH or R_p0.2 | 1.05 | -
Austenitic (30% < A5 < 35%) | R_p1.0 | 1.05 | -
Austenitic (A5 >= 35%) | R_p1.0 | 1.05 | if R_m / R_p1.0 <= 1.905
Austenitic (A5 >= 35%) | R_p1.0 | 2.0 (R_p1.0 / R_m) | otherwise
Steel castings | R_p0.2 | 1.33 | if R_m/20 / R_p0.2/t >= 1.58

Table 8.4 Partial Safety Factors on Resistance - Load Cases in Hydraulic Test
8.4 FAILURE MODES/LIMIT STATES
8.4.1 General
The Draft Standard lists the main failure modes together with the relevant limit states. These are summarised in Table 8.5, and are classified according to whether the loading is short-term, long-term or cyclic, and whether loading is by single or multiple application.

Failure Mode | Loading Type | Limit State
Brittle fracture | short-term, single application | U
Ductile rupture (gross plastic deformation) | short-term, single application | U
Excessive deformation (mechanical joints) | short-term, single application | S or U
Excessive deformation (unacceptable load transfer) | short-term, single application | U
Excessive deformation (service restraints) | short-term, single application | S
Excessive local strains (crack formation, ductile tearing) | short-term, single application | U
Instability | short-term, single or multiple application | U or S
Progressive plastic deformation (ratcheting) | cyclic | U
Alternating plasticity | cyclic | U
Creep rupture | long-term, single or multiple application | U
Creep excessive deformation (mechanical joints) | long-term, single or multiple application | S or U
Creep excessive deformation (unacceptable load transfer) | long-term, single or multiple application | U
Creep excessive deformation (service restraints) | long-term, single or multiple application | S
Creep instability | long-term, single or multiple application | U or S
Erosion/corrosion | long-term, single or multiple application | S
Environmentally-assisted cracking | long-term, single or multiple application | U
Fatigue | cyclic | U
Environmentally-assisted fatigue | cyclic | U

Table 8.5 Classification of Failure Modes and Limit States (U = ultimate, S = serviceability)
As can be seen, some failure modes can be viewed as leading to both ultimate (U) or
serviceability (S) limit states.
The Draft Standard goes into more detail regarding a number of failure modes, these being:
gross plastic deformation
progressive plastic deformation
instability
fatigue failure
static equilibrium.
8.4.2 Ductile Rupture / Gross Plastic Deformation
In dealing with gross plastic deformation, two alternative principles are suggested for use:
the combination of design actions is less than the design resistance
the design effect(s) of the combined actions is less than the design resistance.
In either case, the analysis performed to obtain the design resistance is subject to the following
assumptions:
1. Proportional increase in all design actions.
2. First-order (deformation) theory.
3. A linear-elastic ideal-plastic material, or a rigid ideal-plastic material
4. Design strength parameter RM and partial safety factors as specified in Tables 8.3 or
8.4, above.
Corresponding application rules are suggested for each of the two principles.
For the first principle, the application rule suggested appears to involve a lower-bound limit
load technique. In such an analysis the intent would be to produce the relationship between a
global load factor (hence the use of a proportional load approach) and some suitable global
deformation parameter. The actions would be set at their design values (i.e. including partial
safety factors), and material response would be set at RM with no partial safety factor. The
objective of the analysis is to determine the limit load: either a peak load, or a load at which
excessive deformations initiate. An acceptable design would be one where the limit load was
reached at a global load factor of less than the inverse of the resistance partial safety factor.
Where no distinct limit load is forthcoming, because the load-deformation curve rises continuously, a lower-bound load would be obtained from curve tangents at the 5% strain level.
For the second principle, the application rule suggested appears to involve a stress analysis
where the maximum primary equivalent stress from the analysis is less than RM divided by the
appropriate partial safety factor. (See the next section for the definition of a primary stress, as
this approach appears to be more akin to the stress categorisation route for DBA.) The factored
actions would be used in the stress analysis. As an alternative, because some computer analysis
packages would give stress resultants (bending moments, membrane forces per unit length, and
so forth) as their output from an analysis, resistance yield loci can be used for the design check.
It should be emphasised that these two alternatives are not equivalent, as the second one would
involve full plastification of the critical cross-section.
8.4.3 Progressive Plastic Deformation
The principle associated with this failure mode is that upon repeated application of the action,
progressive plastic deformation shall not occur. It is suggested that the analysis undertaken to
ascertain this should be subject to the following assumptions:
1. First-order (deformation) theory.
2. A linear-elastic ideal-plastic material.
3. Von Mises yield criterion and associated flow rule (maximum shear strain energy theory); the Tresca criterion (maximum shear stress theory) would also be permitted, as it is the more conservative of the two.
4. The partial safety factors on actions are to be taken as in Table 8.2.
5. The resistance parameters and corresponding partial safety factors are not as in Table
8.3, but are as in Table 8.6.
Steel Type | RM | $\gamma_R$
Ferritic | ½ (R_eH/t_inf + R_eH/t_sup), or ½ (R_p0.2/t_inf + R_p0.2/t_sup) | 1.0
Austenitic | ½ (R_p1.0/t_inf + R_p1.0/t_sup) | 1.0
Steel castings | ½ (R_eH/t_inf + R_eH/t_sup), or ½ (R_p0.2/t_inf + R_p0.2/t_sup) | 1.0
Table 8.6 Resistance Parameters and Partial Safety Factors on Resistance Load Cases
in Operation Design Check for Progressive Plastic Deformation
Thus partial safety factors on resistance are set to unity, and RM values are the arithmetic means
of yield or proof stresses over the action cycle minimum and maximum temperatures.
The application rule for progressive plastic deformation is applied in such a way that, during
any action cycle, the locus of principal stresses, which may appear as a closed figure when
plotted in principal stress space, does not pierce the yield locus. The Draft Standard tries to
express this as a limitation on the largest diameter of this closed figure to be less than 2RM.
However this criterion, whilst accounting for the size of the principal stress locus, appears to
take no cognisance of the location of the locus within the yield locus. Notwithstanding this, the
implication seems to be that, in order to prevent progressive plastic deformation, no yielding
within the action cycle is permitted.
8.4.4 Instability
Instability refers to the failure mode of buckling/collapse under external pressure. The Draft
Standard suggests that the principle to be operated in this case should be based around the lower
bound of the expected range of failure pressures from experimental observations, in which the
experiments included the effects of shape deviations. Any theoretical model used to predict
buckling/collapse should be correlated with experiments.
The partial safety factors on actions are as in Table 8.2. Those on resistance are set out in Table 8.7, and depend on whether or not an external pressure test is to be conducted.

External pressure test carried out: $\gamma_R$ = 1.25
No external pressure test carried out: $\gamma_R$ = 1.50

Table 8.7 Partial Safety Factors on Resistance for the Instability Failure Mode
The application rule for instability can be taken as the provisions of the Draft Standard as set out
in Clause 8.
8.4.5 Fatigue Failure
The principle to be employed in the case of the fatigue failure mode is that the cumulative
damage should not exceed unity. The requirements of Clause 18 of the Draft Standard are to be
used as the application rule.
8.4.6 Static Equilibrium
Static equilibrium refers to the avoidance of rigid body motion of the pressure vessel. The
principle to be employed states that the design effect of the destabilising actions should be
smaller than the design effect of the stabilising actions. The partial safety factors on actions are
to be taken from Table 8.2. Actions are to be combined in such a way as to consider the most
pessimistic combinations.
9. STRESS CATEGORIZATION ROUTE FOR DBA
9.1 PREAMBLE
The basic principles of the stress categorisation route for DBA have been outlined in Subsection 2.3.3, above. It represents a special case of DBA embodying stress analysis. A more detailed picture is presented below. This involves dealing with various aspects of representative stresses (elementary stresses, equivalent stresses and stress ranges), stress decomposition (into membrane, bending and so on), and stress classification (primary, secondary and peak stresses), as well as the procedure involved in the DBA stress categorisation route.
9.2 REPRESENTATIVE STRESSES
9.2.1 Elementary Stresses
The term elementary stresses is used by the Draft Standard to denote the symmetric total stress tensor, $\sigma_{ij}$, whose components are the six elementary stresses (three direct and three shear, the complementary shear stresses being equal by symmetry), computed at every point specified in a calculation or experimental method. The calculation method may be, for example, a finite element analysis.
According to the Draft Standard, however, for the purposes of the stress categorisation route
through DBA, the elementary stresses are to be determined according to the following
assumptions:
the material behaviour is linear-elastic, in accordance with Hooke's law
the material is isotropic
displacements and strains are small (first-order theory).
As a consequence, this route does not cover failure by elastic or elasto-plastic instability
(buckling), as these modes of failure would involve large displacements or strains. Where the
analysis reveals significant compressive stresses, the risk of buckling must be assessed separately. In addition, this route does not cover failure by temperature-driven creep rupture.
9.2.2 Equivalent Stress
The equivalent stress, $\sigma_{eq}$, is computed as a scalar from stress tensors due either to individually applied external loads, or to sums of individual stress tensors of the same category (see Subsection 9.4, below) resulting from loads considered to be acting simultaneously.
In the stress categorisation route, equivalent stresses are to be computed using the Tresca or
maximum shear stress theory; that is the equivalent stress is the absolute value of the maximum
principal stress difference.
9.2.3 Equivalent Stress Range
Equivalent stress ranges, $\Delta\sigma_{eq}$, are computed for the purposes of fatigue assessment. Ranges are defined and calculated similarly to equivalent stresses, with the exception that such calculations involve differences in individual stress tensors, or differences in the sums of similar category stress tensors, corresponding to two loading conditions.
Fatigue assessment requires the maximum value(s) of equivalent stress ranges to be used.
Identification of the pairs of load conditions that achieve this maximum may be difficult, given
that loadings may vary independently and principal stress directions may change as a result of
these types of variation.
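The key point, that the range is the equivalent stress of the difference of the two tensors rather than the difference of their equivalent stresses, is illustrated by the brute-force sketch below. All loading conditions are hypothetical and the pairwise search is a simplification of what a production tool would do.

import itertools
import numpy as np

def tresca(sigma):
    p = np.linalg.eigvalsh(sigma)
    return float(p.max() - p.min())

def max_equivalent_stress_range(load_condition_tensors):
    """Brute-force search over all pairs of loading conditions: the range
    is the Tresca equivalent of the *difference* of the two stress
    tensors, not the difference of their equivalent stresses."""
    best = 0.0
    for a, b in itertools.combinations(load_condition_tensors, 2):
        best = max(best, tresca(a - b))
    return best

# Three hypothetical loading conditions (symmetric tensors, MPa).
conditions = [np.diag([100.0, 40.0, 0.0]),
              np.diag([-20.0, 10.0, 0.0]),
              np.array([[60.0, 25.0, 0.0],
                        [25.0, 60.0, 0.0],
                        [0.0,  0.0,  0.0]])]
print(f"Max equivalent stress range = "
      f"{max_equivalent_stress_range(conditions):.1f} MPa")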
9.3 STRESS DECOMPOSITION
9.3.1 Preamble
The stress categorisation route within the Draft Standard requires that the elementary stresses,
which vary in the through-thickness direction with respect to the wall of a pressure vessel, are
decomposed into the following components:
- membrane
- bending
- linearised
- non-linearised.
These components are defined at every point within the body of the vessel under analysis.
9.3.2 Membrane
The membrane stress tensor, (σij)m, is constant in the through-thickness direction and is equal
to the average of the elementary stress tensor at the median surface.
9.3.3 Bending
The bending stress tensor, (σij)b, is the tensor whose components vary linearly in the
through-thickness direction.
9.3.4 Linearised
The linearised stress tensor, (σij)l, is the sum of the membrane and bending stress tensors.
9.3.5 Non-Linearised
The non-linearised stress tensor, (σij)nl, is the difference between the elementary stress tensor
and the linearised stress tensor.
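Taken together, these four definitions amount to the familiar stress linearisation operation. As an illustrative sketch only, the following Python code decomposes one sampled through-thickness stress component into the four parts; the wall thickness and the stress profile are hypothetical, and the trapezoidal integration is written out explicitly for portability.

import numpy as np

def _integrate(y, x):
    """Trapezoidal integration, written out to stay portable across
    numpy versions."""
    return float(np.sum((y[:-1] + y[1:]) * np.diff(x) / 2.0))

def decompose_through_thickness(x, sigma, t):
    """Decompose one through-thickness stress component sigma(x), with x
    measured from the median surface (-t/2 to +t/2), into membrane,
    bending (linear in x), linearised and non-linearised parts."""
    membrane = _integrate(sigma, x) / t   # through-thickness average
    moment = _integrate(sigma * x, x)     # first moment about median surface
    bending = 12.0 * moment / t ** 3 * x  # linear distribution in x
    linearised = membrane + bending
    non_linearised = sigma - linearised
    return membrane, bending, linearised, non_linearised

t = 0.02                                     # wall thickness, m (hypothetical)
x = np.linspace(-t / 2.0, t / 2.0, 201)
sigma = 150.0 + 4000.0 * x + 2.0e6 * x ** 2  # hypothetical profile, MPa

m, b, lin, nl = decompose_through_thickness(x, sigma, t)
print(f"membrane = {m:.1f} MPa, bending at outer surface = {b[-1]:.1f} MPa")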
9.4 STRESS CLASSIFICATION
Stress classification is made according to the criteria set out in Table 9.1.
Primary stresses are stresses that satisfy the laws of equilibrium of external loads. Their basic
characteristic is that, in cases of a high (non-admissible) increment of external loads, the
deformations upon full plastification of the load-bearing section increase considerably without
being self-limiting. Primary stresses are subdivided, with definitions as per the stress
decomposition of Subsection 9.3, into:
- general primary membrane stresses, Pm: distributed in the structure such that no essential
redistribution of load occurs as a result of yielding
- local primary membrane stresses, PL: localised in the structure such that yielding will cause
redistribution of loads
- primary bending stresses, Pb.
Secondary stresses (membrane, Qm, and bending, Qb, again defined as per the stress
decomposition) are stresses developed by constraints due to geometrical distortions, by the use
of materials with different elastic moduli under mechanical loads, or by constraints due to
differential thermal expansions. They lead to plastic deformation when equalising different
local distortions in the case of exceedance of the yield strength. The basic characteristic of
secondary stresses is that they are self-limiting; i.e. local flow deformation leads to limitation
of the stress.
Peak stresses are that part of the total stresses left upon subtraction of the primary and
secondary stresses.
Table 9.1 Classification of Stresses According to the Stress Categorisation Route of DBA
9.5 PROCEDURE
9.5.1 Preamble
The overall procedure for the DBA stress categorisation route set out in the Draft Standard
comprises two main parts:
- determination of equivalent stresses and stress ranges
- assessment of those stresses and stress ranges against prescribed criteria.
9.5.2 Determination of Equivalent Stresses and Stress Ranges
The part of the overall procedure to be followed in respect of stress analysis is set out in nine
steps in the Draft Standard, as follows:
1. For each point within the body of the component to be designed, calculate the
elementary stresses σij resulting from each load acting separately.
2. Decompose the elementary stresses from Step 1 into membrane and bending
components, (σij)m and (σij)b.
3. Classify the stresses from Step 2 into:
- general primary membrane stresses (σij)Pm
- local primary membrane stresses (σij)PL
- primary bending stresses (σij)Pb
- secondary membrane stresses (σij)Qm
- secondary bending stresses (σij)Qb.
4. Determine the sum of the stresses calculated in Step 3, to account for load cases
involving coincident loads:
- sum of general primary membrane stresses Σ(σij)Pm
- sum of local primary membrane stresses Σ(σij)PL
- sum of primary bending stresses Σ(σij)Pb
- sum of secondary membrane stresses Σ(σij)Qm
- sum of secondary bending stresses Σ(σij)Qb.
5. From the stresses in Step 4, deduce:
- the primary membrane stresses, (σij)Pm or (σij)PL
- the total primary stresses, (σij)P = [(σij)Pm or (σij)PL] + (σij)Pb
- the sum of the primary and secondary stresses,
(σij)P+Q = [(σij)Pm or (σij)PL] + (σij)Pb + (σij)Qm + (σij)Qb.
6. Calculate the equivalent stresses corresponding to:
- the sum of the general primary membrane stresses, (σij)Pm → (σeq)Pm
- the sum of the local primary membrane stresses, (σij)PL → (σeq)PL
- the total primary stresses, (σij)P → (σeq)P.
7. For each set of two normal operating loads that may be determinant, calculate the
range of primary plus secondary stress, and then calculate the corresponding equivalent
stress range: Δ[(σij)P+Q] → (Δσeq)P+Q.
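As a schematic illustration of Steps 3 to 6, the Python sketch below sums classified stress tensors per category for coincident loads and computes the Tresca equivalents. The classification of Step 3 is assumed to have been performed already, all tensor values are hypothetical, and the local primary membrane category is used for the totals purely by way of example.

import numpy as np

def tresca(sigma):
    p = np.linalg.eigvalsh(sigma)
    return float(p.max() - p.min())

# Steps 3/4: classified tensors per individually applied load, keyed by
# category; values are lists of 3x3 tensors (MPa) for loads deemed to act
# simultaneously. All numbers are hypothetical.
classified = {
    "Pm": [np.diag([110.0, 55.0, 0.0]), np.diag([15.0, 8.0, 0.0])],
    "PL": [np.diag([140.0, 70.0, 0.0])],
    "Pb": [np.diag([35.0, -35.0, 0.0])],
    "Qm": [np.diag([20.0, 20.0, 0.0])],
    "Qb": [np.diag([35.0, 10.0, 0.0])],
}

sums = {cat: sum(tensors) for cat, tensors in classified.items()}

# Step 5: total primary and primary-plus-secondary tensors (using PL here).
P = sums["PL"] + sums["Pb"]
P_plus_Q = P + sums["Qm"] + sums["Qb"]

# Step 6: equivalent stresses per category.
for name, tensor in [("(sig_eq)Pm", sums["Pm"]), ("(sig_eq)PL", sums["PL"]),
                     ("(sig_eq)P", P), ("(sig_eq)P+Q", P_plus_Q)]:
    print(f"{name} = {tresca(tensor):.1f} MPa")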
9.5.3 Assessment of Equivalent Stresses and Stress Ranges
The criteria set correspond to:
- limitation of equivalent primary stresses
- limitation of equivalent stress ranges resulting from primary plus secondary stresses
- limitation of primary stresses in cases of triaxial states of stress
- simplified elasto-plastic analysis
- prevention of incremental collapse resulting from thermal ratcheting.
In the main, limitations are set such that the equivalent stresses concerned are less than some
factor times the nominal design stress f.
The limitation of equivalent primary stresses is set as follows:
\[ (\sigma_{eq})_{P_m} \le f \qquad (\sigma_{eq})_{P_L} \le 1.5f \qquad (\sigma_{eq})_{P} \le 1.5f \]
where the value of f is consistent with the type(s) of loading conditions considered, and the
calculation temperatures at those conditions.
The limitation of equivalent stress ranges resulting from primary plus secondary stresses
is set as follows:
\[ (\Delta\sigma_{eq})_{P+Q} \le 3f \]
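The primary-stress limits and the 3f range limit above are straightforward to automate. A hedged Python sketch follows; all equivalent stress values and the design stress f are hypothetical.

def check_dba_stress_criteria(sig_eq_Pm, sig_eq_PL, sig_eq_P, dsig_eq_PQ, f):
    """Check the stress categorisation acceptance criteria:
    (sig_eq)Pm <= f; (sig_eq)PL <= 1.5f; (sig_eq)P <= 1.5f;
    (delta sig_eq)P+Q <= 3f. Returns a dict of pass/fail flags."""
    return {
        "general primary membrane":  sig_eq_Pm <= f,
        "local primary membrane":    sig_eq_PL <= 1.5 * f,
        "total primary":             sig_eq_P <= 1.5 * f,
        "primary + secondary range": dsig_eq_PQ <= 3.0 * f,
    }

# Hypothetical equivalent stresses (MPa) against a design stress f = 150 MPa.
results = check_dba_stress_criteria(140.0, 210.0, 220.0, 430.0, 150.0)
for criterion, ok in results.items():
    print(f"{criterion}: {'OK' if ok else 'NOT satisfied'}")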
The limitation of primary stresses in cases of triaxial states of stress is set in the following
way. Where the stress analysis leads to a triaxial state of stress, and whenever the smallest
tensile principal stress exceeds half the highest tensile principal stress, the following condition
must be satisfied to avoid brittle failure caused by limited ductility in such stress states:
\[ \max\left(\sigma_1;\ \sigma_2;\ \sigma_3\right) \le R_{p0.2/t} \ \text{or} \ R_{p1.0/t}, \ \text{as appropriate} \]
Simplified elasto-plastic analysis is permitted where (Δσeq)P+Q > 3f, providing:
\[ (\Delta\sigma^{*}_{eq})_{P+Q} \le 3f \]
where (Δσ*eq)P+Q is the stress range equivalent to (Δσeq)P+Q but calculated without taking
into account bending stresses of thermal origin. In addition to this condition:
- a detailed fatigue analysis is performed according to Clause 18 of the Draft Standard
- the material is such that Rp < 0.8Rm, i.e. a limit on the yield ratio of the material is
observed
- there is an absence of risk of incremental collapse by thermal stress ratcheting in
regions of general primary membrane stress.
The thermal ratcheting phenomenon is the mechanism of incremental collapse that may occur in
certain conditions under the effect of cyclic thermal loads in conjunction with a permanent
pressure action. It can result in plastic deformation that increases by about the same amount in
each thermal cycle and can quickly lead to unacceptably large deformations and possible
rupture. Meeting the criterion regarding equivalent stress ranges resulting from primary plus
secondary stresses, above, guarantees the absence of thermal ratcheting.
The Draft Standard presents what amounts to an interaction equation involving the equivalent
membrane stress due to the pressure alone, (σeq)(Pm,P), and the equivalent primary plus
secondary stress range due to the thermal load alone, (Δσeq)(P+Q),t, which guarantees the
absence of thermal ratcheting providing there is a linear thermal gradient. This interaction
equation is given by:
\[ \frac{(\Delta\sigma_{eq})_{(P+Q),t}}{1.5f} \le \frac{1}{(\sigma_{eq})_{(P_m,P)}/(1.5f)} \quad \text{for} \quad 0 < \frac{(\sigma_{eq})_{(P_m,P)}}{1.5f} \le 0.5 \]
\[ \frac{(\Delta\sigma_{eq})_{(P+Q),t}}{1.5f} \le 4\left(1 - \frac{(\sigma_{eq})_{(P_m,P)}}{1.5f}\right) \quad \text{for} \quad 0.5 \le \frac{(\sigma_{eq})_{(P_m,P)}}{1.5f} \le 1.0 \]
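As reconstructed above, the rule is a two-branch envelope in the normalised pressure and thermal terms. A small Python sketch evaluating it is given below; the design stress and both stress quantities are hypothetical.

def thermal_ratchet_limit(x):
    """Permissible (delta sig_eq)(P+Q),t / (1.5 f) as a function of
    x = (sig_eq)(Pm,P) / (1.5 f), per the two-branch interaction rule."""
    if not 0.0 < x <= 1.0:
        raise ValueError("x must lie in (0, 1]")
    return 1.0 / x if x <= 0.5 else 4.0 * (1.0 - x)

def ratchet_check(sig_eq_Pm_pressure, dsig_eq_PQ_thermal, f):
    x = sig_eq_Pm_pressure / (1.5 * f)
    y = dsig_eq_PQ_thermal / (1.5 * f)
    return y <= thermal_ratchet_limit(x)

# Hypothetical: f = 150 MPa, membrane stress from pressure alone 90 MPa,
# thermal primary-plus-secondary stress range 400 MPa.
print("No ratcheting predicted:", ratchet_check(90.0, 400.0, 150.0))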
10. RELIABILITY IMPLICATIONS
10.1 PREAMBLE
In the preceding sections of this annex, an overview has been given of the draft British
Standard/European Standard for the design of pressure vessels. Whilst the focus has been on
this particular document, it is typical of its type, containing as it does both DBF and DBA: two
completely contrasting approaches to the design of pressure vessels.
The purpose of this section is to summarise the concepts in each of DBF and DBA and indicate
some of the safety and structural reliability implications associated with each.
10.2 DESIGN BY FORMULA (DBF)
10.2.1 DBF Process
The DBF portions of the standard (the overwhelming majority of the document) are concerned
with designing for two situations:
- Static strength in relation to pressure containment
- Fatigue strength in relation to pressure fluctuations.
Other limit states are not addressed because the general tenor of DBF is for limiting stresses
to within the strength of the material. The methodologies employed fall into the category of
allowable stress / permissible stress / working stress approaches.
In essence the pressure vessel is divided into a number of generic parts:
- Main shell (cylindrical; spherical; or rectangular, unreinforced or reinforced)
- Welds (governing or non-governing)
- Ends (dished: hemispherical or torispherical; conical; flat: unpierced, welded or
bolted, pierced; domed bolted)
- Openings (in the main shell: isolated, multiple, or close to a shell discontinuity).
Each part is designed separately, without direct reference to other parts, or consideration of the
system as a whole. It should be emphasised, however, that the system is largely statically
determinate. There is no structural redundancy available, although a certain amount of plastic
redistribution may be possible after first yield and before rupture.
In terms of static strength, analysis is performed using formulae of varying degrees of simplicity
on a component-by-component basis. These formulae are used to size the required thickness of
the component, or for rating purposes to determine the pressure capability (resistance) of an
already existing component.
When sizing a component, the loads taken will be the worst combination of pressure and
temperature. No partial safety factors for either pressure or temperature are specified, although
a relief valve may limit the upper value of the pressure. The calculation temperature includes an
(unspecified) adequate margin of safety to cover uncertainties in temperature prediction. The
resistance of the component is largely a function of the nominal design stress and its geometry.
The nominal design stress is factored from a proof strength, or minimum tensile strength, of the
material at the calculation temperature. In dealing with welds, a further reduction factor on the
nominal design stress is introduced. The required thickness is then augmented by various
amounts to allow for corrosion and tolerance in manufacture / fabrication, and to make up to a
manufacturer's standard thickness.
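By way of illustration of this sizing sequence, the Python sketch below uses the familiar thin-cylinder expression e = p·Di/(2fz - p), which resembles the form of such DBF rules but is not necessarily the Standard's exact formula; all numerical inputs, including the weld joint factor z and the allowances, are hypothetical.

def required_cylinder_thickness(p, D_i, f, z=1.0,
                                corrosion=0.001, tolerance=0.0005):
    """Illustrative DBF-style sizing of a cylindrical shell under internal
    pressure, using the familiar thin-cylinder formula
        e = p * D_i / (2 * f * z - p)
    (p: design pressure, MPa; D_i: inside diameter, m; f: nominal design
    stress, MPa; z: weld joint factor). Allowances for corrosion and
    manufacturing tolerance are then added, as the DBF route requires."""
    e_min = p * D_i / (2.0 * f * z - p)
    return e_min + corrosion + tolerance

t = required_cylinder_thickness(p=2.0, D_i=1.2, f=150.0, z=0.85)
print(f"Required thickness (before rounding up to a standard plate) "
      f"= {t * 1000:.1f} mm")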
In rating a component, only the analysis thickness may be used. The analysis thickness is the
thickness that remains after excluding the corrosion allowance and any otherwise advantageous
tolerance allowances.
In considering fatigue strength, two types of analysis are permitted; they are similar in
principle and differ only in the complications introduced to calculate stress ranges. The 'loads'
may be interpreted as pressure fluctuations. These are converted to stress ranges, and a variety
of correction factors is applied to cover effects due to thickness, temperature, notches and
elastic-plastic cycle conditions. The 'resistance' may be interpreted as the number of cycles to
failure at a particular stress range, determined via tables and curves for the appropriate welded
details. The nominal design stress is used in the computation, and the partial safety factor
associated with it enters in a way similar to the static pressure assessment. Otherwise, there
appear to be no safety factors applied to the pressure fluctuations.
10.2.2 Reliability Implications of DBF
In essence, a pressure vessel is a series system. The failure of any one of the individual parts
(as outlined above) would result in failure of the system as a whole.
The term 'failure' must be carefully qualified. In terms of static strength in relation to pressure
containment, the methods used to size and/or rate the vessel are based on limiting stresses, so
'failure' in this context may imply the attainment of some factored value of the nominal design
strength of the vessel material. All the potential variability of the parameters pertinent to the
vessel design is deemed to be accounted for by the safety factor applied to the nominal design
strength. The true safety factor will not be known, but will be influenced partly by the facility
to deform plastically after first yield.
In the case of a cylindrical shell, for example, the collapse pressure (i.e. the pressure at which
full plasticity through the shell wall thickness is attained) exceeds the pressure at which first
yield is reached by the factor:
\[ \frac{2\ln\left(D_e/D_i\right)}{1-\left(D_i/D_e\right)^2} \]
where De and Di are the outside and inside diameters, respectively.
At the limiting thickness to outside diameter ratio of 0.16, specified in Subsection 4.2.1 above,
this factor is 1.43. The factor reduces rapidly as the thickness to outside diameter ratio falls: at
a ratio of 0.1 it becomes 1.24. Nevertheless, except in the case of very thin-walled vessels,
there is some reserve of strength over that at which yield first occurs, contributing in an
unquantified way to an enhancement of the safety factor.
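The quoted factors can be verified directly with a few lines of Python:

import math

def collapse_to_first_yield_ratio(e_over_De):
    """Ratio of full-plasticity (collapse) pressure to first-yield pressure
    for a cylinder: 2*ln(De/Di) / (1 - (Di/De)**2),
    with Di/De = 1 - 2*(e/De) for wall thickness e."""
    d_ratio = 1.0 - 2.0 * e_over_De  # Di / De
    return 2.0 * math.log(1.0 / d_ratio) / (1.0 - d_ratio ** 2)

for r in (0.16, 0.10):
    print(f"e/De = {r:.2f}: collapse / first-yield "
          f"= {collapse_to_first_yield_ratio(r):.2f}")
# Prints 1.43 and 1.24, matching the values quoted above.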
The DBF approach has a number of other in-built features that have been developed
empirically over time and that have been shown to produce acceptable designs; with this
approach it has not been necessary to consider reliability explicitly.
Whilst, in structural reliability terms, variability in the material nominal design strength and
dimensions could be introduced into the simple DBF equations and the latter used as failure
functions, no information is available regarding the model uncertainty of the formulations. The
probability of failure of an individual component would be governed by the variability in the
material yield or proof strength and the component's dimensions. On the positive side, given
the prescriptive nature of DBF, there would be no variability in designs produced by different
operatives using the same inputs; the model uncertainty, although not known, would be the
same in all cases.
A further possible complication should be understood. Temperature may form part of the
loading along with pressure, and will also affect the nominal design strength of the vessel
material, thus appearing on both the load and resistance sides of the failure functions.
This should be recognised in any structural reliability formulation.
Finally, the system view of the pressure vessel should be taken, and this may include allowance
for the presence of any interacting systems, such as a pressure relief device. In such an
example, the probability of failure of the system will be the union of the probabilities of failure
of all the generic components (as outlined above) taken in intersection with the probability
that the pressure relief valve will not function.
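A crude numerical illustration of this union-and-intersection statement is sketched below in Python, assuming mutual independence throughout (a simplification) and using entirely hypothetical probabilities.

def series_system_pf(component_pfs, p_relief_fails=1.0):
    """Failure probability of a series system of components, intersected
    with the probability that the relief device fails to act. Assumes
    mutual independence throughout (a simplification)."""
    p_all_survive = 1.0
    for pf in component_pfs:
        p_all_survive *= (1.0 - pf)
    return (1.0 - p_all_survive) * p_relief_fails

# Hypothetical annual component failure probabilities: shell, ends, welds,
# openings; relief valve fails to operate with probability 1e-2.
pf = series_system_pf([1e-5, 2e-5, 5e-5, 1e-5], p_relief_fails=1e-2)
print(f"System failure probability = {pf:.2e} per year")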
10.3 DESIGN BY ANALYSIS (DBA)
10.3.1 DBA Process
The DBA process appears to be a true limit state, partial safety factor approach, dealing with
actions and resistance(s). Multiplying and dividing characteristic values of action and
resistance, respectively, by partial safety factors gives design values of these quantities.
The characteristic values of actions depend upon whether they are: permanent, variable,
exceptional, or pressure and/or temperature. In addition, the characteristic value taken for an
action depends on the uncertainty in its data via the magnitude of the coefficient of variation of
its statistical distribution. A fractile of 95% or 97% is used depending on whether the action is
permanent or variable. The characteristic values for pressure/temperature are set as reasonably
foreseeable highest and lowest values rather than statistically defined, and may be governed,
for example, by the existence of a pressure relief valve.
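The fractile definition of a characteristic value is easily illustrated. The Python sketch below uses the standard library's NormalDist; the action magnitudes and coefficients of variation are hypothetical, and the pairing of the 95% and 97% fractiles with the variable and permanent action types is illustrative rather than taken from the Draft Standard.

from statistics import NormalDist

def characteristic_value(mean, cov, fractile):
    """Characteristic value as a fractile of a normal distribution with
    the given mean and coefficient of variation (cov)."""
    return NormalDist(mu=mean, sigma=cov * mean).inv_cdf(fractile)

# Hypothetical actions; the fractile-to-action-type pairing is illustrative.
print(f"permanent action: {characteristic_value(100.0, 0.05, 0.97):.1f}")
print(f"variable action:  {characteristic_value(50.0, 0.20, 0.95):.1f}")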
Resistances are deemed to be constructed from geometric data and material strength. Nominal
values of dimensions are to be used as characteristic values, along with minimum guaranteed
material strength(s) which include the effects of temperature.
Partial safety factors on individual actions depend on whether the action is permanent, variable,
or pressure/temperature; exceptional actions are not covered quantitatively. Values range
through 1.0, 1.2, 1.35 and 1.5. Rules are provided for dealing with combined actions, including
numerical factors to be applied in specific cases to allow (presumably) for the improbability of
extreme events occurring together.
The partial safety factors related to resistance are, in fact, material factors applied to the
characteristic material strength. The values depend on a number of factors, including the
loading case (operational, hydraulic test, and so forth) and the limit state. Additionally, partial
safety factors are quantified by material type and by the characteristics of the stress-strain
curve of the material.
A large number of limit states is enumerated, which are classified in the normal way as being
ultimate or serviceability. Some are considered in more detail, and amongst them two structural
ultimate limit states are of principal interest: ductile rupture/gross plastic deformation and
progressive plastic deformation (ratcheting). In sharp contrast to the majority of the document
dealing with DBF, no simple formulae are given for the analysis of pressure vessels in the DBA
context.
In the case of ductile rupture/gross plastic deformation, a global (possibly, in the sense of the
complete pressure vessel) analysis is suggested (possibly, as a finite element analysis). The
implication is for a limit load/collapse type global analysis involving an elastic-perfectly plastic
(or rigid-perfectly plastic) material response coupled with small deformation theory. The
actions are set to design values, are applied as proportional loading to the structure, and the
material resistance is set to the characteristic value(s). The objective of the analysis is to
increase the actions proportionally, until a limit load or limiting deformation is reached at an
accepted proportion of the design resistance. As an alternative to this, it is suggested that
stresses are monitored and the limiting condition is such that the maximum primary equivalent
stress reaches a safe proportion of the characteristic material strength.
In the case of ratcheting, again a global (possibly, in the sense of the complete pressure vessel)
analysis is suggested (possibly, as a finite element analysis). The implication is for a limit
load/collapse type global analysis involving an elastic-perfectly plastic (or rigid-perfectly
plastic) material response, along with an appropriate flow rule, coupled with small deformation
theory. The objective of the analysis appears to be to ensure that the design is such that the
cycle of principal stresses resulting from the multiple application of the actions always lies
within the chosen yield locus (Tresca or von Mises).
10.3.2 Reliability Implications of DBA
DBA as set out in the document combines limit state concepts with analysis methodology.
In dealing with the limit state concepts, a better understanding of failure is conveyed than
through the DBF route. Failure is more clearly associated with undesirable events or outcomes,
which is conceptually more logical.
Partial safety factors are given in explicit terms. However, it appears that the values of the
partial safety factors given have not been calibrated via structural reliability methods against
acceptable failure probability targets, or previous successfully-performing designs. Instead, it is
believed that the partial factors have been taken from existing codes. The partial factors on the
pressure terms are based on the Danish pressure vessel code, which has been in use for at least
ten years, and additional partial factors for general loading terms are based on Eurocode 3 for
general steelwork design.
Partial factors are generally calibrated to reflect the uncertainty in the loading or resistance
terms that they are applied to. Thus the values of the factors depend on the resistance
formulations, and they inter-relate with the other factors on the loading terms. In the DBA
method the resistance formulations and their associated uncertainty are unknown, since they are
open to choice by the designer, and could include finite element modelling. Thus, the
comparative level of safety associated with any design by this method is unknown.
The Draft Standard is unspecific on the procedure and validation requirements for the DBA
method. In most other general structural engineering applications linear analysis methods,
including finite element methods, are used to determine (component) stresses, and these are
compared against design formulations for (component) resistance which incorporate safety
factors, either implicitly or as explicit partial factors. Since conforming designs produced in this
way are believed to have an acceptable level of safety, the safety factors can be assumed to
inherently include an allowance for the uncertainty associated with determining acting stresses
by using typical structural analysis procedures.
However, in the DBA approach, particularly for the direct approach, the uncertainty associated
with the evaluated acting stresses and the evaluated resistance are largely unknown. For
unusual or complex applications in other structural engineering fields, finite element methods
are sometimes used to determine resistance when it is judged that general design resistance
formulations are not applicable. However, it is general good practice for efforts to be made to
validate the finite element modelling against physical tests and/or against accepted design
formulae.
The use of un-validated modelling, particularly finite element modelling, can introduce
significant uncertainty: for example, through differences introduced by the use of different
software packages, coupled with user differences in meshing, flow rules, convergence criteria,
problem interpretation and so forth. This uncertainty can be reduced by repeating the analyses
independently: by using different analysis packages, different finite element formulations,
and/or different meshes. However, such effort would normally be practical only in special
circumstances.
It may be argued that a specialist designer of pressure vessels will be able to assemble a
portfolio of models and modelling details, along with appropriate experience that will have been
built up over many years. However, it is possible for a relative novice to pressure vessel design
to use the code. Thus with the DBA approach the competence of the designer is much more of
an issue. Whereas it should be relatively straightforward to check a design undertaken by DBF
for compliance with the code, it is very much harder to check and assess a design undertaken by
DBA, since the results of the analysis may depend on subtle modelling assumptions, and so on.
It is important to note, however, that the rules for the DBA methods appear in informative
annexes to the Standard, so inexperienced designers are unlikely to use this approach.
An advantage of the DBA approach is that allowing the use of collapse-type analyses may
result in more efficient, cost-effective designs compared with the limiting stress approach of
DBF. There may, however, be a change in the level of safety between the two methods that
would be very difficult to quantify.
However, because the DBA approach is relatively free of prescriptive rules it is important that
the designer, reviewer or checker, and regulator are aware of the potential problems. The
guidelines in Section 9 of the main report, whilst primarily for assessing reliability and risk
analysis, may prove useful in assessing a design produced by the DBA method. Level 1
guidelines that are particularly relevant are:
- Does the analysed problem provide a complete solution to the real physical problem?
- Is the failure function modelling adequate for predicting failure?
- Have all the consequences of failure been adequately considered?
- Does the stated acceptance criterion represent a reasonable and responsible level of
safety, and, bearing in mind the confidence in the [reliability] analysis, is it adequately
satisfied?
Clearly, the guidelines become more relevant if a reliability analysis of a pressure vessel design
is undertaken and presented. (The direct use of reliability-based analysis and design methods is
not covered by the draft code, but such methods could be used for re-assessment or upgrading
an existing vessel).
In principle, the limit state format lends itself to the performance of structural reliability
calculations. However, unless the analysis methods are simple enough to be incorporated
within a standard reliability analysis software program, via readily programmed failure
functions, the performance of reliability analysis in the DBA context will be very difficult.
Large, multi-purpose finite element programs will not, in general, allow this unless specifically
designed to do so.
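Where a simple failure function can be written down, even a crude simulation illustrates what such a calculation involves. The Python sketch below estimates the failure probability for a failure function g = R - S by Monte Carlo sampling; the resistance and load-effect distributions are entirely hypothetical.

import numpy as np

def monte_carlo_pf(n=1_000_000, seed=1):
    """Crude Monte Carlo estimate of the failure probability for the
    failure function g = R - S, with a lognormal resistance R and a
    normal load effect S (all distribution parameters hypothetical)."""
    rng = np.random.default_rng(seed)
    R = rng.lognormal(mean=np.log(300.0), sigma=0.08, size=n)  # MPa
    S = rng.normal(loc=180.0, scale=25.0, size=n)              # MPa
    return float(np.mean(R - S <= 0.0))

print(f"Estimated Pf = {monte_carlo_pf():.2e}")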
Finally, in structural reliability terms, the system view of the pressure vessel needs to be taken.
This view will differ from that for DBF if each limit state has been considered on a global basis,
rather than a component-by-component basis. Under these circumstances, the probability of
failure of the system will be the union of the probabilities of failure corresponding to each of the
credible limit states. As before, if other systems interact (such as a pressure relief system),
due account of these must be taken.
10.4 CLOSURE
Table 10.1 summarises the main discussion points raised in the preceding subsections on a
comparative basis.
- DBF: allowable stress / permissible stress / working stress approach.
DBA: limit state approach, with partial safety factors on actions and resistance.
- DBF: non-rigorous definitions of failure, restricted to the attainment of a limiting stress.
DBA: clear definitions of failure, related to undesirable events or consequences.
- DBF: uncertainty and level of safety embodied in a single material factor.
DBA: uncertainty and level of safety embodied in a number of partial safety factors.
- DBF: system considered on a component-by-component basis.
DBA: system potentially considered on a global basis.
- DBF: relatively simple equations for the failure functions of components.
DBA: potentially complex (e.g. finite element) models of the system if considered globally.
- DBF: potential for incorporation into reliability calculations if a limited definition of
failure is accepted.
DBA: difficulties for incorporation into reliability calculations unless relatively simple
limit state failure functions can be defined.
- DBF: unknown model uncertainties.
DBA: unknown model uncertainties, with potential for large additional variability between
operatives.
- DBF: uncalibrated safety factors; levels of safety not known explicitly, but acceptable on
the basis of experience.
DBA: partial safety factors believed to be uncalibrated; levels of safety not known.
Table 10.1 Main Discussion Points: DBF and DBA
As far as this case study is concerned, it is the last point in Table 10.1 that is of most
significance.