
An Introduction to Structural Reliability, FORM, and LRFD Design

Steven R. Winterstein

Contents


1 The Basic Safety Margin Formulation
  1.1 Second-Moment Reliability Index
  1.2 The Normal Case
  1.3 Full-Distribution Reliability Index

2 Modern Probabilistic Analysis and FORM
  2.1 The Four Steps of FORM
  2.2 FORM: Summary and Critical Discussion
  2.3 Revisiting M = R - L With FORM
      2.3.1 Applying FORM
      2.3.2 Numerical Results
  2.4 Using Simulation to Improve FORM: Importance Sampling

3 Modern Probabilistic Design: LRFD
  3.1 N-year Failure Probabilities
  3.2 Target Failure Probabilities: Annual vs. N-Year Levels
  3.3 Design Load Factors
  3.4 Varying Return Periods on Nominal Loads
  3.5 Multiple Factors and LRFD

4 The API-LRFD Code for Fixed Offshore Structures
  4.1 Basis for Load Variability
  4.2 Reliability Analysis Using Normal Models
  4.3 Reliability Analysis Using Lognormal Models
  4.4 The Load Factor: Why 1.35?


List of Figures
1  Relation between safety index, β, and failure probability, pF. Asymptotic result given by Eq. 7.
2  The two-dimensional Gaussian probability density function, f(u1, u2). Note that contours of constant probability are circles in the u1-u2 plane.
3  The FORM approximation in the case of n = 2 variables. The true limit state (red) is replaced by its tangent line (green) at the point of minimum distance to the origin. In this case the distance is βFORM = 3; hence FORM estimates pF as Φ(-3) = 1.3×10^-3.
4  The simple limit state M = R - L. The failure region (M < 0) lies above (and to the left of) the red line. Numerical values assumed are mL = 1, σL = 0.36, mR = 2.17, σR = 0.15. R and L are assumed independent and normally distributed; therefore, lines of constant probability are ellipses.
5  FORM solution for the simple limit state M = R - L. The limit state has been transformed to standard normal variables, UR and UL. The failure region is above (and to the left of) the red line. Constant probability contours are now circles with radius β. The most likely point in the failure region is at U*, where the failure boundary intersects the circle with radius βFORM = 3. Thus FORM estimates pF as Φ(-3) = 1.3×10^-3.
6  Nonlinear limit state (red) shown together with (a) the original standard normal sampling density (centered at the origin) and (b) the importance sampling density (centered at the most probable failure point, or design point).
7  Load and Response Probability Density Functions, as shown in the API LRFD Commentary.

List of Tables
1  Relating the safety index β to the failure probability pF.
2  Comparing steps of simulation and FORM methods to estimate the failure probability pF.

1 The Basic Safety Margin Formulation

In designing systems to perform adequately in a probabilistic sense, it is common to separately consider some type of structural load, L, and corresponding resistance, R. The associated safety margin M can then be defined as simply the difference between R and L:

M = R - L    (1)

The problem need not be an actual structure, of course; L and R can more generally be considered the system demand and capacity, defined in consistent units. Given an actual structure, for which we have a probabilistic description of R and L, the analysis problem is to determine the associated probability of failure; i.e.,

pF = P[failure] = P[R < L] = P[M < 0]    (2)

Again, the terminology need not be literal; failure can denote any type of inadequate performance by the system under consideration. The corresponding design problem is to begin with a target pF that is judged acceptable, and seek to determine a design variable (e.g., mean value of the resistance R) adequate to achieve this permissible pF .

1.1 Second-Moment Reliability Index

In early code work, it was common to adopt a second-moment description of the random variables at hand. Here, this would include the mean values of R and L, mR and mL, and their corresponding standard deviations, σR and σL. These can be used to calculate the corresponding mean and standard deviation of M in Eq. 1:

mM = mR - mL    (3)
σM = sqrt(σR² + σL²)    (4)

If R and L are correlated, the correlation coefficient ρR,L would also be needed. Unless otherwise specified, we generally assume here that R and L are independent.

(As noted in the footnote, Eq. 4 assumes R and L are uncorrelated.) This second-moment description is not sufficient to uniquely determine the failure probability, pF. Qualitatively, however, we may argue that safer designs (smaller pF values) can generally be achieved by either

1. increasing the mean, mM, of the safety margin; or
2. decreasing the standard deviation, σM, of the safety margin.

To incorporate both effects, we can define a second-moment reliability index β as

β = mM / σM    (5)

Qualitatively, larger β values are generally associated with greater safety; i.e., they can be achieved by either effect (1) or (2) above. Thus, while full probabilistic design (based on a full probabilistic description of R and L) seeks to achieve a desirably small pF value, second-moment design can seek to achieve a desirably large β value.

1.2 The Normal Case

One can, of course, relate β to pF if the complete probability distributions of R and L are known. A particularly important case occurs when R and L are independent and normally distributed. In this case, M = R - L is also normal, so that pF can be expressed as

pF = P[M < 0] = P[(M - mM)/σM < -mM/σM] = P[UM < -β]

in which β is defined as in Eq. 5, and UM = (M - mM)/σM is the standard normal variable (with zero mean and unit variance) associated with M. Using Φ(u) to denote the cumulative distribution function (CDF) of a standard normal variable, this result becomes

pF = Φ(-β)    (6)


Figure 1: Relation between safety index, β, and failure probability, pF. Asymptotic result given by Eq. 7.

The variation of pF with β is shown in Table 1. For large β, pF is asymptotically given by the analytical expression

pF ≈ (1 / (β sqrt(2π))) exp(-β²/2)    (7)

Figure 1 shows that this asymptotic result accurately follows the exact results for large β values.
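As a quick numerical check, the exact relation in Eq. 6 and the asymptotic expression in Eq. 7 can be compared directly. The sketch below uses only the Python standard library; the function names are ours:

```python
import math

def phi_cdf(x):
    # Standard normal CDF, written via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def pf_exact(beta):
    # Exact relation, Eq. 6: pF = Phi(-beta)
    return phi_cdf(-beta)

def pf_asymptotic(beta):
    # Asymptotic approximation for large beta, Eq. 7
    return math.exp(-beta**2 / 2.0) / (beta * math.sqrt(2.0 * math.pi))

for beta in (1.0, 2.0, 3.0, 4.0, 5.0):
    print(beta, pf_exact(beta), pf_asymptotic(beta))
```

The printed ratio of the two columns approaches 1 as β grows, which is the sense in which Eq. 7 is asymptotic.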

1.3 Full-Distribution Reliability Index


Eq. 6 can be inverted to express β in terms of Φ⁻¹, the inverse normal CDF:

β = -Φ⁻¹(pF) = Φ⁻¹(1 - pF)    (8)

β      pF = Φ(-β)         β      pF = Φ(-β)
1.0    1.6×10^-1          0.84   0.2
1.5    6.7×10^-2          1.28   0.1
2.0    2.3×10^-2          1.64   0.05
2.5    6.2×10^-3          2.05   0.02
3.0    1.3×10^-3          2.33   0.01
3.5    2.3×10^-4          3.09   10^-3
4.0    3.2×10^-5          3.72   10^-4
4.5    3.4×10^-6          4.26   10^-5
5.0    2.9×10^-7          4.76   10^-6
                          5.21   10^-7

Table 1: Relating the safety index β to the failure probability pF.

In modern reliability analysis, Eq. 8 is used to define the reliability index β. In other words, given full probability distributions of R and L (whether normal or not), the failure probability pF can be evaluated, and Eq. 8 used to define a corresponding reliability index, β, from this full-distribution information. Eq. 8 thus replaces Eq. 5 in general; the two coincide only when M is normally distributed. In this case Eq. 5 is a useful basis for load factor design, as shown below.

2 Modern Probabilistic Analysis and FORM

In modern reliability analysis practice, it is common to adopt a limit-state formulation. This associates adequate performance of an engineering system with a positive value of the safety margin M. Conversely, a negative value of M implies failure:

M = g(X1, ..., XN) < 0  ⇔  Failure    (9)

Here, the Xi's represent random variables, which may relate to loads, material properties, geometry, etc. Of course, the simplest situation is merely

M = R - L    (10)

in terms of some type of structural load, L, and corresponding resistance, R. In actual practice, though, the situation is generally somewhat more complex. Complicating factors include the following:

- The number of random variables, N, may become large. Loads may be decomposed into different sources (e.g., wind, wave, gravity, etc.). Each of these loads, in turn, may be split into underlying natural effects (e.g., wind speed, wave height) and influence coefficients that relate these effects to load levels (e.g., drag coefficients). Considering the resistance R, separate random variables may be assigned to represent different element properties in a large finite element mesh.

- Simple probability models (e.g., normal, lognormal) may be deemed insufficiently accurate for the case at hand. Indeed, early probability work adopted these models as much for the sake of computational convenience as for accuracy.

- It may be inadequate to make the simplifying assumption that the random variables Xi are either mutually independent or perfectly correlated. As more data are collected on environmental effects (e.g., simultaneous values of wind speed, wave height and period, current speed, etc.), one wishes to accurately model their imperfect correlations. And, again considering a finite element model, strengths of neighboring elements may be correlated, to a degree directly related to the element length assumed: a finer mesh will lead to more elements, whose properties are more highly correlated.

- Finally, the limit state function g may be increasingly complex; in particular, it may no longer represent a closed-form function but rather an algorithm (e.g., a finite element analysis).

It has become common to treat these problems of structural reliability, with all their complexity, with the methods of FORM and SORM (First- and Second-Order Reliability Methods). These are described below. Additional details can be found in various textbooks; e.g., Methods of Structural Safety by H.O. Madsen, S. Krenk, and N.C. Lind (Prentice-Hall, 1986).

2.1 The Four Steps of FORM

The FORM method can be viewed as a set of four steps, which are discussed in detail in this section. The following section summarizes the steps briefly and provides additional critical perspective. Therefore, readers already possessing some knowledge of FORM, or those seeking only a broad overview, may first consider the next section, and return to this section for clarification as necessary. The four steps of FORM are as follows:

1: Formulation. The basic mechanics of the problem are defined through the limit-state function. Specifically, it is required to define the function gX:

M = gX(X1, ..., XN) < 0  ⇔  Failure    (11)

Here we use the notation gX, with subscript X, to emphasize that the safety margin is given in terms of the physical random variables Xi (e.g., loads and resistances).

2: Transformation. While step 1 describes the mechanical situation, it does not provide a statistical characterization of the random variables Xi. This is the purpose of step 2. There are, of course, many ways to characterize a probability distribution; e.g., a single random variable Xi is equivalently characterized by its probability density function, fXi(x), its cumulative distribution function, FXi(x), or a transformation that relates Xi to a random variable of known distribution type, such as the standard normal.

It is the last of these that is most convenient for FORM purposes. Specifically, the (vector) transformation T is required that relates the vector of physical variables, X, to a corresponding vector of independent, standard normal variables (denoted U):

X = T(U)    (12)

In the special case where all the Xi are independent, this transformation decouples into a set of one-dimensional functional relations:

Xi = Ti(Ui) = FXi⁻¹[Φ(Ui)] ;  equivalently, FXi(Xi) = Φ(Ui)    (13)

In words, this result states simply that each percentile of U is mapped to the corresponding percentile of X: the median of U (U = 0) is mapped to the median value of X, the 95% fractile of U (U = 1.64) is mapped to the 95% fractile of X, etc. If the Xi are correlated, a hierarchical set of transformations such as Eq. 13 is developed conditionally; e.g., first for X1, then for X2 given X1, then for X3 given X1 and X2, etc. This is known as the Rosenblatt transformation. The end result of steps 1 and 2 is a limit state function g(U), a function of standard normal variables (hence no subscript X) whose negative value connotes failure:

pF = P[M = g(U) < 0] ;  g(U) = gX[X = T(U)]    (14)
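The percentile-matching property of Eq. 13 is easy to verify for a distribution whose inverse CDF is available in closed form. As a small illustration (the lognormal variable below is an assumed example, not one from the text), note that for X = exp(μ + σZ) the composition FX⁻¹[Φ(u)] reduces to exp(μ + σu):

```python
import math

def phi(u):
    # Standard normal CDF
    return 0.5 * (1.0 + math.erf(u / math.sqrt(2.0)))

# Assumed lognormal parameters, chosen only for illustration
MU, SIGMA = 0.5, 0.2

def lognormal_cdf(x):
    # CDF of X = exp(MU + SIGMA*Z), Z standard normal
    return phi((math.log(x) - MU) / SIGMA)

def T(u):
    # Eq. 13: X = F_X^{-1}[Phi(u)]; closed form for the lognormal
    return math.exp(MU + SIGMA * u)

# Each percentile of U maps to the same percentile of X:
for u in (-1.64, 0.0, 1.64):
    print(u, phi(u), lognormal_cdf(T(u)))
```

The two printed probability columns agree: the median of U maps to the median of X, the 95% fractile to the 95% fractile, and so on.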

Note that FORM requires the limit state g(U) in terms of U. As Eq. 14 shows, this is found in practice by successively applying the transformation in Eq. 12 and then the mechanical limit state in Eq. 11. In computer implementations, these are commonly two separate subroutines.

3: Computation. It is important to note that as yet no approximation has been made; one can always transform the physical variables to a standard normal vector U. Thus, Eq. 14 gives the exact failure probability pF. Evaluation of Eq. 14 generally involves some type of approximation, however. One approach is simulation, in which many outcomes of the standard Gaussian vector U are generated, and pF is estimated by the fraction of these for which failure occurs (g(U) < 0). FORM/SORM, in contrast, seeks only the most likely value of U within the failure region. We denote this value as U*, sometimes described


Figure 2: The two-dimensional Gaussian probability density function, f(u1, u2). Note that contours of constant probability are circles in the u1-u2 plane.

as the design point. In terms of the standard normal variables Ui, the probability density decreases exponentially with the square of the distance, |U|, between U and the origin. This is illustrated by Fig. 2. The most likely point on the failure surface is also the most likely point within the failure region; this is the value of U for which

|U| is minimum, subject to g(U) = 0    (15)

Eq. 15 is an optimization problem, involving a constraint g(U) = 0 that is typically nonlinear. Various constrained optimization routines are available for this purpose. Once this point is found, its distance from the origin is denoted βFORM:

βFORM = min |U| subject to g(U) = 0    (16)

Here we use the subscript FORM to distinguish this choice of β from others, e.g., the second-moment definition β = E[M]/σM.

4: Approximation. Finally, FORM replaces the true limit state, g(U) = 0, by a linear limit state that is tangent to g at the design point. With this linearized g-function, the corresponding failure probability is

pF,FORM = Φ(-βFORM)    (17)

This estimate becomes increasingly accurate as failures become more rare (smaller pF), and the actual details of the failure surface away from the design point become increasingly irrelevant. The SORM method uses the curvatures at the design point to fit a second-order surface, and derives correction factors on Eq. 17 based on these curvatures. Figure 3 illustrates the situation in the simple case of 2 variables. The true limit state, g(u1, u2) = 0, generally varies in a nonlinear fashion. The minimum distance point is found here to be at βFORM = 3; hence FORM estimates pF to be Φ(-3) = 1.3×10^-3. Note that this value is in fact the probability of falling above (to the right of) the green line rather than the true red curve; hence FORM is conservative in this case.
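One common scheme for the constrained minimization of Eq. 16 is the HL-RF (Hasofer-Lind/Rackwitz-Fiessler) iteration, which repeatedly replaces g by its tangent plane and projects onto it. The sketch below applies it, with finite-difference gradients, to a hypothetical nonlinear limit state of our own choosing (not one from the text):

```python
import math

def g(u):
    # Assumed nonlinear limit state in standard normal space; failure when g < 0.
    # Its true minimum-distance point is (2, 2*sqrt(2)), so beta = sqrt(12).
    return 4.0 - u[0] - 0.25 * u[1]**2

def grad(f, u, h=1e-6):
    # Forward-difference gradient: N extra g-evaluations per iteration
    return [(f([u[j] + h * (i == j) for j in range(len(u))]) - f(u)) / h
            for i in range(len(u))]

def form_beta(g, n, iters=50):
    # HL-RF update: u_{k+1} = [(grad . u_k) - g(u_k)] grad / |grad|^2
    u = [1.0] * n                      # starting point (away from the saddle at 0)
    for _ in range(iters):
        dg = grad(g, u)
        scale = (sum(a * b for a, b in zip(dg, u)) - g(u)) / sum(a * a for a in dg)
        u = [scale * a for a in dg]
    return math.sqrt(sum(a * a for a in u)), u

beta, u_star = form_beta(g, 2)
pf_form = 0.5 * (1.0 + math.erf(-beta / math.sqrt(2.0)))   # Eq. 17
print(beta, u_star, pf_form)
```

At convergence the iterate lies on the failure surface (g(u*) = 0) at minimum distance from the origin, which is exactly the design point sought in step 3.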

2.2 FORM: Summary and Critical Discussion

In summary, the four steps of a FORM analysis are as follows:

1: Formulation. The mechanics of the problem are defined through the limit state gX:

M = gX(X1, ..., XN) < 0  ⇔  Failure

2: Transformation. The statistics of the problem are defined through a transformation T to a standard normal vector U:

X = T(U)


Figure 3: The FORM approximation in the case of n = 2 variables. The true limit state (red) is replaced by its tangent line (green) at the point of minimum distance to the origin. In this case the distance is βFORM = 3; hence FORM estimates pF as Φ(-3) = 1.3×10^-3.

3: Computation. The minimum distance to the failure surface, βFORM, is found numerically by solving the optimization problem

βFORM = min |U| subject to g(U) = gX(X = T(U)) = 0

4: Approximation. The failure probability is estimated from βFORM to be

pF,FORM = Φ(-βFORM)

Steps 1 and 2 generally take the form of algorithms (subroutines), supplied by the analyst for each new problem at hand. Standard FORM computer routines commonly have a distribution library; that is, they have implemented

the transformation in step 2 for various types of probability distributions. In this case, the user merely specifies which type of probability model should be assigned to each variable Xi in the physical limit state; step 2 is then carried out automatically. The optimization problem in step 3 is the main computation of the FORM technique. Because the constraint g(U) = 0 is generally nonlinear, this optimization is typically more costly than either (1) unconstrained optimization or (2) optimization under linear constraints (i.e., linear programming). Again, a standard FORM computer implementation will contain such an optimization routine. Typically it involves gradient search; i.e., at each iteration the gradient of g with respect to each variable is required. This in turn requires (N + 1) evaluations of g: the original one and N additional calls, each with one of the variables incremented slightly. If convergence requires I iterations, say, the net requirement is then I(N + 1) evaluations of g. (Typical convergence may require on the order of I = 10 iterations, at least for fairly well-behaved problems.) These considerations are particularly relevant for cases where evaluation of the g-function is relatively costly; e.g., involving a separate finite element analysis.

As a historical matter, development of FORM/SORM techniques dates back to the 1970s and '80s. As a result, these techniques are often imperfectly understood, and generally limited in their use to the structural engineering community. Far more use is made of general Monte Carlo simulation, which can of course be used to solve the same problem. It is unfortunate that FORM is less well-known than simulation, and often viewed as its competitor. While simulation has the advantage of conceptual simplicity, its main disadvantage lies in its growing expense as failures become more rare. Accurate results require sufficient simulations to generate at least several failures, perhaps on the order of 10.
This will require roughly 10/pF simulations; e.g., 10,000 simulations if pF = 10^-3, 100,000 simulations if pF = 10^-4, etc. The cost of FORM, however, does not grow with the rareness of failures; in fact, the FORM estimate of pF becomes increasingly accurate as failures become rarer. Thus, simulation and FORM should be viewed as complementary methods: the former more suitable for frequently occurring events, and the latter for cases of rare events (e.g., structural failures). Table 2 summarizes the conceptual similarities and differences between simulation and FORM. FORM also provides additional insight beyond its pF estimate. Specifically, its design point gives the combination of random variables, Xi, most likely to cause failure. This information can suggest conditions on which to

Step | SIMULATION                                 | FORM
1    | Simulate vector U of independent,          | Consider vector U of independent,
     | uniform variables                          | normal variables
2    | Transform to physical variables: X = T(U)  | Transform to physical variables: X = T(U)
3    | Calculate safety margin M = g(U)           | Calculate safety margin M = g(U)
4    | If M < 0, Nfail = Nfail + 1;               | For M = 0, find |U|;
     | repeat over Nsim simulations               | repeat until β = min |U| is found
pF   | Estimate pF as Nfail/Nsim                  | Estimate pF as Φ(-β)

Table 2: Comparing steps of simulation and FORM methods to estimate the failure probability pF.

focus additional modelling attention. It can also suggest conditions to be used as the basis for LRFD design checking conditions.
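The simulation column of Table 2 can be sketched in a few lines of Python. Here we draw the normal variables directly (so steps 1 and 2 collapse into one call), using the moments assumed later in Sec. 2.3.2, for which the exact answer Φ(-3) is known:

```python
import math
import random

random.seed(1)  # fixed seed for reproducibility

# Moments of the numerical example of Sec. 2.3.2
mR, sR = 2.17, 0.15
mL, sL = 1.00, 0.36

N_SIM = 200_000
n_fail = 0
for _ in range(N_SIM):
    # Steps 1-3: simulate physical variables, evaluate the safety margin
    R = random.gauss(mR, sR)
    L = random.gauss(mL, sL)
    if R - L < 0:          # M < 0 means failure
        n_fail += 1

p_mc = n_fail / N_SIM                       # step "pF": Nfail / Nsim
beta = (mR - mL) / math.hypot(sR, sL)       # second-moment beta (= 3 here)
p_exact = 0.5 * (1.0 + math.erf(-beta / math.sqrt(2.0)))
print(p_mc, p_exact)
```

With pF near 10^-3, the 200,000 samples yield only a few hundred failures, illustrating the cost argument above: rarer failures require proportionally more simulations for the same relative accuracy.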

2.3 Revisiting M = R - L With FORM

To illustrate the four steps of FORM, it is convenient to consider the simplest safety margin case,

M = R - L    (18)

We assume that R and L are normally distributed, with means mR and mL and variances σR² and σL². We further assume that R and L are statistically independent. It follows that M is also normally distributed, with moments

mM = mR - mL ;  σM = sqrt(σR² + σL²)    (19)

Thus, the failure probability pF is given by

pF = P[M < 0] = P[UM = (M - mM)/σM < (0 - mM)/σM]    (20)

Because M is normal, UM is standard normal with distribution function Φ. The failure probability then becomes

pF = Φ(-β) ;  β = mM/σM = (mR - mL)/sqrt(σR² + σL²)    (21)

2.3.1 Applying FORM

We now apply the four FORM steps in this case:

1: Formulation. The physical limit state is simply given as

M = R - L = X1 - X2 = gX(X)    (22)

2: Transformation. Because both R and L are already normal, they can be related to standard normal variables, UR and UL respectively, by simply rescaling and shifting:

R = mR + σR UR ;  L = mL + σL UL    (23)

The safety margin M = g(U), expressed in terms of the standard normal variables UR and UL, is found by substituting Eq. 23 into Eq. 22:

M = (mR + σR UR) - (mL + σL UL) = mM + σR UR - σL UL    (24)

Here we have substituted mM = mR - mL from Eq. 19. We may also rescale by σM = sqrt(σR² + σL²):

M = σM [β - (αL UL + αR UR)]    (25)

in terms of the unitless quantity β = mM/σM and the importance factors αL and αR:

αL = σL/σM ;  αR = -σR/σM    (26)

The squared importance factors, αL² and αR², reflect the relative contributions to the total variance σM² due to L and R respectively. Therefore, αL² + αR² = 1.


3: Computation. The minimization problem here is to minimize UL² + UR², subject to the condition that M (as given in Eq. 25) equals 0. In this simple case it is convenient to use the method of Lagrange multipliers. In general, this method states that to minimize f(U) subject to g(U) = 0, one should seek the unconstrained minimum of the function f(U) + λ g(U). In our case this becomes

min D(UL, UR, λ) = UL² + UR² + λ [β - (αL UL + αR UR)]    (27)

Setting derivatives to zero,

∂D/∂UL = 2UL - λαL = 0  →  UL = λαL/2    (28)
∂D/∂UR = 2UR - λαR = 0  →  UR = λαR/2    (29)
∂D/∂λ = 0  →  β = αL UL + αR UR    (30)

Substituting Eqs. 28-29 into Eq. 30,

λ = 2β/(αL² + αR²) = 2β    (31)

Using this result to eliminate λ, Eqs. 28-29 yield the final results for the design point (most likely point in the failure region):

UL* = αL β ;  UR* = αR β    (32)

The distance to this point is then

βFORM = sqrt(UL*² + UR*²) = β = mM/σM = (mR - mL)/sqrt(σR² + σL²)    (33)

4: Approximation. Finally, the FORM estimate of pF follows from Eq. 17:

pF,FORM = Φ(-βFORM) = Φ(-(mR - mL)/sqrt(σR² + σL²))    (34)

Thus, FORM yields the exact pF value in this case, as derived in Eq. 21. This is because the actual limit state in this case is linear in the underlying

normal variables, UL and UR (Eq. 24). More commonly, g will be nonlinear in its standard normal variables U, and Eq. 34 will represent an approximation to the true pF value. Finally, as noted earlier, the design point (UL*, UR*) also contains useful information. The values found in Eq. 32 can be transformed, with Eq. 23, to find the most likely failure point in terms of the original load and resistance variables:

L* = mL + σL UL* = mL + σL αL β = mL (1 + β |αL| VL)    (35)
R* = mR + σR UR* = mR - σR |αR| β = mR (1 - β |αR| VR)    (36)

in terms of VL and VR, the coefficients of variation of L and R, and the absolute importance factors |αL| = σL/σM and |αR| = σR/σM. The terms in parentheses can be viewed as (unitless) factors on the mean load and resistance; i.e., factors that scale up the mean load (and scale down the mean resistance) to account for their relative variabilities, and for the desired reliability level β. This is the topic of LRFD; i.e., Load and Resistance Factor Design methods.

2.3.2 Numerical Results

It is perhaps useful to illustrate the foregoing results with numerical values. Specifically, assume that the load L and resistance R have the following moments:

mL = 1.00 ;  σL = 0.36 ;  mR = 2.17 ;  σR = 0.15

The units here are rather arbitrary; we essentially express all values as multiples of the mean load mL. Figure 4 shows the resulting limit state function, M = R - L, together with lines of constant probability. Because σL > σR, these iso-probability lines take the form of ellipses.

1: Formulation. The limit state function is simply

M = R - L = X1 - X2 = gX(X)

2: Transformation. Because L and R are independent and normal, they are linearly related to UL and UR:

L = mL + σL UL = 1.00 + 0.36 UL
R = mR + σR UR = 2.17 + 0.15 UR


Figure 4: The simple limit state M = R - L. The failure region (M < 0) lies above (and to the left of) the red line. Numerical values assumed are mL = 1, σL = 0.36, mR = 2.17, σR = 0.15. R and L are assumed independent and normally distributed; therefore, lines of constant probability are ellipses.

3: Computation. The safety margin M is written as a function of U:

M = R - L = (2.17 + 0.15 UR) - (1.00 + 0.36 UL)
  = 1.17 + 0.15 UR - 0.36 UL
  = 0.39 (3 + (5/13) UR - (12/13) UL)

This last result is in normalized form; i.e., the squares of its coefficients on UR and UL sum to one. Thus U*, the point of minimum distance along the surface M = 0, is given directly by

U* = (UR*, UL*) = 3 (-5/13, 12/13) = (-1.15, 2.77)


Figure 5: FORM solution for the simple limit state M = R - L. The limit state has been transformed to standard normal variables, UR and UL. The failure region is above (and to the left of) the red line. Constant probability contours are now circles with radius β. The most likely point in the failure region is at U*, where the failure boundary intersects the circle with radius βFORM = 3. Thus FORM estimates pF as Φ(-3) = 1.3×10^-3.


4: Approximation. The reliability index βFORM is simply the distance |U*| = sqrt((1.15)² + (2.77)²) = 3. Thus pF is estimated to be

pF,FORM = Φ(-3) = 1.3×10^-3
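The closed-form results above are easy to verify numerically. The sketch below evaluates Eqs. 26, 32-33, and 35-36 for the assumed moments; note in particular that the back-transformed design point satisfies L* = R*, i.e., it lies exactly on the failure boundary:

```python
import math

# Moments from Sec. 2.3.2
mL, sL = 1.00, 0.36
mR, sR = 2.17, 0.15

sM = math.hypot(sR, sL)          # sigma_M = 0.39
beta = (mR - mL) / sM            # Eq. 33
aL, aR = sL / sM, -sR / sM       # importance factors, Eq. 26

# Design point in standard normal space, Eq. 32
uL_star, uR_star = aL * beta, aR * beta

# Back-transformed to physical variables, Eqs. 35-36
L_star = mL + sL * uL_star
R_star = mR + sR * uR_star

pF = 0.5 * (1.0 + math.erf(-beta / math.sqrt(2.0)))   # Eq. 34
print(beta, (uR_star, uL_star), L_star, R_star, pF)
```

The printed design point matches the (-1.15, 2.77) found above, and the importance factors satisfy αL² + αR² = 1.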

2.4 Using Simulation to Improve FORM: Importance Sampling

As shown in Table 2, ordinary Monte Carlo simulation provides a direct method to estimate pF without FORM. Alternatively, if FORM has been used to estimate pF, specialized simulation methods are available to improve this estimate. These commonly use the techniques of importance sampling. In general, FORM/SORM proceeds by

- finding the design point U* (at which failure is most likely); and
- using a linear (FORM) or quadratic (SORM) approximation to g(U) at U*.

Importance sampling seeks to perform a biased simulation, in which the sampling density is centered at the design point, U*, to better study the geometry of the failure surface near its region of highest probability. Figure 6 shows a typical situation. The question then is how to use the results of this biased simulation to construct unbiased estimates of pF. To illustrate, we first consider the basic Monte Carlo method. We rewrite the failure probability as an expected value:

pF = ∫_{g(u)<0} f(u) du = ∫_{all u} I(u) f(u) du = E[I(u)]    (37)

in terms of the indicator function

I(u) = 1 if g(u) < 0 ;  I(u) = 0 if g(u) ≥ 0    (38)

Given Nsim simulations, the basic Monte Carlo method estimates the average value E[I(u)] by its sample mean:

Monte Carlo:  p̂F = (1/Nsim) Σi I(ui) = Nfail/Nsim    (39)

Now we consider the effect of sampling with a different density, h(u), which may for example be centered at the design point U* (e.g., Fig. 6). We may rewrite Eq. 37 as

pF = ∫_{g(u)<0} f(u) du = ∫_{all u} [I(u) f(u)/h(u)] h(u) du = Eh[I(u) f(u)/h(u)]    (40)

Here the notation Eh reflects that the expectation is to be taken with respect to the sampling density h. Analogous to Eq. 39, the expected value in Eq. 40 can also be estimated by the sample mean over the (importance sampled) simulations:

Importance Sampling:  p̂F = (1/Nsim) Σi I(ui) f(ui)/h(ui)    (41)

Comparing this result with the ordinary Monte Carlo estimate in Eq. 39, several differences are notable:

- The importance sampling simulation will have more failures. Hence there will be more non-zero terms in the sum in Eq. 41 than in the ordinary Monte Carlo sum (Eq. 39).

- To compensate for the biased sampling, each non-zero term in Eq. 41 is down-weighted by the ratio of likelihoods, f(ui)/h(ui), between the original and sampling densities.
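These steps can be sketched for the linear limit state of Sec. 2.3.2, whose exact answer Φ(-3) is known. Here the sampling density h is taken as a unit-variance normal centered at the design point (-1.15, 2.77) found there, so the likelihood ratio f(u)/h(u) has the simple closed form used below:

```python
import math
import random

random.seed(2)  # fixed seed for reproducibility

def g(uR, uL):
    # Limit state in standard normal space, from Sec. 2.3.2
    return 1.17 + 0.15 * uR - 0.36 * uL

U_STAR = (-1.15, 2.77)   # FORM design point (most likely failure point)

N_SIM = 5_000
acc = 0.0
for _ in range(N_SIM):
    # Sample from h: unit-variance normal centered at the design point
    uR = random.gauss(U_STAR[0], 1.0)
    uL = random.gauss(U_STAR[1], 1.0)
    if g(uR, uL) < 0:
        # Eq. 41: weight each failure by f(u)/h(u) = exp((|u-u*|^2 - |u|^2)/2)
        d2_f = uR * uR + uL * uL
        d2_h = (uR - U_STAR[0])**2 + (uL - U_STAR[1])**2
        acc += math.exp(-0.5 * (d2_f - d2_h))

p_is = acc / N_SIM
p_exact = 0.5 * (1.0 + math.erf(-3.0 / math.sqrt(2.0)))   # Phi(-3)
print(p_is, p_exact)
```

Roughly half the biased samples now fall in the failure region, so a few thousand simulations suffice here, versus the ~10/pF ≈ 10,000 needed by ordinary Monte Carlo for the same pF.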



Figure 6: Nonlinear limit state (red) shown together with (a) original standard normal sampling density (centered at origin) and (b) importance sampling density (centered at the most probable failure point, or design point).


3 Modern Probabilistic Design: LRFD

For a given engineering system, the goal of probabilistic analysis is to estimate its probability of failure, pF, with respect to a particular mode of failure. For an engineering system yet to be built, the corresponding design question is to seek a design parameter (e.g., adequate size/strength) to ensure that its failure probability, pF, is less than an acceptable level. In modern structural reliability, the analysis problem is commonly addressed with FORM (First-Order Reliability Methods). The inverse, design problem is commonly addressed via LRFD (Load and Resistance Factor Design).

To illustrate concepts, we consider a simple example with a load L and resistance R, and hence a safety margin

M = R - L    (42)

In particular, we consider L to be the annual maximum load experienced by a structure. From loads data, assume we find the coefficient of variation of L to be

VL = σL/mL = 0.3    (43)

(In fact, the API-LRFD code suggests that the 20-year maximum load has coefficient of variation VL = 0.37; this will be revisited in the following section.) Note that Eq. 43 can be equivalently stated as σL = 0.3 mL. To offset this variability in loading, assume further that we have designed the (deterministic) resistance, R, to be 190% of the mean annual load:

R = 1.9 mL    (44)

From Eqs. 43-44, we can immediately calculate the second-moment reliability index from Eq. 5:

β = mM/σM = (mR - mL)/sqrt(σR² + σL²) = (1.9 mL - mL)/sqrt(0² + (0.3 mL)²) = 0.9 mL / 0.3 mL = 3.0    (45)

If we further assume that L (and hence M) is normally distributed, the corresponding failure probability is found from Table 1:

pF = Φ(-3.0) = 1.3×10^-3    (46)

3.1 N-year Failure Probabilities

Recall that by the assumptions made in this example, the load L considered is the annual maximum. The resulting failure probability, pF in Eq. 46, should likewise be considered as the failure probability that applies in a single (arbitrary) year. It may also be of interest to estimate pF(N), the probability of failure over N years (e.g., the service life of the structure). In terms of the annual failure probability pF, the N-year failure probability is roughly N times as large. More precisely,

    pF(N) = P[failure in at least 1 year of N]
          = 1 - P[no failure in N years] = 1 - [1 - pF]^N            (47)

in which pF is the annual failure probability. Note that the last step leading to Eq. 47 assumes failures in different years are statistically independent, and that the annual failure probability pF remains constant in time (i.e., no structural deterioration is considered). For small pF, pF(N) ≈ N·pF. For our numerical example, pF=1.3×10⁻³, so that for an N=20 year structural life, we find

    pF(20) = 1 - [1 - 1.3×10⁻³]^20 = 2.57×10⁻²

which is nearly the same as 20pF=2.60×10⁻².
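Eq. 47 is simple to evaluate directly; a minimal sketch (the function name is ours) reproduces the 20-year figure:

```python
def p_fail_N(p_annual, N):
    # N-year failure probability from a constant annual p_F,
    # assuming statistical independence between years (Eq. 47)
    return 1.0 - (1.0 - p_annual) ** N

pF20 = p_fail_N(1.3e-3, 20)
print(pF20)   # ~2.57e-2, vs. the small-pF approximation 20*pF = 2.60e-2
```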

3.2 Target Failure Probabilities: Annual vs N-Year Levels

As in the foregoing presentation, it has become common to first consider the failure probability pF per year, i.e., an annual pF level. Correspondingly, it has become standard to state acceptable safety levels in terms of maximum permissible values of the annual failure probability, pF. Once stated, any such target value of the annual failure probability, pF, could be immediately translated into an equivalent target value of the N-year failure probability, pF(N), through Eq. 47.

The key issue here is not whether targets are interpreted in terms of the failure probability in 1 year, 20 years, or whatever. The main point is that if worker safety is the governing concern, the acceptable failure probability should not be a function of the service life of the structure. In other words, a worker should not be penalized, in terms of increased safety risk, if he/she happens to work on a structure with a shorter service life. Risks associated with worker safety should be held at a (desirably low) fixed level for all structures, regardless of their service lives. It has become common to express this acceptable worker risk level in terms of annual risk, though this need not be the case.

3.3 Design Load Factors

In our original example, we assumed that the resistance R=1.9mL was given, and found that the associated reliability index was equal to 3. For design purposes, we may set the resistance to a design load level Ldes:

    R = Ldes

We may then go through the same process as in Eq. 45, now with an arbitrary value of β and VL, to find a consistent design load Ldes:

    β = mM/σM = (Ldes - mL)/(VL·mL)

Solving for Ldes yields

    Ldes = γL·mL ;  γL = 1 + βVL                                     (48)

Eq. 48 is a simple expression that defines a load factor γL. Its behavior is intuitively sensible: it raises the design load Ldes above the mean level mL, by an amount that increases with either the desired safety level (increasing β) or the intrinsic load variability (increasing VL). In particular, if we assume VL=0.3 and we require the safety index β=3, we find

    γL = 1 + βVL = 1 + 3(0.3) = 1.9

This corresponds to the value used in our original design (Eq. 44). Of course, the virtue of Eq. 48 is that it can accommodate a wide range of β and VL values.
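Eq. 48 in code form (a trivial helper, name ours), which makes the trade-off between β and VL easy to explore:

```python
def load_factor(beta, V_L):
    # Central load factor on the mean annual load, Eq. 48
    return 1.0 + beta * V_L

print(load_factor(3.0, 0.3))   # 1.9, recovering the original design (Eq. 44)
print(load_factor(4.0, 0.3))   # 2.2: a higher target beta demands a larger factor
```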


3.4 Varying Return Periods on Nominal Loads

The foregoing load factor, γL=1.9, is considerably greater than one; in contrast, the API LRFD associates wave loads with a factor of γL=1.35. One reason for our large load factor here is that it is used to multiply mL, which is the mean annual load. Because we generally require annual pF values on the order of 10⁻³ or less, large load factors are needed if we start with mean annual loads. For this reason, it is common to base design on a nominal load, Lnom, with a return period N >> 1 year (e.g., the N=20- or N=100-year load). Formally, it is simple to adjust the design load formula in Eq. 48 to reflect an arbitrary nominal load Lnom:

    Ldes = γL·Lnom ;  γL = B(1 + βVL) ;  B = mL/Lnom                 (49)

The term B is the bias factor; it rescales the load factor (typically downward) to compensate for the larger nominal load, based on longer return periods than a single year.

For example, if the annual maximum load L follows a normal distribution, the bias factors can be found from results in Table 1. By definition the 20-year load is exceeded with probability 0.05 in an arbitrary year. Therefore, Table 1 shows that L20 is 1.64 standard deviations above mL, the mean annual load; by similar reasoning, L100 is 2.33 standard deviations above mL:

    L20 = mL + 1.64σL = mL(1 + 1.64VL)
    L100 = mL + 2.33σL = mL(1 + 2.33VL)

These results imply that in the normal case, the following Ldes definitions are equivalent:

    Ldes = (1 + βVL)mL                                               (50)
    Ldes = [(1 + βVL)/(1 + 1.64VL)]·L20                              (51)
    Ldes = [(1 + βVL)/(1 + 2.33VL)]·L100                             (52)

To illustrate, we return to the previous numerical example, for which VL=0.3 and β=3. In this case the foregoing results become

    Ldes = 1.90mL                                                    (53)
    Ldes = (1.90/1.49)·L20 = 1.27L20                                 (54)
    Ldes = (1.90/1.70)·L100 = 1.12L100                               (55)

Of course, Eq. 53 is the original design rule with which we began. Eqs. 54-55 are equivalent design rules, based respectively on L20 and L100.

If Eqs. 53-55 are equivalent, why should one be preferred to another? The argument is that by using a longer return period, Eq. 54 and especially Eq. 55 are less sensitive to the precise probability model chosen (here, the normal model). In contrast, Eq. 53, which is based on the annual maximum load, makes the load factor do more work, i.e., extrapolate farther from its nominal (mean annual) load. If sufficient site-specific data are available to estimate L20 or L100, it is preferable to use these as nominal loads, so that the load factor requires a lesser extrapolation and hence is less sensitive to the precise choice of probability model.
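Under the normal-distribution assumption, Eqs. 49-52 collapse into one routine. A sketch (function name ours; statistics.NormalDist supplies the 1.64 and 2.33 fractiles) reproduces Eqs. 54-55:

```python
from statistics import NormalDist

def load_factor_on_nominal(beta, V_L, return_period):
    # Load factor applied to the N-year-return-period nominal load (Eq. 49),
    # assuming the annual maximum load is normally distributed
    k = NormalDist().inv_cdf(1.0 - 1.0 / return_period)  # 1.64 for N=20, 2.33 for N=100
    B = 1.0 / (1.0 + k * V_L)                            # bias factor B = m_L / L_nom
    return B * (1.0 + beta * V_L)

print(load_factor_on_nominal(3.0, 0.3, 20))    # ~1.27, as in Eq. 54
print(load_factor_on_nominal(3.0, 0.3, 100))   # ~1.12, as in Eq. 55
```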

3.5 Multiple Factors and LRFD

The preceding section shows the simplest possible case, in which a single quantity (here, the annual maximum load L) is random. One load factor is therefore sufficient to create a design rule which ensures a specified reliability level. The more general case of LRFD (load and resistance factor design) codes utilizes multiple factors. As its name implies, it generally employs both a resistance factor φR as well as a load factor γL:

    Rdes ≥ Ldes ;  Ldes = γL·Lnom ;  Rdes = φR·Rnom                  (56)

Here Lnom and Rnom are nominal values of the load and resistance, respectively. Lnom is generally some upper percentile of the load, while Rnom is usually some lower percentile of the resistance. As discussed above, by choosing these nominal quantities in the relevant tails of their distributions (rather than at their mean values), more case-specific distribution information is obtained and the factors, φR and γL, need not extrapolate unduly far. In terms of the resistance Rnom, it may also be easier to ask the structural fabricator to ensure a sufficiently high value of a lower-fractile resistance (e.g., by quality control), rather than to require raising the mean resistance level.

In general, factors such as φR and γL (and, more generally, multiple factors on different loads, e.g., dead and live loads) are found numerically, by calibrating a range of test cases to best match a desired reliability level. It has also been suggested that the design checking levels Ldes and Rdes be chosen at or near the most likely failure point, sometimes known as the design point as found in a FORM (First-Order Reliability Method) analysis. In the simple case where R and L are both normally distributed, this results in the design values

    Ldes = mL + αL·β·σL = mL(1 + αL·βVL)                             (57)
    Rdes = mR - αR·β·σR = mR(1 - αR·βVR)                             (58)

in terms of the unitless importance factors, αL=σL/σM and αR=σR/σM. Note that the squares of these sum to one; in the special case where resistance variability can be neglected, σR=0, αR=0, αL=1 and Eq. 57 reduces to Eq. 48. For multiple loads Li, each design load would be αi·β·σLi away from its respective mean value. (Thus loads with smaller variabilities, such as dead loads, require smaller load factors than live loads. This is one of the prime arguments in favor of LRFD codes: compared with codes that use a single safety factor, multiple-factor codes can achieve more uniform reliability across cases with different ratios between dead and live loads.)

In Eqs. 57-58, the quantities in parentheses represent central load and resistance factors; i.e., they scale the central (mean) values of the load and resistance, mL and mR. To adjust these to apply instead to the nominal load and resistance, we equate Eq. 56 and Eqs. 57-58:

    γL = BL(1 + αL·βVL) ;  φR = BR(1 - αR·βVR)                       (59)

in terms of the bias factors, BL=mL/Lnom and BR=mR/Rnom. This result for γL generalizes the single load factor result of Eq. 49. These results will be illustrated below in the discussion of the API-LRFD code.
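The design-point factors of Eqs. 57-59 can be sketched as follows (function and variable names are ours; the numerical values below are illustrative only, chosen in units of the nominal load):

```python
def lrfd_factors(beta, m_L, V_L, m_R, V_R, L_nom, R_nom):
    # Nominal load and resistance factors from the normal-model
    # design point, Eqs. 57-59
    s_L, s_R = V_L * m_L, V_R * m_R
    s_M = (s_L**2 + s_R**2) ** 0.5
    a_L, a_R = s_L / s_M, s_R / s_M      # importance factors; a_L**2 + a_R**2 == 1
    gamma_L = (m_L / L_nom) * (1.0 + a_L * beta * V_L)   # B_L*(1 + a_L*beta*V_L)
    phi_R = (m_R / R_nom) * (1.0 - a_R * beta * V_R)     # B_R*(1 - a_R*beta*V_R)
    return gamma_L, phi_R

# Illustrative statistics, with L_nom = 1 and R_nom set at the mean resistance
gL, pR = lrfd_factors(beta=3.0, m_L=0.7, V_L=0.37, m_R=1.84, V_R=0.11,
                      L_nom=1.0, R_nom=1.84)
print(gL, pR)
```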


4 The API-LRFD Code for Fixed Offshore Structures

Here we discuss the specific LRFD code format adopted by the American Petroleum Institute (API). Considering wave loads only and ignoring dynamics, it adopts a design load of the form

    Ldes = γL·Lnom ;  γL = 1.35 ;  Lnom = L100                       (60)

(Dead loads are assigned a lower factor; e.g., γdead=1.10.) The commentary of this code gives some basis for Eq. 60. Most relevant details are contained in its Figure COMM. LRFD-1, which we reproduce here (Fig. 7). Note that the load, L, that is considered is the maximum 20-year load level. As noted in the figure, it is assumed specifically that:

- The 20-year load L has mean mL=0.7Lnom. (Note from Eq. 60 that the nominal load, Lnom, here is associated with a 100-year return period. Therefore, this assumption states that on average, the maximum load in 20 years is 70% as large as the 100-year load.)
- The coefficient of variation of L is given as VL=0.37.
- The tubular bending resistance, R, has mean mR=1.84Lnom, and coefficient of variation VR=0.11.

4.1 Basis for Load Variability

As is common in such cases, the loading variability (VL=0.37) dominates over that of the resistance (VR=0.11). It is therefore of interest to comment upon the basis of the VL value. A rough justification is to consider the annual maximum load to be caused by the wave with the annual maximum wave height, H. A Morison drag model suggests

    L = Cd·H²                                                        (61)

API RP2A-LRFD: Planning, Designing and Constructing Fixed Offshore Platforms — Load and Resistance Factor Design.


Figure 7: Load and Response Probability Density Functions, as shown in API LRFD Commentary.

An exponent slightly larger than 2 is sometimes suggested here, to reflect inundation effects (i.e., vertically integrating wave kinematics to the exact, time-varying free surface). Taking logarithms of this result,

    ln L = ln Cd + 2 ln H
    σlnL² = σlnCd² + (2σlnH)²
    VL² ≈ VCd² + (2VH)²                                              (62)

(This last result follows from the general relation that σlnX is approximately equal to VX, for VX << 1.) VCd reflects predictive errors in Morison's equation; measured loads on the Ocean Test Structure in the Gulf of Mexico suggest VCd of about 0.25. VH reflects variability intrinsic in nature; i.e., the variation between maximum wave heights in different 20-year periods. Taking VH=0.14, we find

    VL ≈ sqrt((.25)² + (2×.14)²) ≈ 0.38                              (63)

which is roughly the value cited by API. Note that this suggests VL has roughly equal contributions due to natural randomness (in maximum wave height) and predictive error (in Morison's equation).
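The arithmetic of Eqs. 62-63 in a few lines (variable names ours):

```python
import math

V_Cd = 0.25   # predictive error in Morison's equation (Ocean Test Structure data)
V_H = 0.14    # variability of the extreme wave height
V_L = math.sqrt(V_Cd**2 + (2.0 * V_H)**2)   # Eq. 62, for the drag load L = Cd*H**2
print(round(V_L, 2))                        # ~0.38, roughly the API value of 0.37
```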

4.2 Reliability Analysis Using Normal Models

To estimate the precise reliability level, one needs to adopt probability distribution models for L and R. The simplest case is to assume that L and R are independent and normally distributed. Their respective standard deviations are given by

    σL = VL·mL = (0.37)(0.70)Lnom = .2590Lnom
    σR = VR·mR = (0.11)(1.84)Lnom = .2024Lnom

The foregoing results (e.g., Eq. 5) can now be applied:

    β = mM/σM = (mR - mL)/sqrt(σR² + σL²)
      = (1.84 - 0.70)/sqrt((.2024)² + (.2590)²) = 1.14/.3287 = 3.47  (64)

The associated probability of failure (in this 20-year reference period) is then

    pF(20) = Φ(-3.47) = 2.6×10⁻⁴                                     (65)

The annual pF value is roughly 1/20 of this value; it can be found precisely by inverting Eq. 47.
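The normal-model numbers above are easily verified (again using statistics.NormalDist for Φ; variable names ours):

```python
from statistics import NormalDist

# API commentary statistics, in units of the nominal load L_nom = 1
m_L, V_L = 0.70, 0.37     # 20-year maximum load
m_R, V_R = 1.84, 0.11     # tubular bending resistance
s_L, s_R = V_L * m_L, V_R * m_R

beta = (m_R - m_L) / (s_R**2 + s_L**2) ** 0.5   # Eq. 64
pF20 = NormalDist().cdf(-beta)                  # Eq. 65, 20-year reference period
pF_annual = 1.0 - (1.0 - pF20) ** (1.0 / 20.0)  # inverting Eq. 47
print(beta, pF20, pF_annual)
```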

4.3 Reliability Analysis Using Lognormal Models

The numerical result for pF in Eq. 65 relies on the assumption that R and L are normally distributed. Another common model, which is in fact suggested by API in Fig. 7, is the lognormal distribution. If we assume L and R are both lognormal, it follows that ln L, ln R, and hence ln M are normal. The foregoing results (e.g., Eq. 5) can then be applied with statistics of the natural logarithms of the original variables:

    β = mlnM/σlnM = (mlnR - mlnL)/sqrt(σlnR² + σlnL²)
      = ln(m̃R/m̃L)/sqrt(σlnR² + σlnL²)                               (66)

This last result expresses equivalent results in terms of the median values, m̃R and m̃L, of R and L. (Median values are convenient parameters to report for lognormal distributions.) To apply Eq. 66, standard results are available to relate the input statistics to those needed in the analysis:

    σlnL = sqrt(ln(1 + VL²)) = sqrt(ln(1 + (0.37)²)) = .3582
    σlnR = sqrt(ln(1 + VR²)) = sqrt(ln(1 + (0.11)²)) = .1097
    σlnM = sqrt(σlnL² + σlnR²) = sqrt((.3582)² + (.1097)²) = .3746

    m̃L = mL/sqrt(1 + VL²) = 0.70/sqrt(1 + (0.37)²) = .6565
    m̃R = mR/sqrt(1 + VR²) = 1.84/sqrt(1 + (0.11)²) = 1.829

Combining these results, Eq. 66 yields the reliability index

    β = ln(m̃R/m̃L)/σlnM = ln(1.829/.6565)/.3746 = 2.74

and hence the (20-year) failure probability is now given by

    pF(20) = Φ(-2.74) = 3.1×10⁻³                                     (67)
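A sketch of the lognormal calculation (variable names ours; the moment relations are the standard lognormal results quoted above):

```python
import math
from statistics import NormalDist

m_L, V_L = 0.70, 0.37     # API commentary statistics, in units of L_nom = 1
m_R, V_R = 1.84, 0.11

s_lnL = math.sqrt(math.log(1.0 + V_L**2))   # ~.3582
s_lnR = math.sqrt(math.log(1.0 + V_R**2))   # ~.1097
s_lnM = math.hypot(s_lnL, s_lnR)            # ~.3746
med_L = m_L / math.sqrt(1.0 + V_L**2)       # lognormal median of L, ~.6565
med_R = m_R / math.sqrt(1.0 + V_R**2)       # lognormal median of R, ~1.829

beta = math.log(med_R / med_L) / s_lnM      # Eq. 66
pF20 = NormalDist().cdf(-beta)              # Eq. 67
print(beta, pF20)
```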

Note that this failure probability, based on the lognormal model, is nearly 10 times as large as estimated from the normal model (Eq. 65). Because the load variability dominates, the critical modelling issue here is the broadness of the upper tail of the probability density of L. The lognormal model has a considerably broader upper tail than the normal; hence its larger pF prediction. The major point here is that absolute pF values should be interpreted carefully, as they depend on the precise choice of probability model adopted. In view of this, these analyses are perhaps best used in a relative sense: while one may question the absolute values obtained, the relative values across different designs can reasonably be used to suggest their relative safety levels.

4.4 The Load Factor: Why 1.35?

As seen above, precise reliability results depend on precise modelling assumptions. If we use the normal model, for example, and choose the most-likely failure point for design, we reach the result of Eq. 59:

    γL = BL(1 + αL·βVL)                                              (68)

in which αL=σL/σM, which is given here by 0.8. The bias factor BL=0.7, as shown in Fig. 7. As noted above, the absolute reliability level is somewhat in question. Assuming the reliability index β=3, Eq. 68 becomes

    γL = 0.70[1 + (0.8)(0.37)(3.0)] = 1.32                           (69)

which is reasonably close to the reported value of 1.35. Recall that the actual value has not been chosen to coincide with the most-likely failure point in this case; rather, it has been found by calibration across various test cases.
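Eq. 69 in a few lines, under the stated assumptions (αL taken as 0.8 and β as 3; variable names ours):

```python
B_L = 0.70                 # bias factor m_L / L_nom, from Fig. 7
alpha_L = 0.8              # importance factor sigma_L / sigma_M
V_L, beta = 0.37, 3.0      # assumed load variability and target safety index

gamma_L = B_L * (1.0 + alpha_L * beta * V_L)   # Eq. 68
print(round(gamma_L, 2))   # 1.32, vs. the code value of 1.35
```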
