
Analysis of wind-induced fatigue damage for existing
structures

Vlad Leontie,
Ruhr University Bochum,
EKIB Department,
M.Sc. Computational Engineering
11.15.2014

Abstract
How to evaluate fatigue damage for optimally designed structures and for under-designed structures.

Chapter 1
Introduction

Chapter 2
Theoretical basis
2.1 Wind tunnel experiments

2.1.1 Wind tunnel flow

Wind tunnel experiments have been performed using the wind tunnel of the Department of Civil and Environmental Engineering Sciences at the Ruhr University Bochum. The physical model used for the experiments was a simple block-shaped building (cuboid) with no openings or additional elements, as can be seen in Figure 2.1. The dimensions of the cuboid h/d/l follow the edge ratio 0.6/1.0/2.0, corresponding to 94/156/312 [mm].

Figure 2.1: Analysed model with its specific parameters (h, d, l) and wind tunnel flow directions (0° and 90°)
The surface pressures due to wind action are monitored inside the tunnel at 4 different cross-sections, which are positioned at 7.8, 39, 78 and 156 [mm] with respect to the front edge, the latter corresponding to the position at the centre bay. Each section is equipped with 23 taps which are connected to pressure sensors using tubes of 300 mm length. The arrangement of the taps and their specific locations are shown in Figure 2.2.
Inside the tunnel, pressure fluctuations occur which can affect the accuracy of the analysis. The fluctuations inside the channel can be classified into:
Figure 2.2: Position and orientation of the 23 taps on the frame

- Pressure fluctuations (turbulent flow)
- Noise (turbulent fluctuations) originating from the motor or the fan. The respective positions follow the tunnel set-up shown in figure 2.3. The pattern of the noise can be described by sine and cosine waves.
The turbulent flow can be characterised as a mixture of eddies. The basic shape of each eddy can be compared with that of an American football. They can be classified into large, medium and small sized eddies. If one hour is selected, a limited number of eddies is observed. If the sizes from one hour are compared with the average sizes obtained from an infinite number of hours, it can be concluded that a random ensemble is obtained. Each hour is unique, and this leads to differences in the statistical parameters.
The wind tunnel produces flows and fluctuations which induce pressures at the flow level. The noise from the fan and the motor also leads to fluctuations. Therefore, the noise is measured separately in order to correct the pressure signals. However, the methods used for extracting the noise will not be discussed in this Master Thesis.
The noise fluctuations had to be overcome in order to obtain more accurate results; therefore, a floor-mounted tap was used to remove the background noise from the motor. Furthermore, an additional tap containing the noise information is considered, which is required for the pressure computation. The noise produced by the fan was removed using digital filtering.

2.1.2 Object of the study

The purpose of the thesis is to evaluate fatigue damage due to wind-induced loads for well-designed structures and for under-designed structures. The load cycles must be computed in order to quantify the damage accumulated by the structure. The surface pressures and the structural responses of the trapezoidal sheeting elements are also a focus of the study. From the sheeting elements' point of view, two approaches have been considered:

Figure 2.3: Sketch of the RUB wind tunnel with the positions of the motor and fan

Figure 2.4: Two and three field beams (Span 1, Span 2, Span 3)


Structural systems with 1 single bolt which is situated exactly at the tab position.
Structural systems with 3 and 4 bolts which is the same as with considering
systems of continuous beams with two or three fields as it can be seen in figure
2.4.
One of the purposes of considering the two types of sheeting elements, with one bolt and with 3 or 4 bolts respectively, is to be able at the end of the analysis to compare the damage, the damage curves and the damage accumulation. The next computational steps for both approaches are: analysis of the model to obtain the structural responses, introduction of the results into the Rainflow counting method (input), running the algorithm (a FORTRAN routine was created especially for the Rainflow method), obtaining the results, namely the number of fatigue cycles (output), and comparison of the obtained results. For the elements with one bolt, the data necessary to perform the following steps must be provided from the computation of the pressure coefficients at the tap positions, along with the statistical model parameters. The design values of the pressure coefficients for the maximum and minimum matrices will be introduced as structural responses into the Rainflow algorithm. This analysis gives the first set of values, the loading cycles, needed for the comparison.

Figure 2.5: Possible arrangements of two and three span beams


For the elements with 3 or 4 bolts, respectively the two and three span beams, the structure must be covered entirely with specific types of sheeting. Taking into account the possibility of choosing between two and three span beams, there are several options for doing this, for example 4 three-span beams and 5 two-span beams, or 8 two-span beams and 2 three-span beams, or any combination of the above mentioned systems. A sketch of a possible arrangement of the sheeting elements is shown in figure 2.5.

2.1.3 Structural responses

Influence lines
Influence lines are defined for a given action such as axial force, shear force or bending moment, and are graphs which illustrate the variation of that function at a given point of a structure due to the application of a load of unit value on the element. The influence lines vary and depend on the type of action considered: axial force, bending or shear. They are usually generated by applying the unit load independently at several points of the structure and determining the value of the function due to this load at the considered point. The load is repeated at all the required positions and the calculated values are connected together to generate the influence line. [5] The lines can be used for concentrated or for distributed loads. For concentrated loads, a force of unit value is moved along the element at specific points with a relatively dense repositioning; the spacing must be very dense so that the exact graph of the function can be plotted. For distributed loads the procedure is repeated, but instead of a single concentrated load, a distributed load of unit value and specific width is positioned at different stations along the element. Both types, concentrated and distributed loads, follow the same pattern; the effect of a distributed load can be obtained from the concentrated-load case by integrating the influence line of the concentrated force over the loaded length. The influence lines play a very important role when designing the elements, as they show the maximum expected responses of the structural members and ensure that a specific member will not fail under given circumstances.

Figure 2.6: Influence lines for a two span beam [1] (influence coefficient over Span 1 and Span 2: bending moment at mid-span of field 1, bending moment at support 2, vertical support reactions at supports 1 and 2)


They help in identifying and computing the maximum resulting responses for different types of functions such as shear, axial force and bending. For the three span beams the plots are shown in figure 2.7. The influence line graphs for shear and bending for the two and three span beams are drawn in figures 2.6 and 2.7. The way of obtaining such graphs is explained in the following lines. On the Y axis the value of the reaction is plotted; on the X axis the position along the length between the supports is given, e.g. 1, 2 and 3 are the support positions. One can easily notice that at these points the value of the reaction is 1, due to the fact that a force applied at these points produces a reaction equal to itself and of opposite sign. The reference point for the above graph is support 1; the fluctuation of the reaction is observed from this point. E.g. at X = 0.5, i.e. at the mid-point of span 1, the value of the reaction is Y = 0.4; at X = 1.5 the value of the reaction is Y = -0.08 and at X = 2.5 the value Y = 0.03 is obtained (figure 2.7). For the force variation along the line, the value of the reaction is read and recorded only at the vertical support reaction 1.
value of the reaction is read and recorded only at the support vertical reaction 1.
In order to deal with the two and three field beams for the given structure, there are two possible methods of solving this situation: the influence line method, which consists of small steps of load repetition on the two or three span beams for obtaining the influence lines as shown in figure 2.6, or the Hermite polynomial method, which is faster and more accurate than the previous one but requires appropriate knowledge of how the influence lines work and how they are created.
The first method needs an eventual structural system composed of two and three span beams for the sheeting elements, as can be seen in figure 2.8. Afterwards, the unit force is applied repeatedly on each element, the values of the shear forces and bending moments are summarized in a table similar to table 2.1, and the graphs are drawn; a numerical sketch of this procedure is given below.
After finishing the procedure, one would need between 4 and 6 hours for each structural system. From all possible structural solutions of covering the frame with systems of three field beams and two field beams, for a better error refinement all should be analysed.

Figure 2.7: Influence lines for a three span beam [1] (influence coefficient over Span 1, Span 2 and Span 3: bending moment at mid-span of field 1, bending moment at support 2, bending moment at mid-span of field 2, vertical support reactions at supports 1 and 2)

Table 2.1: Systems numbering and their corresponding reactions (systems S1 to S19; for each system the positions of the bending moments M2 and M3 and of the vertical reactions V2 and V3 are listed, the structural responses being numbered consecutively from 1 to 50)


That means a total of 19 structural systems need to be analysed. Taking into account that the average time for one single system is up to 4 hours,

Figure 2.8: Covering system of two and three span beams


one would need about 10 days to finish the whole set of data, doing about 2-3 systems per day. The calculation errors of Robot Structural Analysis and Ruck-Zuck, basic static design programs, are between 1.70375·10^-31 and 2.72545·10^-15, and of course there is also the human error, which is not as small as the program error. The invested time can also lead to fatigue of the analyst, which increases the predisposition to errors.
Taking into account the time volume, the error predisposition, the great amount of data that needs to be analysed and the fact that the working method of the influence lines is already known (how they are obtained and how they can be integrated over the length), the Hermite polynomial method was chosen as the best method for this task.

Hermite polynomials method

A requirement for this method is proper knowledge of how the influence lines work and how they are obtained, and proper knowledge of Hermite polynomials. The algorithm is much shorter than the method presented above and also more accurate. A routine for obtaining the integrals of the influence lines was created using FORTRAN. The first part of the method is similar to the first steps of the previous one: a structural system covering the frame with taps with three span beams and two span beams is selected. For each field of the beams, the Hermite polynomials simulate the diagrams of the influence lines using a third-order polynomial, as a result of applying 23 forces (corresponding to each tap). The next step is a list of influence lengths for each tap and obtaining the integrals of the 19 possible systems over the given length. The output will be 52 structural responses corresponding to the 23 applied forces. After performing this step, the functions and a matrix of 23 (taps) x 53 (structural responses) will be obtained. Using these matrices and Origin, a drawing of the damage curve will be possible, of course taking into account also its randomness.
The Hermite polynomials form an orthogonal polynomial sequence, i.e. each pair of polynomials is orthogonal under the inner product; they are used and encountered in probabilistic and combinatorial calculations. A polynomial sequence can be defined as a sequence of polynomials, usually with positive indices, in which each index represents the degree of the corresponding polynomial. [5]
They are one of the most common examples of Appell sequences, namely polynomial sequences $p_n(x)$, $n = 1, 2, 3, \ldots$ which satisfy the identity

$$\frac{d}{dx}\, p_n(x) = n\, p_{n-1}(x)$$

obeying the umbral calculus [6]. The umbral calculus can be described as the similarity between apparently unrelated polynomial equations and the techniques used to derive them, or as the classification of almost all classical combinatorial identities for polynomial sequences.
The Hermite polynomials are defined as in formula 2.1:

$$H_n(x) = (-1)^n \exp\!\Big(\frac{x^2}{2}\Big)\, \frac{d^n}{dx^n} \exp\!\Big(-\frac{x^2}{2}\Big) \qquad (2.1)$$

They are also known as the probabilists' Hermite polynomials because the term in equation 2.2 is the probability density function of a normal distribution with expected value 0 and standard deviation 1.

$$\frac{1}{\sqrt{2\pi}} \exp\!\Big(-\frac{x^2}{2}\Big) \qquad (2.2)$$

Figure 2.9: Hermite polynomials graph depending on the polynomial degree

In figure 2.9 the graph of the first six Hermite polynomials is represented with respect to the polynomial degree n = 0 to 5. [7] In this method the interest is focused on the polynomials up to the 3rd degree. The shape of the first six probabilists' polynomials is shown below:
$H_0(x) = 1$
$H_1(x) = x$
$H_2(x) = x^2 - 1$
$H_3(x) = x^3 - 3x$
$H_4(x) = x^4 - 6x^2 + 3$
$H_5(x) = x^5 - 10x^3 + 15x$
Properties
$H_3(x)$ is a polynomial of degree 3. Since it is a probabilists' Hermite polynomial, it has a leading coefficient of 1. In this particular method, the polynomials of degree 3 are the ones required for each field of either the two field or three field beams. The polynomial $H_3(x)$ (as well as all $H_n$ polynomials) is orthogonal with respect to its weight function, which is described by equations 2.3 and 2.4:

$$w(x) = \exp\!\Big(-\frac{x^2}{2}\Big) \qquad (2.3)$$

$$\int_{-\infty}^{\infty} H_m(x)\, H_n(x)\, w(x)\, dx = 0, \quad m \neq n \qquad (2.4)$$

They are orthogonal with respect to the normal probability density function. They can be generated with the exponential generating function 2.5. [7]

$$\exp\!\Big(x t - \frac{t^2}{2}\Big) = \sum_{n=0}^{\infty} H_n(x)\, \frac{t^n}{n!} \qquad (2.5)$$

The Hermite polynomials satisfy the recurrence relation 2.6, which is used together with the polynomials of degree 0 and 1 to obtain the values of the higher polynomials.

$$H_{n+1}(x) = x\, H_n(x) - n\, H_{n-1}(x) \qquad (2.6)$$

The Hermite polynomials form an orthogonal basis of the Hilbert space of functions satisfying relation 2.7.

$$\int_{-\infty}^{\infty} |f(x)|^2\, w(x)\, dx < \infty \qquad (2.7)$$

2.2 Basic statistics

2.2.1 Basic statistical parameters

The design values of the pressure coefficients are based on the mean value and standard deviation under the 78 percent fractile assumption.
$$m_x = \frac{1}{N} \sum_{i=1}^{N} x_i \qquad (2.8)$$

$$\sigma_x = \Big[\frac{1}{N-1} \sum_{i=1}^{N} (x_i - m_x)^2\Big]^{0.5} \qquad (2.9)$$

$$\gamma_1 = \frac{\mu_3}{\sigma_x^3} \qquad (2.10)$$

$$\gamma_2 = \frac{\mu_4}{\sigma_x^4} - 3 \qquad (2.11)$$

For the given sets, the statistical parameters are computed using the following formulas for mean 2.8, standard deviation 2.9, skewness 2.10 and kurtosis 2.11. In equations 2.10 and 2.11 the main estimators are used, respectively 3-rd and 4-th central
moment. For these estimators, the formulas 2.12 and 2.13 has been used.
$$\mu_3 = \frac{N}{(N-1)(N-2)} \sum_{i=1}^{N} (x_i - m_x)^3 \qquad (2.12)$$

$$\mu_4 = \frac{N-1}{(N-2)(N-3)} \sum_{i=1}^{N} (x_i - m_x)^4 \qquad (2.13)$$
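A minimal Python sketch of these estimators, assuming the reconstructed prefactors of equations 2.12 and 2.13 and using illustrative input data (it is not the evaluation code used for the wind tunnel records):

```python
import numpy as np

def basic_parameters(x):
    """Mean, standard deviation, skewness and kurtosis following Eqs. (2.8)-(2.13)."""
    x = np.asarray(x, dtype=float)
    N = x.size
    m = x.sum() / N                                              # Eq. (2.8)
    s = np.sqrt(((x - m) ** 2).sum() / (N - 1))                  # Eq. (2.9)
    mu3 = N / ((N - 1) * (N - 2)) * ((x - m) ** 3).sum()         # Eq. (2.12)
    mu4 = (N - 1) / ((N - 2) * (N - 3)) * ((x - m) ** 4).sum()   # Eq. (2.13)
    gamma1 = mu3 / s ** 3                                        # Eq. (2.10)
    gamma2 = mu4 / s ** 4 - 3.0                                  # Eq. (2.11)
    return m, s, gamma1, gamma2

# e.g. one hour of 1 Hz pressure samples, here replaced by synthetic data
print(basic_parameters(np.random.normal(size=3600)))
```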

central moment | describing characteristic | describing parameter
1st            | position                  | mean value m_x
2nd            | dispersion                | variance or standard deviation σ_x
3rd            | shape                     | skewness γ_1
4th            | peakedness                | kurtosis γ_2

Table 2.2: Central moments and basic parameters


The higher central moments follow formula 2.14, with which one can determine the basic statistical parameters depending on the i-th order, as can be seen in table 2.2.

$$\mu_i = \int_{-\infty}^{\infty} (x - m_x)^i\, f(x)\, dx, \quad i = 2, 3, 4 \qquad (2.14)$$

As shown in table 2.2, each statistical parameter describes a certain feature of the set. The variance helps in establishing the spread of the numbers in the set. In the particular case when the variance is 0, all terms are identical. If the variance has a large value, the points of the set are very dispersed and spread out away from the mean value. A variance tending to 0 leads to the opposite conclusion, namely that all values are close to the mean value.
The skewness is used to quantify the asymmetry of a distribution, represented as a real value with regard to its mean. As can be seen in figure 2.10, depending on the value of the skewness the following statements can be made:
- γ_1 < 0: the tail on the left side is longer, indicating a concentration of the mass of the distribution to the right.
- γ_1 = 0: the tails are equal; the distribution can be considered symmetric.
- γ_1 > 0: the tail on the right side is longer, indicating a concentration of the mass of the distribution to the left.

Figure 2.10: Skewness variation


The kurtosis is a parameter used to identify the peak properties of a certain distribution. It gives information regarding the shape and sharpness of the peak. For different kurtosis values, the main existing shapes are shown in figure 2.11.


Figure 2.11: Kurtosis variation


One of the most used and important statistical terms is the median. It is defined as the mid-value of a set of data, separating the lower and upper half of a probability distribution. It is usually obtained by sorting the values of a set and picking the value situated exactly at the middle point. In the case of an even-sized set, the median is obtained from the two middle values.
The correlation coefficient ρ, also known as the linear correlation coefficient, is a measure of statistical dependency. It measures the degree of correlation between two variables and has the form given in formulas 2.15 and 2.16, with m_a, m_b the mean values of a and b and σ_a, σ_b the standard deviations of a and b.

$$\sigma_{ab}^2 = \frac{1}{N-1} \sum_{i=1}^{N} (a_i - m_a)(b_i - m_b) \qquad (2.15)$$

$$\rho_{ab} = \frac{\sigma_{ab}^2}{\sigma_a\, \sigma_b} \qquad (2.16)$$

It takes values between -1 and +1 and can be explained as follows:

- Positive correlation: the values of the two random variables a and b are strongly correlated. This is usually visible on graphs through their dependency, namely an increase of a is accompanied by an increase of b; both values tend to increase when one of them increases. In the specific case when the correlation coefficient reaches the value +1, the two variables are directly proportional.
- Zero correlation: the two variables are random and non-linearly related; they do not have any type of linear relationship between them.
- Negative correlation: the values are inversely proportional; for an increasing a, a decreasing b is obtained. In the particular case when the value -1 is reached, a perfectly negative fit is recorded.
The perfect-fit correlation cases, whether +1 or -1, indicate that the given points are graphically situated on the same straight line.

2.2.2 Extreme value statistics

Extreme value statistics are used to extrapolate specific values for determining rare events. They are based on the peak pressures and peak responses of each individual run. The initially recorded data files, ten in number, are the starting point for the extreme value statistics. Each of the files corresponds to 2 hours of measurements. The data were split into one-hour records, and for each record the local maxima and minima were extracted. The local peaks were computed at each of the 23 taps. For each specific hour, 23 times 2 tap extreme values were recorded. The extremes are gathered and arranged in matrices for the maximum and minimum sets. For each set, the mean values, the standard deviations, the skewness and the kurtosis are computed using formulas 2.8, 2.9, 2.10 and 2.11. The design values of the pressure coefficients are based on the mean values and standard deviations under the 78 percent fractile assumption. Two sets of surface pressure coefficients are obtained, one for the maximum and another for the minimum parameters. The formula for the design surface pressure coefficient is shown in 2.17.


$$CP_{design} = CP_{mean} + 0.636 \cdot CP_{std} \qquad (2.17)$$

For higher order statistics, for each of the two sets of maximum and minimum peaks and their corresponding mean values and standard deviations, the statistical parameters (mean value, standard deviation, skewness and kurtosis) were computed once more. These values are required to generate the normal distribution with its specific values on the probability paper. They also provide useful information regarding the statistical stability and the fitting range.
For the determination of the describing parameters of the probability distributions based on data observations, four approaches have been considered.
The first approach, which was also presented in the sub-chapters above, is called the method of moments. It is based on the central moments shown in equation 2.14 and is a method of estimating the parameters of the set. The computation of the ensemble mean and standard deviation is done using equations 2.8 and 2.9; for the skewness, formula 2.10 has been used. The output obtained by the method of moments is used to determine all other possible parameters for every type of distribution, e.g. finding the shape and scale parameters of the Generalized Pareto distribution. The method leads to biased estimators, but on average it yields consistent parameters.
The second approach is the least squares fit method. It is based on fitting a least-squares line on the probability paper, namely finding the best-fit line for a specific data set. The equation of the cumulative probability distribution must be converted in such a manner that a linear expression for the observation variable is obtained. First, the non-exceedance probability must be evaluated based on the sorted observations. The values are ranked from 1 to N, where N is the ensemble size and the lowest value receives rank 1. The general form of the non-exceedance probability is:

$$f_{rel}(x \le x_i) = \frac{i - \alpha}{N + 1 - 2\alpha} \qquad (2.18)$$
with α between 0 and 0.5. Basically, what can be noticed is that the accuracy of the estimated parameters depends on the ensemble size. This means that for an insufficiently large ensemble size, the estimated values could differ from the real ones. A popular value for α is 0.44, for which the 50 percent true fractile values are obtained. For other values, the method is biased and the median can be either too large or too small.
Mathematically, the least squares fit method can be explained as follows: having N pairs of numbers (x, y), characterized by the linear equation y = ax + b and the error corresponding to the linear equation given in 2.19, the purpose is to find the values of a and b which minimize the error.

$$E(a, b) = \sum_{n=1}^{N} \big(y_n - (a x_n + b)\big)^2 \qquad (2.19)$$

For a general probability density, the method proceeds as follows. For each pair of parameters, the sum of the squared deviations is computed. The input for this method are the results from the method of moments. With respect to these results, the other sets of scale and shape factors are compared via the error sum. At the end, the pair with the smallest error becomes the least-squares solution.
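As an illustration of the probability-paper fit, the following Python sketch applies the plotting positions of equation 2.18 with α = 0.44 and a least-squares line in the sense of equation 2.19 to a Gumbel model; the Gumbel reduced variate and the sample parameters are assumptions made for the example, not values from the thesis:

```python
import numpy as np

def gumbel_least_squares(sample, alpha=0.44):
    """Least-squares fit of a Gumbel distribution on probability paper."""
    x = np.sort(np.asarray(sample, dtype=float))
    N = x.size
    i = np.arange(1, N + 1)
    F = (i - alpha) / (N + 1 - 2 * alpha)     # non-exceedance probability, Eq. (2.18)
    y = -np.log(-np.log(F))                   # Gumbel reduced variate
    slope, intercept = np.polyfit(y, x, 1)    # best line x = u + a*y, Eq. (2.19) sense
    return intercept, slope                   # location u and dispersion a

u, a = gumbel_least_squares(np.random.gumbel(loc=20.0, scale=2.5, size=200))
print(u, a)                                   # approximately 20 and 2.5
```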

The third procedure of identifying the unknown parameters of the probability distributions is the maximum likelihood method. Given a specific set of entry data with a
specific statistical model, the method selects the specific value of the model parameters which maximize the likelihood function. The likelihood function is obtained as
the product of the observed data ai with its corresponding density function:
L(m, , ) =

N
Y

fx (xi , m, , )

(2.20)

i=1

For simplicity, and taking into account the monotonicity of the logarithm, the expression in 2.21 is also used in the computation.

$$l(m, \sigma, \gamma) = \ln\Big(\prod_{i=1}^{N} f_x(x_i;\, m, \sigma, \gamma)\Big) = \sum_{i=1}^{N} \ln f_x(x_i;\, m, \sigma, \gamma) \qquad (2.21)$$

The maximum value of the logarithmic likelihood function is reached where its partial derivatives with respect to the parameters m, σ and γ are zero.
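A minimal numerical sketch of the method, assuming a normal density as the statistical model (the thesis also fits other models), maximizes the log-likelihood of equation 2.21 with a general-purpose optimizer:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def fit_normal_mle(sample):
    """Maximum likelihood fit of an assumed normal model via Eq. (2.21)."""
    sample = np.asarray(sample, dtype=float)

    def neg_log_likelihood(theta):
        m, s = theta
        if s <= 0:
            return np.inf
        return -np.sum(norm.logpdf(sample, loc=m, scale=s))   # -l(m, s)

    start = np.array([sample.mean(), sample.std()])           # method-of-moments start
    res = minimize(neg_log_likelihood, start, method="Nelder-Mead")
    return res.x                                               # (m_hat, sigma_hat)

print(fit_normal_mle(np.random.normal(10.0, 2.0, size=1000)))  # close to (10, 2)
```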
The last approach is the best linear unbiased estimator, also called the BLUE method. It estimates the parameters as a weighted sum of the observations, each observation having its corresponding weighting factor, as given in 2.22 and 2.23.

$$m = \sum_i v_i A_i \qquad (2.22)$$

$$\sigma = \sum_i v_i B_i \qquad (2.23)$$

Given a data set $x_1, x_2, \ldots, x_N$ of N samples and the probability density function $f_x$, the unknown parameter can be defined as:

$$m = \sum_{n=0}^{N-1} a_n\, x[n] \qquad (2.24)$$

where $a_n$ are the weighting factors which have to be computed.

2.3 Wind climate

2.3.1 Basic population

The relevant range of wind velocities is known as the critical wind speed range. Fitting the wind speed measurements to a plausible statistical model is of essential importance, for two reasons. First of all, fatigue failure from fluctuating vortex flows is influenced directly by the analysis of the load cycle data: the computed number of stress cycles sustained by a structure in its lifetime depends on the expected number of critical wind speed hours and on the wind climate. The second reason for fitting the wind speed data with a statistical model is the importance of estimating the wind energy potential of a specific region. Basically, two statistical models for the wind speeds are required: the first one for the low to medium amplitudes of the wind speed and the second one for the extreme values. Usually the first model is needed for computations at the serviceability limit state, while the second model is used for the ultimate limit state. The extreme value wind speed model helps in providing information when computing the specific design values in case of a rare event. Furthermore, the critical wind speed, the number of stress cycles per working life and the number of hours in this specific range are needed. Therefore, the wind climate is one of the basic assets helping to obtain a statistical model for the wind speeds.
The range of amplitudes for low to moderate wind speeds forms the basic population. It is characterized by normal behaviour of the wind flow masses, without exceptional events. For the basic population, the Weibull distribution forms an appropriate model. The probability density function and the cumulative probability function can be described as given in equations 2.25 and 2.26.
$$f(x) = \frac{k}{x_0} \Big(\frac{x}{x_0}\Big)^{k-1} \exp\!\Big(-\Big(\frac{x}{x_0}\Big)^{k}\Big) \qquad (2.25)$$

$$F(x) = 1 - \exp\!\Big(-\Big(\frac{x}{x_0}\Big)^{k}\Big) \qquad (2.26)$$
In the above equations, k and x_0 are the shape and scale factors. Each hour is not similar to another one, and this is why two approaches have been taken into consideration when selecting the specific Weibull parameters. The first approach considers the same values of the scale and shape factors for every year. The second approach states that each year has different values of k and x_0. After performing the analysis, the best solution was chosen and the appropriate values were taken; these assumptions will be reviewed and discussed further in later chapters. One of the disadvantages of selecting the Weibull distribution as a model for the basic population is that it fails to fit the upper tail. Up to one specific point the Weibull distribution is valid, and from that point on the extreme value distributions are used for finding the extremes. Specific limits were set, namely 5 [m/s] as the lower limit and 20 [m/s] as the upper limit. These values were set after performing simulations with different velocities. Below the lower value, the damage accumulation is not significant enough to be taken into account. The upper value was set after performing a least-squares fit of the a and b values with a third-degree polynomial. The a and b values are needed when computing the stress vs. number of cycles diagrams, which are later used inside the normal distribution. The normal distribution is used for determining the intensity of the specific hours.
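As a small illustration of how the basic population enters the fatigue computation, the following Python sketch evaluates the expected number of hours per year inside the critical range 5-20 [m/s] from the Weibull cumulative distribution 2.26; the parameters x0 = 7 m/s and k = 2 are assumed example values, not the fitted values of the thesis:

```python
import math

def weibull_cdf(x, x0, k):
    """Cumulative Weibull probability, Eq. (2.26)."""
    return 1.0 - math.exp(-(x / x0) ** k)

def critical_hours_per_year(x0, k, v_low=5.0, v_up=20.0, hours_per_year=8766.0):
    """Expected hours per year with hourly mean wind speed in the critical range."""
    return hours_per_year * (weibull_cdf(v_up, x0, k) - weibull_cdf(v_low, x0, k))

print(critical_hours_per_year(x0=7.0, k=2.0))   # roughly 5000 hours for these values
```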

2.3.2 Storm

For extreme values of the wind speed, extreme value statistics must be applied for each independent storm phenomenon, for frontal depressions and thunderstorms. The exceedance probability of a reference wind speed is given as follows:

$$p(v > v_{ref}) = 1 - \prod_k p_k(v \le v_{ref}) \qquad (2.27)$$

In the above equation, p_k is the non-exceedance probability of one individual storm event k. It can be stated that the exceedance probability is obtained as a combination of k events. As already mentioned above, a specific statistical model is chosen for the extreme population, namely the Generalized Pareto distribution for the storm hours and the Poisson distribution for the number of storms per year. The storm climate is given here by 3 parameters:

- nanz - number of storms per year = 2.
- Vscale = 1.565655 [m/s].
- Vshape = 0 [m/s].
For this specific case, when Vshape takes the value 0, the Generalized Pareto distribution is identical to the exponential distribution. The cumulative probability function and its inverse are shown in 2.28 and 2.29.

$$F_x = 1 - \exp\!\Big(-\frac{x - m}{\sigma}\Big) \qquad (2.28)$$

$$x = m - \sigma\, \ln(1 - F_x) \qquad (2.29)$$

Equation 2.29 is used for generating the maximum wind speeds per storm, with m and σ being the mean value and standard deviation. The non-exceedance probability is considered to be 99.9 percent. Each storm is evaluated as 3 hours, simulated with intensities of 1.0, 0.97 and 0.93 of the storm maximum. The number of storm hours per year is computed as given in formula 2.30. The total number of basic-population wind hours per year is obtained by subtracting the number of storm hours from 2.30 from the total number of hours of the 365.25 days of the year.

$$N_{sth} = N_{storm} \cdot 3 \qquad (2.30)$$

Storms tend to occur in families; this means that the second or the third storm of one year is usually more severe than the strongest storm of years with a single storm. There are also years when no storm occurrence is reported. This is one of the reasons why the analysis is based on all extreme values.

2.4 Theoretical probability distributions

From the stochastic point of view, probability distributions are models with a specific shape and behaviour which characterise a specific set of random values or experiment outputs, reducing them to a few workable parameters. When trying to reproduce the true behaviour of natural or fictive processes using simulations, one needs a probabilistic model which allows the measurement data to be described in a scientific way, by transforming random variables into some representative parameters.
A fundamental property of distributions is that they can be univariate or multivariate, which means they describe either a one-parameter set or a multi-parameter set. The latter is of great help when describing events which are constrained by several factors.
The literature shows that one should distinguish between continuous and discrete (random) distributions, the first being able to characterise long and random sets of data, including intervals and floating-point values. The second is restricted by the limited number of possible outputs; more precisely, the output is expected to belong to a certain expected set of values.


The most notorious discrete distributions are the Bernoulli distribution, e.g. when trying to obtain the probability of getting heads when throwing a coin 10 times, and the Poisson probability distribution, which is used for rare events and a great number of repetitions, e.g. how big is the probability of having three years without fog.
Among the most famous continuous distributions are the rectangular (uniform) distribution, the triangular, the Beta, the normal distribution, also known as the Gaussian distribution (the most used in mathematics, e.g. for a cow which gives between 0 and 5 gallons of milk per day, estimating the amount of milk for a specific day), the log-normal, the exponential, the Weibull, the Pareto (e.g. the number of storm hours per storm) and the extreme value distributions. The 3 extreme value distributions are the Type I Gumbel, Type II Frechet and Type III reverse Weibull distribution.

2.4.1 Poisson distribution

The Poisson distribution is a remarkable probability model in statistics and probability theory. It is usually used to describe random events within a certain time or space limit, events which usually have a small probability of occurrence. It is a discrete distribution with high applicability.
A couple of relevant examples are computing the chance of meeting a bear when walking in the forest, the chance of meeting a fox when driving near a hilly area, or the number of cars arriving at a specific traffic light.
In other words, this probability distribution is given by the probability density function f(x) as seen in equation 2.31:

$$f(x) = \frac{\lambda^x\, e^{-\lambda}}{x!} \qquad (2.31)$$

The cumulative probability distribution F(x) has the formula given in equation 2.32:

$$F(x) = \sum_{k \le x} \frac{\lambda^k\, e^{-\lambda}}{k!} \qquad (2.32)$$

where
- the mean value of the series m = λ
- the variance σ² = λ
- the skewness γ_1 = λ^(-1/2)
- the kurtosis γ_2 = λ^(-1)
- e is Euler's number
- k is the number of occurrences of the event (the argument of the probability mass function), k = 0, 1, 2, ...


Properties
The expected value of such a distribution is equal to λ, and the same applies to its variance. The higher moments of the Poisson distribution are Touchard polynomials in λ. A special case is when the expected value is equal to 1; in this particular case Dobinski's formula characterises the m-th moment best, which is equal to the number of partitions of a set of size m. The mode of a Poisson distribution with a non-integer λ is the largest integer less than or equal to λ. With λ as the expected value of the Poisson distribution, the moment generating function is given by:

$$E\big(e^{tX}\big) = \sum_{k=0}^{\infty} e^{tk}\, f(k; \lambda) = \sum_{k=0}^{\infty} e^{tk}\, \frac{\lambda^k e^{-\lambda}}{k!} = e^{\lambda (e^t - 1)} \qquad (2.33)$$

The Poisson distributions are infinitely divisible probability distributions [11].

Figure 2.12: Poisson distributions for different λ values [11]


In figure 2.12 the graphical representation of the Poisson distribution is shown. It can be noticed that when the rate of occurrence is small, the most likely outcomes lie near zero. When the rate is higher, say 5 or more, the mid-point of the curve moves to the right and the occurrence of 0 becomes increasingly unlikely. In the current work, the Poisson distribution is needed to compute the number of storms per year. For a large variation spectrum and a more accurate simulation, closer to reality, the number of simulated years has been varied up to 500.
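A short sketch of equation 2.31 for the storm climate used here (nanz = 2 storms per year) is given below; it simply tabulates the probability of observing 0 to 5 storms in one year:

```python
import math

def poisson_pmf(x, lam):
    """Probability of exactly x events for a rate lam, Eq. (2.31)."""
    return lam ** x * math.exp(-lam) / math.factorial(x)

for x in range(6):                       # probability of 0, 1, ..., 5 storms per year
    print(x, round(poisson_pmf(x, 2.0), 4))
```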

2.4.2 Normal distribution

The normal distribution, also known as the Gaussian distribution, is one of the most often used statistical probability distributions, with a large applicability in nature. It is used when dealing with real-valued random variables with unknown distributions, describing the probability that real data sets fall between two real-valued limits. Furthermore, it owes much of its applicability to the central limit theorem, which states that for randomly generated independent sets of variables, the means of the observed variables tend to be normally distributed and to converge to a common value. It is given by the density function in the general formula:

$$f(x;\, m, \sigma) = \frac{1}{\sigma\sqrt{2\pi}} \exp\!\Big(-\frac{1}{2}\Big(\frac{x - m}{\sigma}\Big)^2\Big) \qquad (2.34)$$

In equation 2.34, m is the mean value, σ² is the variance and the term 1/(σ√(2π)) ensures that the total area under the curve equals 1. The normal distribution is used as a scale for skewness and kurtosis. In this thesis the standard normal distribution will be used; the standard form is characterized by a mean value of 0 and a variance of 1.
$$f(x) = \frac{1}{\sqrt{2\pi}} \exp\!\Big(-\frac{1}{2}\, x_{red}^2\Big) \qquad (2.35)$$

$$F(x) = \int_{-\infty}^{x} \frac{1}{\sigma\sqrt{2\pi}} \exp\!\Big(-\frac{1}{2}\Big(\frac{u - m}{\sigma}\Big)^2\Big)\, du \qquad (2.36)$$

$$x_{red} = \frac{x - m}{\sigma} \qquad (2.37)$$

The standard normal distribution is described by the density function given in formula 2.35 and by the cumulative probability distribution shown in 2.36. It must be mentioned that the cumulative function does not exist in closed form because of its integration limits; therefore the substitution variable u is introduced in order to be able to write the integral. For table plotting and for the graphical representation on probability paper, the reduced variate given in equation 2.37 is used.
Some of the most important properties of the normal distribution are:

- It is symmetric with regard to the value x = m, which is at the same time median and mode.
- The density function has 2 inflection points, situated at x = m - σ and x = m + σ.
- It is logarithmically concave and differentiable.
- Of the total amount of samples, 68 percent fall within 1σ, 95 percent within 2σ and 99.7 percent within 3σ of the mean.

Figure 2.13: Standardized normal distribution: (a) f(x), (b) F(x)


The shapes of the probability density and of the cumulative probability distribution of the normal distribution are given in figure 2.13. In the current work it is applied on the resistance side, especially for the S-N curves, and to check whether some distribution terms, e.g. k and x_0 from the Weibull distribution, fit its pattern.


2.4.3 Weibull distribution

The Weibull distribution is a continuous probability distribution which is mostly used in reliability and lifetime data analysis due to its versatility and simplicity. It can describe events like the hourly mean wind speeds during a year or during a decade. There are many possible shapes of the Weibull distribution, since it depends on 2 parameters, hence its wide applicability. For specific parameter values, one can obtain other distributions such as the Gaussian distribution or the exponential distribution. It is described by the density function in equation 2.38 for x ≥ 0; for x < 0 the density function equals 0.

$$f(x) = \frac{k}{x_0} \Big(\frac{x}{x_0}\Big)^{k-1} \exp\!\Big(-\Big(\frac{x}{x_0}\Big)^{k}\Big) \qquad (2.38)$$

The cumulative probability distribution is best described by equation 2.39:

$$F(x) = 1 - \exp\!\Big(-\Big(\frac{x}{x_0}\Big)^{k}\Big) \qquad (2.39)$$

In formula 2.39, k is the shape parameter and x_0 is the scale parameter. For the above mentioned equalities, both the shape and the scale parameter have to be > 0. Technically speaking, being a 2-parameter model, it can be seen as an interpolation between the Rayleigh distribution, for k = 2, and the exponential distribution, for k = 1. A small classification can be made with respect to the k value, as follows:
- k < 1 indicates that the failure rate decreases over time.
- k = 1 indicates that the failure rate remains constant over time. This characteristic highlights that failure occurs due to external random events.
- k > 1 indicates that the failure rate increases over time. This feature is most commonly met in ageing processes.

Statistical parameters description

The mean value has the general formula 2.40 and the variance can be expressed as in equation 2.41:

$$m = x_0\, \Gamma\!\Big(1 + \frac{1}{k}\Big) \qquad (2.40)$$

$$\sigma^2 = x_0^2 \Big(\Gamma\!\Big(1 + \frac{2}{k}\Big) - \Gamma^2\!\Big(1 + \frac{1}{k}\Big)\Big) \qquad (2.41)$$

The skewness is given by equation 2.42:

$$\gamma_1 = \frac{\Gamma\!\big(1 + \frac{3}{k}\big)\, x_0^3 - 3\, m\, \sigma^2 - m^3}{\sigma^3} \qquad (2.42)$$
3
where the standard deviation is denoted by σ and Γ is the Gamma function, mostly known as an extension of the factorial function. In figure 2.14, λ = x_0; it can be noticed that for k between 0 and 1 the value of the probability density function tends to infinity as x approaches zero. For the value k = 1, the function tends to 1/λ. For k > 1, the density is zero at x = 0, then increases to a maximum and decreases again.
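The statistical parameters 2.40-2.42 can be evaluated directly with the Gamma function; the following sketch uses assumed example parameters (x0 = 7, k = 2, i.e. a Rayleigh-type wind speed model), not fitted values from the thesis:

```python
from math import gamma

def weibull_moments(x0, k):
    """Mean, variance and skewness of the Weibull distribution, Eqs. (2.40)-(2.42)."""
    m = x0 * gamma(1.0 + 1.0 / k)                                       # Eq. (2.40)
    var = x0**2 * (gamma(1.0 + 2.0 / k) - gamma(1.0 + 1.0 / k)**2)      # Eq. (2.41)
    sigma = var**0.5
    g1 = (gamma(1.0 + 3.0 / k) * x0**3 - 3.0 * m * var - m**3) / sigma**3   # Eq. (2.42)
    return m, var, g1

print(weibull_moments(x0=7.0, k=2.0))   # mean ~6.2, variance ~10.5, skewness ~0.63
```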

Figure 2.14: Weibull distributions for different k and x_0 values [11] (curves I-IV: λ = 1 with k = 0.5, 1.0, 1.5 and 5.0)

2.4.4 Generalized extreme value distribution

The generalized extreme value distribution is the generalized form of three different types of extreme value distributions. It depends on the location, shape and scale parameters and, according to their values, it can be classified into 3 different instances, as follows:
- Type I, the Gumbel distribution, with its cumulative probability distribution:

$$F(x) = \exp\!\Big[-\exp\!\Big(-\Big(\gamma + \frac{\pi\,(x - m)}{\sigma\sqrt{6}}\Big)\Big)\Big] \qquad (2.43)$$

- Type II, the Frechet distribution, which has the same shape as the reverse Weibull except that it takes values only for a negative value of the shape factor.
- Type III, the reverse Weibull distribution, which is described in the following lines:
$$F(x) = \exp\!\Big[-\Big(f_1 - f_2\,\frac{x - m}{\sigma}\Big)^{1/\xi}\Big] \qquad (2.44)$$

In the above equation, f_1 and f_2 are the free terms given in formula 2.45, with Γ being the Gamma function and ξ the shape factor:

$$f_1 = \Gamma(1 + \xi), \qquad f_2 = \sqrt{\Gamma(1 + 2\xi) - f_1^2} \qquad (2.45)$$

Any independent set of consistent data of identically distributed random numbers with a maximum number of samples k can be fitted using a generalized extreme value distribution. The generalized extreme value distribution is a specific distribution which, based on the theory of extreme values, combines all 3 possible extreme value distributions. Hence it is often used when working with extreme processes; its biggest applicability lies in modelling the maxima or minima of long, finite series of random values. The GEV is described by the cumulative probability distribution:

$$F(x) = \exp\!\big[-(1 - k y)^{1/k}\big], \qquad k \neq 0 \qquad (2.46)$$

$$F(x) = \exp\!\big[-\exp(-y)\big], \qquad k = 0 \qquad (2.47)$$


Figure 2.15: time history on a normal sheet


In the above mentioned equations, k is the shape factor which, depending on its specific value, determines the exact type of extreme value distribution. For k taking negative values the GEV is the Frechet distribution, for k equal to 0 the Gumbel distribution is obtained and for k larger than 0 the reverse Weibull is obtained. A more precise description can be seen in graph 2.15. One can observe that the Type I Gumbel distribution and the Type II have no upper boundary, while the Type III is bounded from above. Types I and III have no lower limit, while the Type II is bounded in the lower part. The reduced variate can be described using the location, m, and scale, σ, parameters as seen in 2.48.

$$x_{red} = \frac{x - m}{\sigma} \qquad (2.48)$$

2.4.5 Generalized Pareto distribution

The generalized Pareto distribution is a continuous distribution which is often used for modelling the tails of other distributions. Its main purpose is to provide a reasonable fit to the extremes of complex data sets. This specific distribution can take a multitude of possible shapes, such as the exponential and the Pareto distribution. For manipulating sets of exceedances, each one of the above mentioned distributions can be used.
The biggest advantage of the Pareto distribution is the fact that it lets the input data choose the most appropriate distribution. Some of the most famous practical examples for the GPD are met in finance, e.g. stock returns or exchange rate changes, or in telecommunications, e.g. file sizes and non-constant waiting times.
The cumulative distribution function is described in equation 2.49:

$$F(x) = 1 - \Big[1 - k\, \frac{x - x_s}{s}\Big]^{1/k} \qquad (2.49)$$

In the formula 2.49 xs is the limit value, s is the scale parameter and k is the shape
factor.

Depending on the shape parameter, the generalized Pareto distribution can be classified into three forms:
- k < 0 indicates an unbounded, polynomially decreasing tail of the distribution.
- k = 0 indicates that the tail decreases exponentially, and so an exponential distribution is obtained.
- k > 0 indicates that the distribution has a finite upper tail at x_max = x_s + s/k.
Between the mean value of the exceedances (mostly used as the mean value of x - x_s), the standard deviation, the scale and the shape parameter, the following relations hold, as can be noticed in equations 2.50 and 2.51:

$$s = 0.5\, m \Big(1 + \Big(\frac{m}{\sigma}\Big)^2\Big) \qquad (2.50)$$

$$k = 0.5 \Big(\Big(\frac{m}{\sigma}\Big)^2 - 1\Big) \qquad (2.51)$$

The General Pareto distribution will be used with the purpose of finding out the
number of storm hours for a specific storm climate.
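A minimal sketch of equations 2.49-2.51, with illustrative exceedance statistics chosen so that the shape parameter becomes 0 (the exponential case used for the storm climate), is given below:

```python
import math

def gpd_parameters(mean_exc, std_exc):
    """Scale s and shape k of the Generalized Pareto distribution from the mean and
    standard deviation of the exceedances, Eqs. (2.50)-(2.51)."""
    r2 = (mean_exc / std_exc) ** 2
    s = 0.5 * mean_exc * (1.0 + r2)        # Eq. (2.50)
    k = 0.5 * (r2 - 1.0)                   # Eq. (2.51)
    return s, k

def gpd_cdf(x, xs, s, k):
    """Cumulative probability of Eq. (2.49); k = 0 recovers the exponential case."""
    if abs(k) < 1e-12:
        return 1.0 - math.exp(-(x - xs) / s)
    return 1.0 - (1.0 - k * (x - xs) / s) ** (1.0 / k)

# illustrative numbers only: equal mean and standard deviation give k = 0 (exponential)
s, k = gpd_parameters(mean_exc=1.57, std_exc=1.57)
print(s, k, gpd_cdf(22.0, xs=20.0, s=s, k=k))
```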

2.5 Simulation techniques

2.5.1 Transformation method

Combinations of random variables along with complex expressions for the probability density functions or for the cumulative probability distributions can make a closed-form solution impossible; therefore, simulations are required.
Monte Carlo simulations are performed for solving complex integrals and for sampling random variables. The core of the simulations is the random number generator, which generates statistically independent random numbers in the [0,1] interval. These numbers are the basic input for the different simulation methods which will be presented. As random number generator, the Mersenne-Twister algorithm given by [Matsumoto and Nishimura, 1998] is used. Having high quality random numbers, results which simulate the normal, Poisson, Beta and Pareto distributions can be produced.
The transformation method is one of the most used simulation methods. It uses the inverse of the cumulative probability distribution; therefore, cumulative functions which cannot be inverted cannot be used with this method.
The procedure is shown in figure 2.16 and can be summarized as follows: given a probability density function f(x) with -∞ < x < ∞ and its corresponding cumulative probability distribution F(x), assume that there is a u generated uniformly in the (0,1) interval. If u takes any random value, excepting the end points, a unique x from the probability density f(x) can be computed using formulas 2.52 and 2.53.

$$u = F(x) \qquad (2.52)$$

$$x = F^{-1}(u) \qquad (2.53)$$

The algorithm is very helpful when the inverse function can be calculated by hand; a sketch for the Weibull model of the basic population is given below.
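The sketch below applies formulas 2.52-2.53 to the Weibull cumulative distribution 2.26 of the basic wind population; numpy's default generator stands in for the Mersenne-Twister generator mentioned above, and the Weibull parameters are assumed example values:

```python
import numpy as np

def sample_weibull(n, x0, k, seed=None):
    """Transformation method (Eqs. 2.52-2.53): invert F(x) = 1 - exp(-(x/x0)^k)."""
    rng = np.random.default_rng(seed)
    u = rng.random(n)                                  # uniform numbers in [0, 1)
    return x0 * (-np.log(1.0 - u)) ** (1.0 / k)        # x = F^{-1}(u)

speeds = sample_weibull(8766, x0=7.0, k=2.0)           # one year of hourly mean speeds
print(speeds.mean())                                   # close to x0 * Gamma(1 + 1/k)
```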

Figure 2.16: Transformation method for continuous and discrete distributions (x = F^{-1}(u))

2.5.2 Acceptance-rejection method

In the case of very complex or unknown cumulative probability distributions, the transformation method will not work, mostly because equation 2.53 would have no solution. Therefore, a second method is introduced, which states: assuming that for any value of x the density f(x) can be computed, and the shape of f(x) is known precisely enough to be contained inside a shape which is C times r(x), then C·r(x) ≥ f(x). Here, r(x) is a uniform distribution or a normalized sum of uniform distributions. The method's algorithm is illustrated in figure 2.17.

Figure 2.17: Acceptance-rejection method (envelope C·r(x) over the density f(x))
The algorithm starts with generating an x value according to r(x). Afterwards, f(x) and C·r(x) are computed so that one can check whether u·C·r(x) ≤ f(x), where u is a randomly generated number. If the inequality is satisfied, x is accepted; otherwise it is rejected.
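The following sketch applies the algorithm to an assumed example density f(x) = 6x(1-x) on [0, 1], whose maximum is 1.5, so that a uniform envelope with C = 1.5 satisfies C·r(x) ≥ f(x):

```python
import random

def accept_reject(f, c, n):
    """Acceptance-rejection sampling on [0, 1] with a uniform envelope r(x) = 1."""
    samples = []
    while len(samples) < n:
        x = random.random()              # candidate drawn from r(x)
        u = random.random()
        if u * c <= f(x):                # accept with probability f(x) / C
            samples.append(x)
    return samples

vals = accept_reject(lambda x: 6.0 * x * (1.0 - x), c=1.5, n=1000)
print(sum(vals) / len(vals))             # close to the true mean 0.5
```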

2.5.3 Box-Muller

The Box-Muller method is a procedure which, using random numbers between 0 and 1 as input, generates pairs of independent, standard normally distributed values. The algorithm was developed as a more efficient alternative to the transformation method. The theory presents the method as follows: given x and y, two uniformly distributed numbers with x ∈ (0,1] and y ∈ (0,1], then A and B are independent random variables, normally distributed with mean 0 and standard deviation 1, if:

$$A = R \cos(\theta) = \sqrt{-2 \ln x}\; \cos(2\pi y) \qquad (2.54)$$

$$B = R \sin(\theta) = \sqrt{-2 \ln x}\; \sin(2\pi y) \qquad (2.55)$$

The general formulation has its difficulties when implemented and can have numerical stability problems when x takes values near 0. Therefore, the polar form of the Box-Muller method proposed by J. Bell and R. Knop is computationally more convenient.

Figure 2.18: Box-Muller method (polar form, points (u, v) inside the unit circle)


Given two independent and uniformly distributed numbers u and v in [-1, +1], the radius is computed using:

$$R^2 = u^2 + v^2 = s$$

In the cases when s = 0 or s ≥ 1, a new pair (u, v) is chosen. Having u and v uniformly distributed and interpreting them as points inside the unit circle implies that the s values are also uniformly distributed between 0 and 1. As can be seen in figure 2.18, using the values of A and B given above, the following substitutions can be made: s plays the role of x, θ that of 2πy, cos θ = u/R = u/√s and sin θ = v/R = v/√s. Starting from the basic formulation and using the above mentioned substitutions, the final equations of the polar approach, namely the standard normal deviates, are obtained as shown in formulas 2.56 and 2.57.

$$a = u\, \sqrt{\frac{-2 \ln(s)}{s}} \qquad (2.56)$$

$$b = v\, \sqrt{\frac{-2 \ln(s)}{s}} \qquad (2.57)$$
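A compact sketch of the polar form, rejecting points outside the unit circle and returning one pair of standard normal deviates per accepted point, is given below:

```python
import math, random

def polar_box_muller():
    """Polar form of the Box-Muller transform (Eqs. 2.56-2.57):
    returns a pair of independent standard normal deviates."""
    while True:
        u = random.uniform(-1.0, 1.0)
        v = random.uniform(-1.0, 1.0)
        s = u * u + v * v
        if 0.0 < s < 1.0:                          # reject points outside the unit circle
            factor = math.sqrt(-2.0 * math.log(s) / s)
            return u * factor, v * factor

print(polar_box_muller())
```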

2.5.4 Allocation method

The allocation method allocates a specific range of non-exceedance probabilities to a target event. Using the Poisson distribution with its probability density function as given in formula 2.31 and the mean value of the sample λ = 2, the number of storms per year is generated.
To each storm, 3 hours are allocated and a maximum hourly wind speed is generated using the generalized Pareto distribution. Each hour of the storm has a different intensity, namely 100 percent for the first storm hour, 97 percent for the second and 93 percent for the last. In total, per year, the number of storm hours equals 3·N_storms.
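A minimal simulation sketch of this allocation, combining the Poisson number of storms with the exponential limit of equation 2.29, is given below; the location of the storm speeds (v_loc) is an assumed value introduced only for the example:

```python
import numpy as np

def simulate_storm_hours(n_years, nanz=2.0, v_scale=1.565655, v_loc=20.0, seed=0):
    """Allocate storm hours per simulated year: the number of storms follows a Poisson
    distribution with mean nanz; each storm gets 3 hours with intensities 1.00, 0.97
    and 0.93 of its maximum hourly speed (Eqs. 2.29-2.30). v_loc is an assumption."""
    rng = np.random.default_rng(seed)
    intensities = np.array([1.00, 0.97, 0.93])
    years = []
    for _ in range(n_years):
        hours = []
        for _ in range(rng.poisson(nanz)):
            u = rng.uniform(0.0, 0.999)                  # non-exceedance limited to 99.9 %
            v_max = v_loc - v_scale * np.log(1.0 - u)    # Eq. (2.29), exponential case
            hours.extend(v_max * intensities)
        years.append(hours)
    return years

storms = simulate_storm_hours(500)
print(sum(len(h) for h in storms) / 500.0)   # about 3 * nanz = 6 storm hours per year
```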

2.5.5 Correlated random variables

Correlation can be defined as a statistical dependency between two random sets X and Y. The dependency can be represented as any statistical relationship between the sets. As a measure of the correlation, one of the most used approaches is Pearson's correlation coefficient, which is limited to the [-1, 1] interval.
The value +1 represents a perfectly linear, increasing relationship, while -1 represents a perfectly linear, decreasing (inverse) relationship between the X and Y sets. In the specific case when 0 is obtained, a perfectly uncorrelated relation characterizes the two sets; the positive/negative scatter can be seen in figure 2.19, and one can say that X and Y are independent. As the Pearson coefficient tends towards the margins of the interval, a stronger correlation is noticed.
The general Pearson correlation is given in 2.58, where σ²_xy, also known as the covariance, is given by formula 2.59.

$$\rho_{xy} = \frac{\sigma_{xy}^2}{\sigma_x\, \sigma_y} \qquad (2.58)$$

$$\sigma_{xy}^2 = \frac{1}{N-1} \sum_{i=1}^{N} (x_i - m_x)(y_i - m_y) \qquad (2.59)$$

In the general Pearson formulation, N is the number of samples of each set, m_x and m_y are the mean values of the measurements and σ_x, σ_y are the standard deviations. Among the most important properties of the correlation, the symmetry can be counted: ρ_xy = ρ_yx.
For the Pearson coefficient to be computed and to take proper values, the standard deviations of the sets must be finite and different from 0. Otherwise, values greater than 1 could be recorded, which would contradict the initial assumptions.
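A short sketch of equations 2.58-2.59 on two artificially correlated samples is given below:

```python
import numpy as np

def pearson(a, b):
    """Pearson correlation coefficient following Eqs. (2.58) and (2.59)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    N = a.size
    cov = ((a - a.mean()) * (b - b.mean())).sum() / (N - 1)   # Eq. (2.59)
    return cov / (a.std(ddof=1) * b.std(ddof=1))              # Eq. (2.58)

x = np.random.default_rng(1).normal(size=500)
y = 0.8 * x + 0.6 * np.random.default_rng(2).normal(size=500)
print(pearson(x, y))        # strongly positive, close to 0.8
```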


Figure 2.19: Correlation coefficient (positive, negative and zero correlation scatter plots)

2.6 Damage accumulation

2.6.1 Introduction to fatigue analysis

Fatigue is defined as the process of losing the material's properties (weakening of the material) due to repeatedly applied forces. It is a progressive structural change due to fluctuations of the stresses and strains at some specific points inside the element, which can culminate in cracks after a sufficiently large number of repetitions. If the elastic limit of the material is not reached (the maximum stress of the given loads is smaller than the ultimate limit stress), the material returns to its initial shape. The process can be repeated many times in the elastic range, but for a huge number of repetitions of the same loading/unloading process the yielding point changes. In such cases, the crack or the rupture can occur at significantly lower stress values than the breaking strength. Every material is subjected to fatigue due to repeated cyclic loading in time (e.g. loading and unloading).
Some of the most important characteristics leading to fatigue are summarized here. Microscopic discontinuities and design features which cause stress concentrations are the locations where fatigue starts. Furthermore, fatigue is a stochastic process (it has a certain degree of randomness). It is more often reported to be associated with tensile stresses than with compressive ones. Additionally, damage is a cumulative process when related to fatigue: the materials do not recover completely when resting. Fatigue is also influenced by many factors: corrosion, oxidation, the presence of sulphur or potassium in the air, residual stresses, the presence of chemicals in the atmosphere, and temperature conditions. One of the most important characteristics is the fact that in metal alloys the process starts with some dislocation movements which later transform into the core of the cracks.
Almost every type of material is subjected to fatigue and therefore it must be taken into consideration when designing a structure (buildings, machines, the aerospace industry), especially when estimating the lifetime of a sample. It has to be noted that fatigue is a stochastic process and the related calculations are not 100 percent accurate.

Figure 2.20: Basic fatigue design components (material properties, stress analysis, cumulative damage, external factors)


As shown in figure 2.20, fatigue is closely related to the material properties, the stress analysis and the external factors, which all contribute to the accumulating damage of the structure.
Material properties. From the durability requirements it is of essential importance to know the relation between stress, strain and fatigue for the material taken into the computation/analysis. Fatigue is known to depend on the stresses and strains at the critical points of the structure, such as corners and notches.
Stress analysis. Every element of a structure is made of a specific material and is subjected to stress analysis, through which each member is checked at the ultimate limit state. The boundary conditions, the initial conditions and the shape of the component are the input parameters for the stress analysis, and depending on the loading cases the structural responses are obtained.
External factors. External factors, as previously mentioned, can influence the structure directly and indirectly: directly through eventual direct loads which may act on the structure, and indirectly by contributing over time to the fatigue process. Fatigue is one of the most dominant failure modes and, despite the improvement of the analysis techniques and the research efforts in this field, it remains very hard to predict.

2.6.2 Deterministic approach - Eurocode

Depending on the number of cycles experienced by a structure, fatigue can be classified into low-cycle fatigue and high-cycle fatigue, each with its own set of characteristics. Low-cycle fatigue is characterized by a number of repetitions smaller than $10^4$ and is described by the Coffin-Manson equation, shown in Equation 2.60:

$$\frac{\Delta\varepsilon_p}{2} = \varepsilon_f' \,(2N)^c \qquad (2.60)$$

with
$N$ - number of cycles to failure
$\varepsilon_f'$ - fatigue ductility coefficient
$c$ - fatigue ductility exponent (between -0.5 and -0.7 for metals)
$\Delta\varepsilon_p / 2$ - the plastic strain amplitude
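As a minimal numerical illustration of Equation 2.60, the following Python sketch evaluates the Coffin-Manson relation and inverts it to estimate the number of cycles to failure. The coefficient values are assumed, generic examples and are not taken from the thesis or from a specific standard.

```python
def coffin_manson_strain(n_cycles, eps_f=0.35, c=-0.6):
    """Plastic strain amplitude predicted by Eq. 2.60 for n_cycles cycles (coefficients assumed)."""
    return eps_f * (2.0 * n_cycles) ** c

def coffin_manson_cycles(strain_amplitude, eps_f=0.35, c=-0.6):
    """Invert Eq. 2.60: number of cycles to failure for a given plastic strain amplitude."""
    return 0.5 * (strain_amplitude / eps_f) ** (1.0 / c)

# Illustrative check: cycles to failure at 1 % plastic strain amplitude
print(coffin_manson_cycles(0.01))
```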
High-cycle fatigue is characterized by a number of repetitions greater than $10^4$ and by the material behaviour, which in the first phase remains in the elastic domain. For predicting the lifetime of a sample the S-N curves (stress vs. number of cycles curves) were introduced, which are also known as Wöhler curves or Wöhler lines.
In order to determine the lifetime of a structure or element, it is required to identify the number of complete loading cycles which a sample can sustain before losing its structural characteristics. The best predictions are obtained using cycle counting methods which pair the local minima and maxima. One of the most accurate methods, widely used over the last three to four decades, is the Rainflow counting method; the cycle counts obtained with this algorithm are then used together with the Miner's rule.
The Miner's rule is based on the work of A. Palmgren and states: for $k$ different stress magnitudes from a spectrum $S_i$ ($1 \le i \le k$), each contributing $n_i$ cycles, and with $N_i$ the number of cycles to failure under the constant stress $S_i$, failure takes place when the sum $C$ in Equation 2.61 lies between 0.7 and 2.2:

$$\sum_{i=1}^{k} \frac{n_i}{N_i} = C \qquad (2.61)$$
For design, $C$ is always taken equal to 1. The rule is useful in many circumstances but has some limitations. The first limitation is that it fails to recognize the probabilistic nature of fatigue; in particular, the sequence effect is not considered. The second limitation applies to specific cases, where cycles of low stress followed by high stress are not well predicted by the rule. Furthermore, it assumes that the rate of damage accumulation does not depend on the stress level. Despite these limitations, the Miner's rule is the most widely used damage accumulation model for fatigue failure.
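The damage sum of Equation 2.61 is straightforward to evaluate numerically. The following Python sketch computes the Palmgren-Miner damage for a hypothetical load spectrum; the cycle counts and allowable cycles are illustrative assumptions only.

```python
def miner_damage(n_cycles, n_failure):
    """Palmgren-Miner damage sum D = sum(n_i / N_i) over all stress levels."""
    return sum(n / N for n, N in zip(n_cycles, n_failure))

# Hypothetical load spectrum: counted cycles n_i and allowable cycles N_i per stress level
n_i = [2.0e4, 5.0e3, 1.0e2]     # cycles experienced at each stress amplitude
N_i = [1.0e6, 2.0e5, 5.0e3]     # cycles to failure at the same amplitudes (from an S-N curve)

D = miner_damage(n_i, N_i)
print(D, "-> failure expected" if D >= 1.0 else "-> no failure expected")  # design check with C = 1
```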
2.6.3 Random S-N curves

The modern study of fatigue is based on the work of the German railway engineer August Wöhler, who developed his studies in the mid-19th century. His research focused on the failure of railway axles after different service periods at relatively small loads. As a conclusion of his work he stated that the cyclic load amplitude is much more important than the peak load. Following his initial studies, and after his work was continued by others, the S-N curves were adopted and developed for different materials and different numbers of loading cycles.
The S-N curves, also known as Wöhler lines, are diagrams which represent, for a specific material, its behaviour as a function of the number of loading cycles and the amplitude magnitude. They are nevertheless an empirical means of quantifying the fatigue process and designing against it [12].
One of the most important purposes of these diagrams is that they help to establish the number of loading cycles (N) which a member can bear until collapse under a cyclic load (S) much smaller than the limit stress, as can be seen in Figure 2.21. The abscissa is often plotted logarithmically, because usually a great number of repetitions is required until damage is recorded. For ferrous materials like steel an endurance limit exists, which separates the damage-accumulating region: below this limit, failure does not occur.

Figure 2.21: SN curves for steel and aluminium [12]


This property does not apply to aluminium, a material which accumulates damage with every cycle.
One of the purposes of this work is to obtain the accumulated fatigue damage by simulating a 50-year wind and storm scenario for normally designed and under-designed structures. In order to carry out these steps, the Wöhler lines of the material are needed, so that the damage accumulation can be computed.
The S-N curves are closely connected to the probability of failure (pf), which is also one of the outputs after a specific number of cycles at a given amplitude level. Being a probability, it follows a particular type of distribution, namely the Weibull probability distribution.
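To illustrate how an S-N (Wöhler) line can be evaluated in practice, the sketch below implements a single-slope line in log-log coordinates of the common form N * Δσ^m = const. The reference stress range, slope and reference cycle number are assumed example values and do not correspond to a particular detail category or material curve used in this work.

```python
def cycles_to_failure(delta_sigma, delta_sigma_c=71.0, m=3.0, n_c=2.0e6):
    """Allowable cycles N on a single-slope S-N (Woehler) line.

    Uses the common form N * delta_sigma**m = n_c * delta_sigma_c**m,
    i.e. a straight line in log-log coordinates anchored at the reference
    point (n_c, delta_sigma_c). All parameter values here are illustrative.
    """
    return n_c * (delta_sigma_c / delta_sigma) ** m

# Example: allowable cycles for a few stress ranges [N/mm^2]
for ds in (40.0, 71.0, 120.0):
    print(f"delta_sigma = {ds:6.1f}  ->  N = {cycles_to_failure(ds):.3e}")
```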

2.7 Rainflow counting method

The usefulness of a counting method must be judged by how much it reduces the data volume without creating false strength quantities. Counting methods have been introduced for analysing and counting the loading cycles which lead to fatigue damage. All counting methods have the following features in common [8]:
Decomposition of the recorded stress time histories into the connected reversal points.
Definition of the elementary event which is to be recognised and counted for the later damage rating.
Determination of the event parameters: class limit exceedances, peak values or stress values of the turning points, the range between two points (i.e. the stress difference between the two points) together with the mean stress value, and the closing cycles or stress loops.
Formulation of an algorithm which defines the events through stress recognition and keeps the values of the parameters and the order of the procedure. Except for a few counting methods that lead to unsatisfactory life predictions, the two-parameter methods register and reflect the cyclic stress-strain behaviour of the materials [8].
Some of the most representative methods are the peak counting method, the level crossing counting method and the Rainflow counting method. The latter was first introduced in 1968 by Tatsuo Endo and M. Matsuishi for counting the half cycles and complete cycles of strain time signals. The method is based on Endo's analogy between rain drops falling from a pagoda roof (pagoda: a Japanese building with many eaves) and the generation of hysteresis loops; the analogy is made between the strain history pattern and the roof. The Rainflow counting method is used for analysing fatigue data so that the varying stress spectrum is reduced to a set of simpler stress reversals. A remarkable property of the method is that it allows the computation of fatigue cycles from a given time history.
The basis of the counting methods is the local stress-strain path. The path is simulated assuming a material behaviour model which identifies the following important elements. The Masing behaviour of a material is characterized by the shape of the nested hysteresis loops, which corresponds to the initial stress-strain curve scaled by a factor of two. The second remarkable aspect is the material memory: the material shows a memory capacity which can be classified into three types [8].
Memory type 1 acts when, after closing a hysteresis loop that started from the initial loading curve, the stress-strain path subsequently follows the initial loading curve. Memory type 2 starts when, after closing a hysteresis loop that started from a loop arm, the stress-strain path follows the initial arm of that loop. Memory type 3 manifests when the hysteresis loop arm that started from the initial loading curve ends as soon as the stress/strain of its starting point in the opposite quadrant is reached. These model descriptions of the cyclic behaviour characterize metallic materials like steel best. They are described for a bilinear stress-strain law following the kinematic hardening assumption of Prager [8].
Rainflow Algorithm
The counting method can be used not only for counting cycles of stresses and strains, but also for counting bending moments and deformations of the elements. The Rainflow counting method (RCM) recognises the turning points of the stress path, counts the closed paths and stores the unfinished paths in a residuum. The closed loops are stored in a matrix which is characterized by a class differentiation of the cycle width and height. Furthermore, the method returns the sequence of turning points that remains after all uncompleted loops have been removed and stored in the residuum. The residuum is composed of two parts: the first one is made up of sections of the initial loading curve and non-closing hysteresis loop parts; the second part contains loops which could close but have not closed because the path ends. The classes of the counting matrix are chosen in such a way that they reflect the properties of each cycle in the best manner.
Algorithm steps and practical example
Given a time history of amplitudes, the algorithm first reduces the time history to an alternating sequence of peaks (tensile) and valleys (compressive). The algorithm itself can be summarized as follows (a code sketch of the procedure is given after the list):
1. The time history is imagined as a pagoda, drawn on a sheet of paper.
2. The sheet is turned by 90°, with the earliest time step at the top.
3. The procedure considers each peak as a source of water. From each peak the water flows down, following exactly the so-called pagoda roof.
4. Each half cycle is counted, taking into account the following flow terminations:
a) the flow reaches the end of the time history;
b) the flow from a specific peak meets another flow whose source is an earlier peak;
c) the flow meets an opposite flow of greater magnitude.
5. The above step is carried out twice: once for the tensile peaks and once for the compressive valleys. Usually, for the tensile peaks the water flows from right to left, and for the valleys the water flows from left to right.
6. After each half cycle is determined, its magnitude is recorded. It is usually computed as the stress difference between start and termination.
7. A table is compiled in which the path and magnitude of each half cycle are noted. Half cycles with the same magnitude and opposite sense are paired up and represent a complete cycle.
8. In the last step the number of complete cycles is counted, as well as the remaining half cycles; the latter are called residuals.
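The steps above can be reproduced programmatically. The sketch below is a minimal Python implementation of the widely used three-point (ASTM E1049-style) formulation of rainflow counting, which pairs stress ranges instead of literally tracing water paths; it yields essentially the same cycle counts, although the bookkeeping of unclosed half cycles near the ends of the record may differ slightly from the pagoda description above.

```python
def reversals(series):
    """Reduce a signal to its sequence of turning points (peaks and valleys)."""
    pts = [series[0]]
    for x in series[1:]:
        if x == pts[-1]:
            continue                              # ignore repeated values
        if len(pts) >= 2 and (pts[-1] - pts[-2]) * (x - pts[-1]) > 0:
            pts[-1] = x                           # same direction: extend the current excursion
        else:
            pts.append(x)                         # direction change: new turning point
    return pts


def rainflow(series):
    """Three-point rainflow counting; returns a list of (stress range, count)."""
    stack, counts = [], []
    for point in reversals(series):
        stack.append(point)
        while len(stack) >= 3:
            x = abs(stack[-1] - stack[-2])        # most recent range
            y = abs(stack[-2] - stack[-3])        # range preceding it
            if x < y:
                break                             # inner range not yet closed
            if len(stack) == 3:
                counts.append((y, 0.5))           # range contains the starting point: half cycle
                stack.pop(0)
            else:
                counts.append((y, 1.0))           # closed hysteresis loop: full cycle
                del stack[-3:-1]                  # remove the two points forming range y
    for a, b in zip(stack, stack[1:]):            # residuum: remaining half cycles
        counts.append((abs(b - a), 0.5))
    return counts


# Hypothetical stress history (illustrative only)
history = [0, 5, -3, 4, -2, 6, -4, 2, 0]
for rng, cnt in rainflow(history):
    print(f"range {rng}, {cnt} cycle(s)")
```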
PRACTICAL EXAMPLE
Given the time series in Figure 2.22, the procedure is explained for a better understanding of the method. The paper is turned by 90 degrees, with the earliest time step at the top, as can be seen in Figure 2.23. Next, the water starts dropping, following the exact shape of the pagoda, as indicated in Figure 2.23.
After an overall look at the paths created, the results are summarized in Table 2.3 and sorted by their paths, cycle counts and amplitudes. A final count of the stress cycles is then carried out, arranging them by the stress magnitudes of the cycles, as Table 2.4 highlights. As a conclusion of the given example, one can easily see that the total number of closed cycles is 2, while the remaining half cycles, or uncompleted paths, are stored in the residuals.


Figure 2.22: Time history of the stress signal drawn on a normal sheet (stress vs. time)

Figure 2.23: Rainflow procedure: (a) sketch turned by 90 degrees, (b) water drops following the pagoda roof (stress vs. time, turning points O, A, ..., G, H, I)


Rainflow cycles by paths

Path   Cycles   Stress amplitude
O-A    0.5      3
A-D    0.5      9
B-C    1        7
D-E    0.5      5
E-H    0.5      7
F-G    1        8
H-I    0.5      13

Table 2.3: Rainflow cycle counting and stress analysis

Total cycles counting

Stress amplitude   Total cycles   Paths
13                 0.5            H-I
9                  0.5            A-D
8                  1              F-G
7                  1.5            B-C, E-H
5                  0.5            D-E
3                  0.5            O-A

Table 2.4: Rainflow complete cycles final counting
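For completeness, a tabulation such as Table 2.4 can also be produced programmatically by accumulating the output of the rainflow() sketch given after the algorithm steps. The series below is purely hypothetical and is not the sequence of Figure 2.22, so the printed ranges and counts differ from Tables 2.3 and 2.4.

```python
from collections import defaultdict

history = [-2, 1, -3, 5, -1, 3, -4, 4, -2]   # hypothetical turning-point sequence, not Figure 2.22

totals = defaultdict(float)
for rng, cnt in rainflow(history):           # rainflow() from the earlier sketch
    totals[rng] += cnt                       # accumulate half and full cycles per stress range

for rng in sorted(totals, reverse=True):     # layout analogous to Table 2.4
    print(f"stress range {rng:5.1f} : {totals[rng]:.1f} cycles")
```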

