structure
Vlad Leontie,
Ruhr University Bochum,
EKIB Department,
M.Sc. Computational Engineering
11.15.2014
Abstract
How to evaluate fatigue damage for optimally designed structures and for under-designed structures.
Chapter 1
Introduction
Chapter 2
Theoretical basis
2.1
2.1.1
Wind tunnel experiments have been performed in the wind tunnel of the Department of Civil and Environmental Engineering Sciences at the Ruhr University Bochum. The physical model used for the experiments was a simple block-shaped building (cuboid) with no openings or additional elements, as can be seen in Figure 2.1. The dimensions of the cuboid h/d/l follow the edge ratio 0.6/1.0/2.0, corresponding to 94/156/312 [mm].
Figure 2.1: Analysed model with its specific parameters, wind tunnel flow directions
The surface pressures due to wind action are monitored inside the tunnel at 4 different cross-sections, positioned at 7.8, 39, 78 and 156 [mm] with respect to the front edge, the latter corresponding to the position of the centre bay. Each section is equipped with 23 taps, which are connected to pressure sensors using 300 mm long tubes. The arrangement of the taps and their specific locations is shown in Figure 2.2.
Inside the tunnel, pressure fluctuations occur, which can affect the accuracy of the analysis.
The fluctuations inside the channel can be classified as:
[Figure 2.2: Arrangement of the pressure taps on the 156 [mm] x 94 [mm] cross-section, tap spacings in [mm]]
2.1.2
The purpose of the thesis is to evaluate fatigue damage due to wind-induced loads for well-designed structures and for under-designed structures. Load cycles must be computed in order to quantify the damage accumulated by the structure. The surface pressures and the structural responses of the trapezoidal sheeting elements are also a focus of the study. From the sheeting elements' point of view, two approaches have been considered:
Figure 2.3: Sketch of the RUB wind tunnel, positioning of the motor and fan
2.1.3
Structural responses
Influence lines
The lines of influence are defined for a given action, such as axial force, shear force or bending moment, and are graphs which illustrate the variation of that function at a given point on a structure due to the application of a unit load on the element. The influence lines vary and depend on the type of action considered: axial, bending or shear. They are usually generated by applying the unit load independently at several points of a structure and determining the value of the function caused by this load at the other points (all points except the point where the load was applied). The load is repeated at all the required points and the values calculated at each point are connected together to generate the influence line. [5] The lines can be used for concentrated or for distributed loads. For concentrated loads, a force of unit value is moved along the element at specific points with relatively dense repositioning. The spacing must be very dense so that the exact graph of the function can be plotted. For distributed loads, the procedure is repeated, but instead of a single concentrated load, a distributed load of unit value and specific width is positioned at different stations along the element. Both types, concentrated and distributed loads, follow the same pattern; the effect of a distributed load can be obtained from the concentrated-load case by integrating the influence line of the concentrated force over the loaded length. The influence lines play a very important role when designing the elements, as they show the maximum expected responses of the structural members and ensure that a specific member will not fail under given circumstances. They help in
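The construction described above can be sketched numerically. The following is a minimal illustration, not the thesis model: a simply supported beam of assumed length L, with the bending moment monitored at an assumed section c while a unit load moves along the span.

```python
# Influence line for the bending moment at section c of a simply supported
# beam of length L: a unit load is placed at successive stations x and the
# moment at c is evaluated for each position.  (Illustrative sketch; L, c
# and the station spacing are assumed values.)

def moment_influence_line(L, c, n_stations=101):
    """Ordinates of the influence line for the bending moment at x = c."""
    ordinates = []
    for i in range(n_stations):
        x = L * i / (n_stations - 1)          # position of the unit load
        if x <= c:
            m = x * (L - c) / L               # load to the left of c
        else:
            m = c * (L - x) / L               # load to the right of c
        ordinates.append((x, m))
    return ordinates

line = moment_influence_line(L=2.0, c=1.0)
peak = max(m for _, m in line)
print(peak)   # 0.5, i.e. L/4, reached when the load sits at mid-span
```

Connecting the ordinates yields the influence line; integrating it over a loaded length gives the effect of a distributed load, as stated above.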
[Figures: influence coefficient plots for the bending moments (M105, M205, M2org) and shear forces (V1, V2) over Span 1 to Span 3]
The Hermite polynomials form a polynomial sequence whose derivatives satisfy
$$\frac{d}{dx} p_n(x) = n \, p_{n-1}(x),$$
obeying the umbral calculus [6]. The umbral calculus can be described as the similarity between apparently unrelated polynomial equations and the techniques used to manipulate them, or as the classification of almost all classical combinatorial identities for polynomial sequences.
The Hermite polynomials are defined as in formula 2.1:
$$H_n(x) = (-1)^n \exp\left(\frac{x^2}{2}\right) \frac{d^n}{dx^n} \exp\left(-\frac{x^2}{2}\right) \qquad (2.1)$$
They are also known as the probabilists' Hermite polynomials because the term in equation 2.2 is the probability density function of a normal distribution with expected value 0 and standard deviation 1.
$$\varphi(x) = \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{x^2}{2}\right) \qquad (2.2)$$
In Figure 2.9 the graphs of the first six Hermite polynomials are represented:
$$H_0(x) = 1, \quad H_1(x) = x, \quad H_2(x) = x^2 - 1, \quad H_3(x) = x^3 - 3x,$$
$$H_4(x) = x^4 - 6x^2 + 3, \quad H_5(x) = x^5 - 10x^3 + 15x$$
Properties
$H_3(x)$ is a polynomial of degree 3. Taking into account that it is a probabilists' Hermite polynomial, it has a leading coefficient of 1. In this particular method, the polynomials of degree 3 are the ones required for each field of either the two-field or the three-field beams. The polynomial $H_3(x)$ (as well as all $H$ polynomials) is orthogonal with respect to the weight function described by equation 2.3, as expressed in equation 2.4:
$$w(x) = \exp\left(-\frac{x^2}{2}\right) \qquad (2.3)$$
$$\int_{-\infty}^{\infty} H_m(x)\, H_n(x)\, w(x)\, dx = \sqrt{2\pi}\, n!\, \delta_{mn} \qquad (2.4)$$
They are orthogonal with respect to the normal probability density function. They can be generated with the exponential generating function 2.5. [7]
$$\exp\left(x t - \frac{t^2}{2}\right) = \sum_{n=0}^{\infty} H_n(x)\, \frac{t^n}{n!} \qquad (2.5)$$
The Hermite polynomials satisfy the recurrence relation 2.6, which is used together with the polynomials $H_0$ and $H_1$ to obtain the values of the higher polynomials:
$$H_{n+1}(x) = x\, H_n(x) - n\, H_{n-1}(x) \qquad (2.6)$$
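The recurrence lends itself directly to computation. A small sketch, starting from H_0 = 1 and H_1 = x (the function name and test values are illustrative):

```python
# Probabilists' Hermite polynomials evaluated through the recurrence
# H_{n+1}(x) = x*H_n(x) - n*H_{n-1}(x), seeded with H_0 = 1 and H_1 = x.
def hermite(n, x):
    if n == 0:
        return 1.0
    h_prev, h = 1.0, x                      # H_0 and H_1
    for k in range(1, n):
        h_prev, h = h, x * h - k * h_prev   # one recurrence step
    return h

# Cross-check against the explicit list: H_4(x) = x^4 - 6x^2 + 3
print(hermite(4, 2.0))   # 16 - 24 + 3 = -5.0
```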
An orthogonal basis of the Hilbert space of functions is obtained using the Hermite polynomials for functions satisfying relation 2.7:
$$\int_{-\infty}^{\infty} |f(x)|^2\, w(x)\, dx < \infty \qquad (2.7)$$
2.2 Basic statistics
2.2.1 Basic statistical parameters
The design values of the pressure coefficients are based on the mean value and standard deviation under the 78 percent fractile assumption.
$$m_x = \frac{1}{N} \sum_{i=1}^{N} x_i \qquad (2.8)$$
$$\sigma_x = \left[\frac{1}{N-1} \sum_{i=1}^{N} (x_i - m_x)^2\right]^{0.5} \qquad (2.9)$$
$$\gamma_1 = \frac{\mu_3}{\sigma_x^3} \qquad (2.10)$$
$$\gamma_2 = \frac{\mu_4}{\sigma_x^4} - 3 \qquad (2.11)$$
For the given sets, the statistical parameters are computed using the following formulas: mean 2.8, standard deviation 2.9, skewness 2.10 and kurtosis 2.11. In equations 2.10 and 2.11 the main estimators used are the 3rd and 4th central moments, for which formulas 2.12 and 2.13 have been used.
$$\mu_3 = \frac{N}{(N-1)(N-2)} \sum_{i=1}^{N} (x_i - m_x)^3 \qquad (2.12)$$
$$\mu_4 = \frac{N-1}{(N-2)(N-3)} \sum_{i=1}^{N} (x_i - m_x)^4 \qquad (2.13)$$
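The estimators above can be collected into a short routine. A minimal sketch, assuming the bias-correction factors exactly as given in formulas 2.12 and 2.13:

```python
# Mean, standard deviation, skewness and kurtosis following formulas
# 2.8-2.13: the 3rd and 4th central moments carry the bias corrections
# given above before being normalized by powers of the standard deviation.
def basic_stats(x):
    N = len(x)
    m = sum(v for v in x) / N                             # eq. 2.8
    s = (sum((v - m) ** 2 for v in x) / (N - 1)) ** 0.5   # eq. 2.9
    mu3 = N / ((N - 1) * (N - 2)) * sum((v - m) ** 3 for v in x)        # eq. 2.12
    mu4 = (N - 1) / ((N - 2) * (N - 3)) * sum((v - m) ** 4 for v in x)  # eq. 2.13
    g1 = mu3 / s ** 3                                     # eq. 2.10
    g2 = mu4 / s ** 4 - 3.0                               # eq. 2.11
    return m, s, g1, g2

m, s, g1, g2 = basic_stats([1.0, 2.0, 3.0, 4.0, 5.0])
print(m, s, g1)   # 3.0  1.58...  0.0 (symmetric sample -> zero skewness)
```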
central moment | describing characteristic | describing parameter
1st            | position                  | mean value m_x
2nd            | dispersion                | variance or standard deviation σ_x
3rd            | shape                     | skewness γ_1
4th            | peakedness                | kurtosis γ_2

$$\mu_i = \int_{-\infty}^{\infty} (x - m_x)^i f(x)\, dx, \qquad i = 0, 2, 3, 4 \qquad (2.14)$$
As can be seen in Table 2.2, each statistical parameter describes a certain feature of the set. The variance helps in establishing the spread of the numbers in the set. In the particular case of a variance of 0, all terms of the set are identical. If the variance has a large value, the points of the constitutive set are very dispersed and tend to diverge from the mean value. A variance tending to 0 leads to the opposite conclusion: all values are close to the mean value and tend towards it.
The skewness is used to compute the asymmetry of a certain type of distribution, represented as a real value with regard to its mean. As can be seen in Figure 2.10, depending on the value of the skewness, the following statements can be made:
γ_1 < 0: The tail on the left side is longer, indicating a concentration of the mass of the distribution to the right.
γ_1 = 0: The tails are equal; the distribution can be considered symmetric.
γ_1 > 0: The tail on the right side is longer, indicating a concentration of the mass of the distribution to the left.
descending order and picking the value situated at exactly the middle point. In the case of an even-sized set, the median is represented by the mean of the two middle values.
The correlation coefficient ρ, also known as the linear correlation coefficient, is a measure of statistical dependency. It measures the degree of correlation between two variables and is given in formula 2.16, with m_a, m_b the mean values of a and b and σ_a, σ_b the standard deviations of a and b.
$$\sigma_{ab}^2 = \frac{1}{N-1} \sum_{i=1}^{N} (a_i - m_a)(b_i - m_b) \qquad (2.15)$$
$$\rho_{ab} = \frac{\sigma_{ab}^2}{\sigma_a\, \sigma_b} \qquad (2.16)$$
It has values between -1 and +1 and it can be explained as follows:
2.2.2
The extreme value statistics are used to extrapolate specific values for determining rare events. They are based on the peak pressures and peak responses of each individual run. The initial recorded data files, 10 in number, are the starting point for the extreme value statistics. Each of the files corresponds to 2 hours of measurements. The data were split into one-hour segments, and for each segment the local maxima and minima were extracted. The local peaks were computed at each of the 23 taps, so for each specific hour, 23 × 2 tap extreme values were recorded. The extremes are gathered and set up in matrices for the maxima and minima sets. For each set, the mean values, standard deviations, skewness and kurtosis are computed using formulas 2.8, 2.9, 2.10 and 2.11. The design values of the pressure coefficients are based on the mean values and standard deviations under the 78 percent fractile assumption. Two sets of surface pressure coefficients are obtained, one for the maximum and another for the minimum parameters. The formula used for computing the surface pressure is shown in 2.17.
(2.17)
For higher-order statistics, for each of the two peak sets (maxima and minima) and their corresponding mean values and standard deviations, the statistical parameters mean value, standard deviation, skewness and kurtosis were computed once more. These specific values are required to generate, inside the probability paper, the normal distribution with its specific values. They also provide useful information regarding the statistical stability and the fitting range.
For the determination of the describing parameters of the probability distributions based on data observations and reports, 4 approaches have been considered.
The first approach, which was also presented in the above sub-chapters, is called the method of moments. It is based on the central moments shown in equation 2.14 and is a method of estimating the parameters of the sets. The computation of the ensemble mean and standard deviation is done using equations 2.8 and 2.9. For the computation of the skewness, formula 2.10 has been used. The output obtained by the method of moments is used to determine all other parameters for every type of distribution, e.g. finding the shape and scale parameters of the Generalized Pareto distribution. The method leads to biased estimators, but on average it yields consistent parameters.
The second approach is the least squares fit method. It is based on fitting a least-squares line in the probability paper, namely finding the best-fit line for a specific data set. The equation of the cumulative probability distribution must be converted in such a manner that a linear expression for the observation variable is obtained. First, the non-exceedance probability must be evaluated based on the sorted observations. The values are ranked from 1 to N, where N is the ensemble size and the lowest value receives rank 1. The general shape of the non-exceedance probability is:
$$F_{rel}(x \le x_i) = \frac{i - \alpha}{N + 1 - 2\alpha} \qquad (2.18)$$
with α between 0 and 0.5. Basically, what can be noticed is that the accuracy of the estimated parameters depends on the ensemble size. This means that for an insufficiently large ensemble size, the estimated values could differ from the real ones. A popular value for α is 0.44, for which the 50 percent true fractile values are obtained. For different values, the method is biased; the median can be either too large or too small.
Mathematically, the least squares fit method can be explained as follows: having N pairs of numbers (x, y), characterized by the linear equation y = ax + b and the error corresponding to the linear equation given in 2.19, the purpose is to find the values of a and b which minimize the error.
$$E(a, b) = \sum_{i=1}^{N} (y_i - a x_i - b)^2 \qquad (2.19)$$
For the General probability density, the method is applied as follows. For each pair of parameters, the sums of the squared deviations are computed. The input for this method are the results from the method of moments. With regard to these results, the other sets of scale and shape factors are compared via the error sum. At the end, the pair with the smallest error becomes the least-squares solution.
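The plotting-position and fitting steps can be sketched as follows. This is an illustrative sketch on a Gumbel probability paper (reduced variate y = -ln(-ln F)), with made-up sample values; the thesis applies the same idea to the General probability density instead:

```python
import math

# Least-squares fit on Gumbel probability paper: sorted observations get
# non-exceedance probabilities (i - alpha)/(N + 1 - 2*alpha) with
# alpha = 0.44 (eq. 2.18), are mapped to the reduced variate
# y = -ln(-ln F), and a straight line y = a*x + b is fitted by minimising
# the squared-error sum of eq. 2.19 (closed-form normal equations).
def gumbel_paper_fit(data, alpha=0.44):
    xs = sorted(data)
    N = len(xs)
    ys = [-math.log(-math.log((i - alpha) / (N + 1 - 2 * alpha)))
          for i in range(1, N + 1)]
    mx = sum(xs) / N
    my = sum(ys) / N
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b      # slope ~ 1/scale, intercept ~ -location/scale

# Illustrative annual-maximum values, not measured data:
a, b = gumbel_paper_fit([20.1, 22.5, 24.0, 25.8, 28.3, 31.0])
print(a > 0)   # True: the reduced variate grows with the observations
```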
The third procedure for identifying the unknown parameters of the probability distributions is the maximum likelihood method. Given a specific set of entry data with a specific statistical model, the method selects the value of the model parameters which maximizes the likelihood function. The likelihood function is obtained as the product of the density function evaluated at the observed data x_i:
$$L(m, \sigma, \gamma) = \prod_{i=1}^{N} f_x(x_i; m, \sigma, \gamma) \qquad (2.20)$$
For simplicity, and taking into account the monotonicity of the logarithm, the expression in 2.21 is also used in the computation.
$$l(m, \sigma, \gamma) = \ln\left(\prod_{i=1}^{N} f_x(x_i; m, \sigma, \gamma)\right) = \sum_{i=1}^{N} \ln\big(f_x(x_i; m, \sigma, \gamma)\big) \qquad (2.21)$$
The maximum value of the logarithmic likelihood function is reached where its partial derivatives with respect to the parameters m, σ and γ are zero.
The last approach is the best linear unbiased estimator, also called the BLUE method. It estimates the parameters as a weighted sum of the observations, with each parameter having its corresponding weight factor, as given in 2.22 and 2.23:
$$m = \sum_i v_i A_i \qquad (2.22)$$
$$\sigma = \sum_i v_i B_i \qquad (2.23)$$
Given a data set x_1, x_2, ..., x_N of N samples and the probability density function f_x, the unknown parameter can be defined as:
$$m = \sum_{n=0}^{N-1} a_n x[n] \qquad (2.24)$$
2.3 Wind climate
2.3.1 Basic population
the range of wind velocities is known as the critical wind speed. Fitting the wind speed measurements to a plausible statistical pattern is of essential importance and is based on two arguments. First of all, fatigue failure from temporal vortex flows is influenced directly by the analysis of the load cycle data. The computed number of stress cycles supported by a structure in its lifetime depends on the expected number of critical wind speed hours and on the wind climate. The second aspect for which the wind speed data should be fitted with a statistical model is the importance of estimating the wind energy potential of a specific region. Basically, two statistical models for the wind speeds are required: the first one for the medium to low amplitudes of the wind speed and the second one for the extreme values. Usually the first model is needed for computations at the serviceability limit state, while the second model is used for the ultimate limit state. The extreme value wind speed will help in providing information when computing the specific design values in case of a rare event. Furthermore, the critical wind speed, the number of stress cycles per working life and the number of hours in this specific range are needed. Therefore, the wind climate is one of the basic assets in obtaining a statistical model for the wind speeds.
The range of amplitudes for low to moderate wind speeds forms the basic population. It is characterized by normal behaviour of the wind flow masses, without exceptional events. For the basic population, the Weibull distribution forms an appropriate model. The probability density function and the cumulative probability function can be described as given in equations 2.25 and 2.26.
$$f(x) = \frac{k}{x_0} \left(\frac{x}{x_0}\right)^{k-1} \exp\left(-\left(\frac{x}{x_0}\right)^k\right) \qquad (2.25)$$
$$F(x) = 1 - \exp\left(-\left(\frac{x}{x_0}\right)^k\right) \qquad (2.26)$$
In the above equations, k and x_0 are the shape and scale factors. Each hour is not similar to another, which is why two approaches have been taken into consideration when selecting the specific Weibull parameters. The first approach considers the same values of the scale and shape factors for each year. The second approach states that each year has different values for k and x_0. After performing the analysis, the best solution was chosen and the appropriate values were taken. These assumptions will be reviewed and discussed further in later chapters. One of the disadvantages of selecting the Weibull distribution as a model for the basic population is that it fails in fitting the upper tail. Up to one specific point the Weibull distribution is valid, and from that point on the extreme value distributions will be used to find the extremes. Specific limits were set: 5 [m/s] as the lower limit and 20 [m/s] as the upper limit. These values were set after performing simulations with different velocities. Below the lowest value, the damage accumulation is not sufficient to be taken into the computation. The top value was set after a least-squares fit of the a and b values with a 3rd-degree polynomial. The a and b values are needed when computing the stress vs. number of cycles diagrams, which will later be used inside the normal distribution. The normal distribution is used for determining the intensity of the specific hours.
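Equations 2.25 and 2.26 can be evaluated directly; a minimal sketch with illustrative parameter values:

```python
import math

# Weibull density and cumulative distribution from eqs. 2.25 and 2.26,
# with k the shape factor and x0 the scale factor.
def weibull_pdf(x, k, x0):
    return (k / x0) * (x / x0) ** (k - 1) * math.exp(-(x / x0) ** k)

def weibull_cdf(x, k, x0):
    return 1.0 - math.exp(-(x / x0) ** k)

# For k = 1 the Weibull reduces to the exponential distribution:
print(weibull_cdf(1.0, k=1.0, x0=1.0))   # 1 - e^{-1} = 0.632...
```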
2.3.2 Storm
For extreme values of the wind speed, the extreme statistics must be applied for each independent storm phenomenon, for frontal depressions and thunderstorms. The exceedance probability of a reference wind speed is given as follows:
$$p(v > v_{ref}) = 1 - \prod_k p_k(v \le v_{ref}) \qquad (2.27)$$
$$F_x(x) = 1 - \exp\left(-\frac{x - m}{\sigma}\right) \qquad (2.28)$$
$$x = m - \sigma \ln(1 - F_x) \qquad (2.29)$$
Equation 2.29 is used for generating the maximum wind speeds per storm, with m and σ being the mean value and standard deviation. The non-exceedance probability is considered as 99.9 percent. Each storm has been evaluated at 3 hours, each of them being simulated with a relative intensity of 1.0, 0.97 and 0.93. The number of storm hours per year is computed as given in formula 2.30. The total number of basic population wind hours per year is obtained by subtracting the number of storm hours obtained from 2.30 from the total number of hours in a year of 365.25 days.
$$N_{sth} = N_{storm} \cdot 3 \qquad (2.30)$$
Storms tend to occur in families; this means that often the second or third storm of a year is more severe than the strongest storm of years with a single storm. There are also years in which no storm occurrence is reported. This is one of the reasons why the analysis is based on all extreme values.
2.4
From the stochastic point of view, probability distributions are models with a specific shape and behaviour which characterise a specific set of random values or experiment outputs, reducing them to a few workable parameters. When trying to reproduce the true behaviour of natural or fictive processes using simulations, one needs to find a probabilistic model which allows describing the measurement data in a scientific way, by transforming random variables into representative parameters.
A fundamental property of distributions is that they can be univariate or multivariate; this means that they describe either a one-parameter set or a multi-parameter set. The latter is of great help when describing events which are constrained by several factors.
The literature shows that one should distinguish between continuous and discrete (random) distributions, the first being able to characterise long and random sets of data, including intervals and floating-point values. The second is restricted by a limited number of outputs; more precisely, the output is expected to belong to a certain set of values.
The best-known discrete distributions are Bernoulli's (e.g. when trying to obtain the probability of getting heads when throwing a coin 10 times) and Poisson's probability distribution, which is used for rare events with a great number of repetitions (e.g. how big is the probability of having three years without fog).
Among the most famous continuous distributions are the Rectangular (for a uniform distribution of the elements), the Triangular, the Beta, the Normal, also known as the Gaussian distribution (the most used in mathematics, e.g. for a cow which gives between 0 and 5 gallons of milk per day, telling the amount of milk for a specific day), the Log-Normal, the Exponential, the Weibull, the Pareto (e.g. the number of storm hours per storm) and the Extremes. The 3 extreme value distributions are Type I (Gumbel), Type II (Frechet) and Type III (the Reverse Weibull distribution).
2.4.1 Poisson distribution
The Poisson distribution is a remarkable probability model in statistics and probabilistic science. It is usually used to describe random events within a certain time and space limit, events which usually have a small probability of success. It is a discrete function with high applicability.
A couple of relevant examples: computing the chance to meet a bear when walking in the forest, the chance to meet a fox when driving near a hilly area, or cars arriving at a specific traffic light.
In other words, this probability distribution is given by the probability density function f(x) as it can be seen in equation 2.31:
$$f(x) = \frac{\lambda^x\, e^{-\lambda}}{x!} \qquad (2.31)$$
The cumulative probability distribution F(x) has the formula given in equation 2.32:
$$F(x) = \sum_{k \le x} \frac{\lambda^k\, e^{-\lambda}}{k!} \qquad (2.32)$$
where:
the mean value of the series m = λ
the variance σ² = λ
the skewness γ_1 = λ^{-1/2}
the kurtosis γ_2 = λ^{-1}
Properties
The expected value of such a distribution is equal to λ, and the same applies to its variance. The higher moments of the Poisson distribution are Touchard polynomials in λ. A special case is when the expected value is equal to 1; in this particular case Dobinski's formula best characterises the m-th moment, which is equal to the number of partitions of a set of size m. The mode of a Poisson distribution with non-integer λ is the largest integer less than or equal to λ. The moment-generating function of the Poisson distribution with expected value λ is:
$$E\left[e^{tX}\right] = \sum_{k=0}^{\infty} e^{tk}\, f(k; \lambda) = \sum_{k=0}^{\infty} e^{tk}\, \frac{\lambda^k\, e^{-\lambda}}{k!} = e^{\lambda (e^t - 1)} \qquad (2.33)$$
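A small sketch of equations 2.31 and 2.32, writing lam for the rate parameter λ:

```python
import math

# Poisson probability mass (eq. 2.31) and cumulative distribution
# (eq. 2.32) for a rate parameter lam.
def poisson_pmf(x, lam):
    return lam ** x * math.exp(-lam) / math.factorial(x)

def poisson_cdf(x, lam):
    return sum(poisson_pmf(k, lam) for k in range(x + 1))

# Rare events with lam = 1: probability of zero occurrences
print(poisson_pmf(0, 1.0))   # e^{-1} = 0.367...
```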
2.4.2 Normal distribution
The Normal distribution, also known as the Gaussian distribution, is one of the most often used statistical probability distributions, with a large applicability in nature. It is used when dealing with real-valued random variables with unknown distributions, describing the probability of real data sets to fall inside two real-valued limits. Furthermore, it has a big applicability due to the central limit theorem, which states that the means of randomly generated independent sets of variables are normally distributed and tend to converge to a common value. It is given by the density function in the general formula:
$$f(x; m, \sigma) = \frac{1}{\sigma \sqrt{2\pi}}\, e^{-\frac{1}{2}\left(\frac{x - m}{\sigma}\right)^2} \qquad (2.34)$$
In equation 2.34, m is the mean value, σ² is the variance, and the term 1/(σ√(2π)) ensures that the total area under the curve equals 1. The normal distribution is used as a scale for skewness and kurtosis. In this thesis, the standard normal distribution will be used. The standard approach is characterized by a mean value of 0 and a variance of 1.
$$f(x) = \frac{1}{\sqrt{2\pi}}\, e^{-\frac{1}{2} x_{red}^2} \qquad (2.35)$$
$$F(x) = \int_{-\infty}^{x} \frac{1}{\sigma \sqrt{2\pi}}\, e^{-\frac{1}{2}\left(\frac{u - m}{\sigma}\right)^2} du \qquad (2.36)$$
$$x_{red} = \frac{x - m}{\sigma} \qquad (2.37)$$
It is symmetric with regard to the value x = m, which is at the same time the median and the mode.
The density function has 2 inflection points, situated at x = m - σ and x = m + σ.
It is logarithmically concave and differentiable.
Of the total amount of samples, 68 percent fall within 1σ, 95 percent within 2σ and 99.7 percent within 3σ of the mean.
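The standard normal density and the 68-95-99.7 rule can be checked numerically. The sketch below expresses the cumulative distribution through the error function rather than the integral 2.36:

```python
import math

# Standard normal density (eq. 2.35 with x_red = x) and cumulative
# distribution, the latter via the error function.
def norm_pdf(x):
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Share of samples within one standard deviation of the mean:
print(norm_cdf(1.0) - norm_cdf(-1.0))   # ~ 0.6827, the "68 percent" rule
```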
[Figure: (a) density function f(x), (b) cumulative distribution F(x) of the normal distribution]
2.4.3 Weibull distribution
$$f(x) = \frac{k}{x_0} \left(\frac{x}{x_0}\right)^{k-1} \exp\left(-\left(\frac{x}{x_0}\right)^k\right) \qquad (2.38)$$
$$\gamma_1 = \frac{\Gamma\left(1 + \frac{3}{k}\right) x_0^3 - 3 m \sigma^2 - m^3}{\sigma^3} \qquad (2.42)$$
where the standard deviation is denoted by σ and Γ is the Gamma function, mostly known as an extension of the factorial function.
In Figure 2.14, for λ = x_0, it can be noticed that for k in the interval between 0 and 1, the value of the probability density function tends to infinity as x approaches zero. For the value k = 1, the function tends to 1/λ at zero. For k > 1, the density falls to zero as x approaches zero, afterwards increases to a maximum and decreases again.
[Figure 2.14: Weibull probability density for λ = 1 and k = 0.5, 1.0, 1.5, 5.0 (curves I to IV)]
2.4.4 Generalized extreme value distribution
The Generalized extreme value distribution is the generalized form of three different types of extreme distributions. It depends on the location, shape and scale parameters, and according to these values it can be classified into 3 different instances, as follows:
Type I, the Gumbel distribution, with its cumulative probability density:
$$F(x) = \exp\left[-\exp\left(-\left[\gamma + \frac{\pi (x - m)}{\sigma \sqrt{6}}\right]\right)\right] \qquad (2.43)$$
$$F(x) = \exp\left[-\left(1 - \gamma\, \frac{x - m}{\sigma}\right)^{1/\gamma}\right] \qquad (2.44)$$
In the above given equation, f_1 and f_2 are the free terms given in formula 2.45, with Γ being the Gamma function:
$$f_1 = \Gamma(1 + \gamma), \qquad f_2 = \Gamma(1 + 2\gamma) - f_1^2 \qquad (2.45)$$
Any independent set of consistent, identically distributed random numbers with a maximum number of samples k can be fitted using a generalized extreme value distribution. The generalized extreme value distribution is a specific distribution which, based on the theory of extreme values, combines all 3 possible extreme value distributions. Hence it is often used when working with extreme processes; its biggest applicability lies in modelling the maxima or the minima of long, finite series of random values. The GEV is described by the cumulative probability density:
$$F(x) = \exp\left[-\left(1 - k\, y\right)^{1/k}\right], \quad k \ne 0 \qquad (2.46)$$
$$F(x) = \exp[-\exp(-y)], \quad k = 0 \qquad (2.47)$$
2.4.5 General Pareto distribution
(2.48)
$$F(x) = 1 - \left[1 - \frac{x - x_s}{s}\, k\right]^{\frac{1}{k}} \qquad (2.49)$$
In formula 2.49, x_s is the limit value, s is the scale parameter and k is the shape factor.
Depending on the shape parameter, the General Pareto distribution can be classified into three forms:
k < 0 indicates a tail following a polynomial decrease (an infinite tail).
k = 0 indicates that the tail decreases exponentially, so an exponential distribution is obtained.
k > 0 indicates a finite upper tail of the distribution, with x_max = x_s + s/k.
Between the mean value of the exceedance (mostly used as the mean value of x - x_s), the standard deviation, the scale parameter and the shape parameter, the following relations have been developed, as can be noticed in equations 2.50 and 2.51:
$$s = 0.5\, m \left(1 + \left(\frac{m}{\sigma}\right)^2\right) \qquad (2.50)$$
$$k = 0.5 \left(\left(\frac{m}{\sigma}\right)^2 - 1\right) \qquad (2.51)$$
The General Pareto distribution will be used with the purpose of finding out the
number of storm hours for a specific storm climate.
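Equations 2.49 to 2.51 can be combined into a short sketch; the handling of k near 0 via the exponential limit is an implementation choice, not from the text:

```python
import math

# Generalized Pareto distribution: shape k and scale s recovered from the
# mean m and standard deviation sigma of the exceedances (eqs. 2.50, 2.51),
# and the CDF of eq. 2.49 with the k = 0 exponential limit handled
# separately.
def gpd_params(m, sigma):
    k = 0.5 * ((m / sigma) ** 2 - 1.0)      # shape (eq. 2.51)
    s = 0.5 * m * (1.0 + (m / sigma) ** 2)  # scale (eq. 2.50)
    return k, s

def gpd_cdf(x, xs, s, k):
    z = (x - xs) / s
    if abs(k) < 1e-12:                      # k = 0: exponential tail
        return 1.0 - math.exp(-z)
    return 1.0 - (1.0 - k * z) ** (1.0 / k)

# m = sigma corresponds to the exponential case k = 0:
k, s = gpd_params(2.0, 2.0)
print(k, s)   # 0.0 2.0
```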
2.5 Simulation techniques
2.5.1 Transformation method
Combinations of random variables, along with complex expressions for the probability density functions or for the cumulative probability densities, can make an analytical treatment fail. Therefore, simulations are required.
The Monte Carlo simulations are performed for solving complex integrals and for sampling random variables. The core of the simulations is the random number generator, which generates statistically independent random numbers in the [0,1] interval. These numbers are the basic input for the different simulation methods which will be presented. As random number generator, the Mersenne Twister algorithm given by [Matsumoto and Nishimura, 1998] is used.
Having high quality random numbers, the procedure of producing results which simulate the normal, Poisson, Beta and Pareto distributions can be fulfilled.
The transformation method is one of the most used simulation methods. It uses the inverse of the cumulative probability density; therefore cumulative functions that cannot be inverted cannot be used in this method.
The procedure is shown in figure 2.16 and can be summarized as follows: given a probability density function f(x) with -∞ < x < ∞ and its corresponding cumulative probability density F(x), assume that there is a u generated uniformly in the (0,1) interval. If u takes any random value, excepting the end points, a unique x from the probability density f(x) can be computed using formulas 2.52 and 2.53.
$$u = F(x) \qquad (2.52)$$
$$x = F^{-1}(u) \qquad (2.53)$$
The algorithm is very helpful when the inverse function can be calculated by hand.
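A minimal sketch of the method for a distribution with a closed-form inverse, here the exponential distribution (an illustrative choice, not from the text):

```python
import math

# Transformation (inverse CDF) method for the exponential distribution
# F(x) = 1 - exp(-x/x0): the inverse of eq. 2.53 is available in closed
# form as x = -x0 * ln(1 - u), with u uniform in (0, 1).
def sample_exponential(u, x0=1.0):
    return -x0 * math.log(1.0 - u)

# Round trip: applying F to the sampled x recovers the uniform number u.
u = 0.75
x = sample_exponential(u)
print(1.0 - math.exp(-x))   # ~ 0.75
```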
[Figure 2.16: Transformation method for a continuous and a discrete distribution]
2.5.2 Acceptance-rejection method
Figure 2.17: Acceptance-rejection method
The algorithm starts by generating an x value in accordance with r(x). Afterwards, the values f(x) and C·r(x) are computed, so that one can check the inequality u·C·r(x) ≤ f(x), where u is a randomly generated number. If the inequality is satisfied, accept x; otherwise reject it.
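A sketch of the procedure with an illustrative target density f(x) = 2x on [0, 1] and a uniform proposal r(x) = 1 with envelope constant C = 2 (both choices are assumptions for the example, not from the text):

```python
import random

# Acceptance-rejection: candidates x are drawn from the proposal r(x) = 1
# on [0, 1]; with C = 2 the envelope C*r(x) dominates the target density
# f(x) = 2x everywhere, and x is accepted when u*C*r(x) <= f(x).
def sample_triangular(rng, n):
    f = lambda x: 2.0 * x
    C = 2.0
    out = []
    while len(out) < n:
        x = rng.random()              # candidate from the proposal r(x)
        u = rng.random()              # uniform number for the test
        if u * C * 1.0 <= f(x):       # accept if u*C*r(x) <= f(x)
            out.append(x)
    return out

rng = random.Random(42)
samples = sample_triangular(rng, 1000)
print(min(samples) >= 0.0 and max(samples) <= 1.0)   # True
```

On average half of the candidates are rejected here, which is the price paid for not needing the inverse of F(x).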
2.5.3 Box-Muller
The Box-Muller method is a procedure which, using random numbers between 0 and 1 as input, generates pairs of independent, standard normally distributed values. The algorithm was developed as a more efficient alternative to the transformation method. The theory presents the method as follows: given x and y, two uniformly distributed numbers with x ∈ (0, 1] and y ∈ (0, 1], then A and B are independent, normally distributed random variables (with 0 as mean and 1 as standard deviation) if:
$$A = R \cos(\theta) = \sqrt{-2 \ln x}\, \cos(2\pi y) \qquad (2.54)$$
$$B = R \sin(\theta) = \sqrt{-2 \ln x}\, \sin(2\pi y) \qquad (2.55)$$
The general formulation has its difficulties in implementation and can have numerical stability problems when x takes values near 0. Therefore, the polar shape of the Box-Muller method proposed by J. Bell and R. Knop is more computationally convenient. With u and v uniformly distributed in [-1, 1] and s = u² + v² accepted only when 0 < s < 1, the substitutions cos θ = u/R = u/√s and sin θ = v/R = v/√s hold. Starting from the basic formulation and using the above mentioned substitutions, the final equations of the polar approach, namely the standard normal deviates, are obtained, as shown in formulas 2.56 and 2.57.
$$a = u\, \sqrt{\frac{-2 \ln(s)}{s}} \qquad (2.56)$$
$$b = v\, \sqrt{\frac{-2 \ln(s)}{s}} \qquad (2.57)$$
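The polar approach can be sketched as follows; the rejection loop for s outside (0, 1) follows the description above:

```python
import math
import random

# Polar form of the Box-Muller method (eqs. 2.56, 2.57): u, v uniform in
# [-1, 1] are accepted when s = u^2 + v^2 lies in (0, 1), then scaled into
# a pair of independent standard normal deviates.
def polar_box_muller(rng):
    while True:
        u = 2.0 * rng.random() - 1.0
        v = 2.0 * rng.random() - 1.0
        s = u * u + v * v
        if 0.0 < s < 1.0:
            factor = math.sqrt(-2.0 * math.log(s) / s)
            return u * factor, v * factor   # eqs. 2.56 and 2.57

rng = random.Random(1)
a, b = polar_box_muller(rng)
print(a, b)   # one pair of independent standard normal deviates
```

No trigonometric functions are evaluated, which is the efficiency gain over the basic form 2.54-2.55.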
2.5.4 Allocation method
2.5.5
(2.58)
$$\sigma_{xy}^2 = \frac{1}{N-1} \sum_{i=1}^{N} (x_i - m_x)(y_i - m_y) \qquad (2.59)$$
$$\rho_{xy} = \frac{\sigma_{xy}^2}{\sigma_x\, \sigma_y}$$
In the general Pearson formulation, N is the number of samples of each set, m_x and m_y are the mean values of the measurements, and σ_x, σ_y are the standard deviations. Among the most important properties of the correlation is its symmetry: ρ_xy = ρ_yx.
For the Pearson coefficient to be computed and to take proper values, the standard deviations of the sets must be finite and different from 0. Otherwise, values greater than 1 could be obtained, which contradicts the initial assumptions.
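The covariance estimator and the ratio to the standard deviations can be combined into a short routine:

```python
# Pearson correlation coefficient: covariance of the two sets (eq. 2.59)
# divided by the product of the standard deviations; the result lies in
# [-1, +1] whenever both standard deviations are finite and non-zero.
def pearson(x, y):
    N = len(x)
    mx = sum(x) / N
    my = sum(y) / N
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (N - 1)
    sx = (sum((a - mx) ** 2 for a in x) / (N - 1)) ** 0.5
    sy = (sum((b - my) ** 2 for b in y) / (N - 1)) ** 0.5
    return cov / (sx * sy)

# A perfectly linear relation gives full positive correlation:
print(pearson([1.0, 2.0, 3.0], [2.0, 4.0, 6.0]))   # 1.0
```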
[Figure: examples of positive, negative and zero correlation]
2.6 Damage accumulation
2.6.1 Introduction to fatigue analysis
[Diagram: fatigue as the interaction of material properties, stress analysis, cumulative damage and external factors]
2.6.2
Depending on the number of cycles experienced by a structure, fatigue can be classified into Low-Cycle Fatigue and High-Cycle Fatigue, each with its own set of characteristics. Low-cycle fatigue is characterized by a number of repetitions of less than 10^4 and by the Coffin-Manson equation, as shown in 2.60:
$$\frac{\Delta \varepsilon_p}{2} = \varepsilon_f'\, (2N)^c \qquad (2.60)$$
with:
N - number of cycles
ε'_f - fatigue ductility coefficient
c - fatigue ductility exponent (between -0.5 and -0.7 for metals)
Δε_p/2 - the plastic strain amplitude
High-cycle fatigue is characterized by a number of repetitions greater than 10⁴ and by the material behaviour, which in the first phase remains in the elastic domain. For predicting the lifetime of a sample, the S-N curves (stress vs. number of cycles) were introduced, which are also known as Wöhler curves or Wöhler lines.
In order to determine the lifetime of a structure or element, it is required to identify the number of complete loading cycles which a sample sustains before losing its structural characteristics. The best predictions are obtained using cycle counting methods, which pair the local minima and maxima. One of the most accurate methods, widely used over the last three to four decades, is the Rainflow counting method. This algorithm is applied in combination with Miner's rule.
Miner's rule is based on the A. Palmgren algorithm and states: for k different stress magnitudes from a spectrum S_i (1 ≤ i ≤ k), each contributing n_i cycles, if N_i is the number of cycles to failure at a constant stress S_i, failure takes place when, in Equation 2.61, C lies between 0.7 and 2.2.

Σ (n_i / N_i) = C   (2.61)
For design, C is always taken as 1. The rule is useful in many circumstances but has some limitations. The first limitation is that it fails to recognize the probabilistic nature of fatigue; in particular, the sequence effect is not considered. The second limitation applies to some specific cases, where cycles of low stress followed by high stress are not well predicted by the law. Furthermore, it assumes that damage accumulation does not depend on the stress level. Despite these limitations, Miner's rule is the most widely used damage accumulation model for failure from fatigue.
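A minimal sketch of the damage sum in equation 2.61, evaluated for a hypothetical three-level stress spectrum:

```python
def miner_damage(cycles, cycles_to_failure):
    """Palmgren-Miner damage sum D = sum(n_i / N_i) from equation 2.61.
    Failure is predicted when D reaches C (taken as 1 in design)."""
    return sum(n / N for n, N in zip(cycles, cycles_to_failure))

# hypothetical spectrum: three stress levels
n_i = [1000, 5000, 20000]        # applied cycles at each level
N_i = [10000, 100000, 1000000]   # cycles to failure at each level
D = miner_damage(n_i, N_i)       # D = 0.17, well below the design limit C = 1
```

The n_i would in practice come from a cycle counting method such as the Rainflow algorithm, and the N_i from the material's S-N curve.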
2.6.3
Random SN curves
The modern study of fatigue is based on the work of the German railway engineer August Wöhler, a technologist who developed his studies in the mid-19th century. His research focused on the failure of axles after different service periods at relatively small loads. As a conclusion of his work, he summarized that the cyclic loading amplitude is much more important than the peak loading. Following his initial studies, and after his work was continued, the S-N curves were adopted and developed for different materials and for different numbers of loading cycles.
The S-N curves, also known as Wöhler lines, are diagrams which represent, for a specific material, its behaviour as a function of the number of loading cycles and the magnitude of the amplitudes. They are nevertheless an empirical means of describing the fatigue process and designing against it [12].
One of the most important purposes of these diagrams is to help establish the number of loading cycles (N) which one member can bear until collapse under a cyclic load (S) that is much smaller than the limit stress, as can be seen in figure 2.31. The abscissa is often plotted logarithmically because usually a great number of repetitions is required until damage is recorded. For ferrous materials like steel an endurance limit exists, below which a sample can sustain a practically unlimited number of cycles.
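As an illustration only: assuming a single-slope Wöhler line that is straight in log-log coordinates, the allowable number of cycles for a given stress range can be read off as below. The reference point (s_ref, n_ref) and the slope m are hypothetical parameters, not values from this thesis.

```python
def sn_cycles(stress_range, s_ref, n_ref, m):
    """Allowable cycles from a single-slope S-N (Woehler) line in log-log
    coordinates: N = n_ref * (s_ref / stress_range)**m.
    s_ref, n_ref and the slope m are hypothetical curve parameters."""
    return n_ref * (s_ref / stress_range) ** m

# hypothetical curve: 160 MPa at 2e6 cycles, slope m = 3
# halving the stress range to 80 MPa multiplies the life by 2**3 = 8:
# sn_cycles(80.0, 160.0, 2e6, 3) -> 1.6e7 cycles
```

The steep dependence on the stress range (a power m of typically 3 to 5 for welded steel details) is why the cycle amplitude, as Wöhler observed, matters far more than the peak load.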
2.7
The usefulness of a numerical method must be measured by how far it reduces the data volume without creating false strength quantities. For analysing and counting the loading cycles which lead to fatigue damage, the counting methods have been introduced. All numerical methods have the following features in common [8]:
- Decomposition of the determined stress-time courses into the connected reversal points.
- Definition of the elementary event which should be recognised and counted for a later damage rating.
- Determination of the event parameters: class-limit exceedance, peak values or stress values of the turning points, the area between two points (namely the stress difference between the two points together with the stress mean value) and the closing cycles or stress loops.
- Formulation of an algorithm which defines the events through stress recognition and keeps the values of the parameters and the order of the procedure.

Except for a couple of numerical methods that lead to unsatisfactory life predictions, the two-parameter methods register and reflect the cyclic stress-strain behaviour of the materials [8].
Some of the most representative methods are the peak counting method, the level crossing counting method and the Rainflow counting method. The latter was first introduced in 1968 by Tatsuo Endo and M. Matsuishi with the purpose of calculating the half cycles or complete cycles of strain-time signals. The method is based on Endo's analogy between rain drops falling from a pagoda's roof (pagoda - a Japanese building with many eaves) and the generation of hysteresis loops: the simple straining pattern history is compared to the roof. The Rainflow counting method is used for analysing fatigue data so that the varying stress spectrum can be reduced to a set of simpler stress reversals. A remarkable property of the method is that it allows the computation of fatigue cycles from a specific time history.
The basis of the numerical methods is the local stress-strain path. The path is simulated assuming a material behaviour model which identifies the following important elements. The Masing behaviour of a material is characterized by the shape of the nested hysteresis loops, which corresponds to the doubled initial stress-strain curve. The second remarkable aspect is the material memory. The material shows a remembering capacity which can be classified into three types [8].
Memory type 1 acts when, after closing a hysteresis loop that started from the first initial loading curve, the stress-strain path afterwards follows the initial loading curve. Memory type 2 starts when, after closing a hysteresis loop which started from a loop arm, the stress-strain path follows the initial arm of the loop. Memory type 3 manifests when the hysteresis loop arm which started from the initial loading curve ends as soon as its starting-point stress/strain in the opposite quadrant is reached. These types of model descriptions of the cyclic behaviour characterize metallic materials like steel best. They are described for a bilinear stress-strain law after the kinematic assumption of Prager [8].
Rainflow Algorithm
The numerical method can be used not only for counting cycles and stress-strains, but also for counting bending and, respectively, deformation of the elements. The RCM recognises turning points of the stress path, counts closed paths and stores the unfinished paths in a residuum. The closed loops are stored in a matrix, which is characterized by a class differentiation of the cycle widths and heights. Furthermore, the method restores the order of the turning points after all uncompleted loops are removed and stored into the residuum. The residuum is composed of two parts. The first one is made up of sections of the initial loading curve and non-closing hysteresis loop parts. The second part contains loops which would be able to close but have not been closed because the path ended. The classes of the counting matrix are chosen in such a manner that they reflect the properties of each cycle in the best way.
Algorithm steps and practical example
Given a time history with amplitudes, the algorithm reduces the time history to an alternating sequence of peaks (tensile) and valleys (compressive). The method's algorithm can be summarized as follows:
1. The time history can be imagined as a pagoda or as a template for a normal sheet.
2. Turn the sheet by 90°, with the earliest time step at the top.
3. The procedure considers each peak a source of water. From each peak the water flows down, following exactly the so-called pagoda roof.
4. Each half cycle must be counted, taking into account flow terminations as follows:
a) The flow reaches the end of the time history.
b) The flow from a specific peak meets another flow whose source is an earlier peak.
c) It meets an opposite flow of greater magnitude.
5. The above step is done twice: once for the tensile peaks and a second time for the compressive valleys. Usually, for the tensile peaks the water flows from right to left, and for the valleys the water flows down from left to right.
6. After each half cycle is determined and measured, its magnitude is recorded. It is usually computed as the stress difference between start and termination.
7. A table must be drawn up in which each half cycle's path and magnitude are noted. Half cycles with the same magnitude and opposite sense are paired up and represent a complete cycle.
8. In the last step, the number of complete cycles is counted as well as the remaining half cycles; the remaining ones are called residuals.
PRACTICAL EXAMPLE
Given the time series in figure 2.22, the procedure is explained for a better understanding of the method. The paper is turned by 90 degrees, with the earliest time step at the top, as can be seen in 2.23. Next, the water starts dropping, following the exact shape of the pagoda, as pointed out in figure 2.23.
After an overall look at the paths created, the results can be summed up in table 2.3 and sorted based on their paths, cycles and amplitudes. A final count of the stress cycles is realized, arranging them by the stress magnitudes of the cycles, as table 2.4 highlights. As a conclusion of the given example, one can easily see that the total number of closed cycles is 2, while the remaining half cycles or uncompleted paths are stored in the residuals.
Figure 2.22: example stress time history.
Figure 2.23: the same time history rotated by 90°, with turning points O, A, ..., G, H, I and the resulting water paths.
Table 2.3 lists each half-cycle path (O-A, A-D, B-C, D-E, E-H, F-G, H-I) together with its stress amplitude; the recovered amplitude values are 13, 9, 8, 7, 5 and 3.

Table 2.4 arranges the paths by stress magnitude: H-I, A-D, F-G, B-C together with E-H, D-E, O-A.