November 2008
Principal objective:
Specific objectives:
- trial
- outcome, $\omega_i$
- sample space, $\Omega = \{\omega_1, \dots, \omega_N\}$
Remark
It must be pointed out that since the $\omega_i$ are just elements that belong to the sample space, $\omega_i \in \Omega$, then $\{\omega_i\} \cap \{\omega_j\} = \emptyset$ for $i \neq j$, where $\emptyset$ is the empty set.
A single trial may also produce more than one outcome, so it is necessary to define this possibility mathematically in order to group all of these outcomes into a single set.
Definition
An event A is defined as a subset of the sample space, $A \subseteq \Omega$. An event is allowed to comprise more than one statement, but in any case the statements must always be connected by logical connectives such as and, or, etc.
Example
Let us observe the random phenomenon of coin flipping. There are 2 possible outcomes, heads (H) or tails (T). So, with no loss of generality, we can define $\omega_1$ as H and $\omega_2$ as T; then $\Omega = \{H, T\}$ is the sample space.
Example
For a given year, the world cup will host only 3 teams: Italy, Brazil and France. The sample space of the possible champion is $\Omega = \{I, B, F\}$. The sample space for the finalists is $\Omega = \{IB, BF, IF\}$. If the sample space of the first and second places of the world cup is needed, the sample space increases its cardinality and becomes $\Omega = \{IB, BI, BF, FB, IF, FI\}$.
Since there are many ways to assemble an event A from a given sample space $\Omega$, it is interesting to define a set that contains all of the possible events.
Definition
The power set of $\Omega$ is defined as the set that contains as elements all possible subsets of $\Omega$; it is written formally as $\mathcal{P}(\Omega)$. Any subset of $\mathcal{P}(\Omega)$ is called a family of sets over $\Omega$.
Example
If $\Omega = \{A, B\}$, then $\mathcal{P}(\Omega) = \{\emptyset, \{A\}, \{B\}, \{A, B\}\}$.
So, basically, the power set can be viewed as an operation on a given set that produces the set of all possible subsets of the original set.
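As an illustration, the following minimal Python sketch (the element names and the function name are just assumptions for the example) enumerates the power set of a small finite set:

```python
from itertools import chain, combinations

def power_set(omega):
    """Return a list with every subset of omega, from the empty set to omega itself."""
    s = list(omega)
    subsets = chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))
    return [set(c) for c in subsets]

# P({A, B}) = {{}, {A}, {B}, {A, B}}, as in the example above
print(power_set(["A", "B"]))
```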
Definition
A $\sigma$-algebra $\Sigma$ of a given set $\Omega$ is a collection of subsets of $\Omega$, that is $A_i \subseteq \Omega$ such that $\Omega = \bigcup_{i}^{N} A_i$, that is closed under complementation and whose members satisfy closure under countable unions. More formally, a subset $\Sigma \subseteq \mathcal{P}(\Omega)$ is a $\sigma$-algebra with the following properties:
1. $\Sigma$ is nonempty, such that $\Omega \in \Sigma$,
2. If $A \in \Sigma$, then $A^C \in \Sigma$ is also satisfied, where $A^C$ is the complement of A, which can also be written as $\Omega \setminus A$.
3. The union of countably many sets in $\Sigma$ is also in $\Sigma$, as well as the intersection:
$\bigcup_{j=1}^{\infty} A_j \in \Sigma, \qquad \bigcap_{j=1}^{\infty} A_j \in \Sigma$
Having in mind the general concepts of power set and $\sigma$-algebra, it is possible to return to the primary interest of measuring certain events.
Definition
Given any set $\Omega$ that represents a sample space, and a $\sigma$-algebra $\Sigma$ on $\Omega$, then P is a probability measure if:
1. P is non-negative,
2. $P(\emptyset) = 0$,
3. $P(\Omega) = 1$.
$\xi : \Omega \to \mathbb{R}$  (2)
Since the random variable $\xi$ already has an assigned probability measure P, it is rather easy to compute the probability of certain simple events and, from there, to define elementary concepts for a better understanding of the behavior of generic random variables.
Definition
The cumulative distribution function (CDF) $F_\xi(z)$ is defined as the probability of the event in which the random variable is less than or equal to a certain threshold $z$ in the real numbers; more formally, $F_\xi(z) = P(\xi \le z)$.
- Gaussian $\xi \sim N(\mu, \sigma^2)$
$p_\xi(z) = \phi(z) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-\frac{(z-\mu)^2}{2\sigma^2}}$  (10)
$E[\xi] = \mu$ and $V[\xi] = \sigma^2$
- Uniform $\xi \sim U(a, b)$
$p_\xi(z) = \begin{cases} \frac{1}{b-a} & \text{for } a \le z \le b \\ 0 & \text{otherwise} \end{cases}$  (11)
- Exponential $\xi \sim \mathrm{Exp}(\lambda)$
$p_\xi(z) = \begin{cases} \lambda e^{-\lambda z} & \text{for } 0 \le z \\ 0 & \text{otherwise} \end{cases}$  (12)
Figure: The Gaussian distribution.
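As a quick numerical check, the densities (10)-(12) can be evaluated with scipy.stats; in this sketch the parameter values (mu, sigma, a, b, lam) are arbitrary choices, not values from the text:

```python
import numpy as np
from scipy import stats

z = np.linspace(-3.0, 3.0, 7)
mu, sigma = 0.0, 1.0      # Gaussian parameters (assumed values)
a, b = -1.0, 1.0          # Uniform support (assumed values)
lam = 2.0                 # Exponential rate (assumed value)

print(stats.norm.pdf(z, loc=mu, scale=sigma))      # eq. (10)
print(stats.uniform.pdf(z, loc=a, scale=b - a))    # eq. (11)
print(stats.expon.pdf(z, scale=1.0 / lam))         # eq. (12)
```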
Functions of random variables
Clearly, for a function $g : \mathbb{R} \to \mathbb{R}$ that takes a random variable $\xi$ as its argument, such that $\eta = g(\xi)$, $\eta$ is also a random variable.
Its CDF is given by:
$F_\eta(y) = P(g(\xi) \le y) = \int_{\{z\,:\,g(z) \le y\}} dF_\xi(z) = \int_{\mathbb{R}} I_{[z\,:\,g(z) \le y]}\, dF_\xi(z)$  (13)
From this, the expected value and the variance are, respectively:
$E[g(\xi)] = \int_{\mathbb{R}} g(z)\, dF_\xi(z), \qquad V[g(\xi)] = \int_{\mathbb{R}} \left(g(z) - E[g(\xi)]\right)^2 dF_\xi(z)$  (15)
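A simple way to approximate the integrals in (15) is Monte Carlo sampling; in the sketch below the choices $g(z) = z^2 + 1$ and $\xi \sim N(0, 1)$ are assumptions made for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
xi = rng.standard_normal(100_000)   # samples of xi ~ N(0, 1)
g = lambda z: z**2 + 1.0            # an arbitrary function g (assumption)

Eg = np.mean(g(xi))                 # estimates E[g(xi)] in eq. (15)
Vg = np.var(g(xi))                  # estimates V[g(xi)] in eq. (15)
print(Eg, Vg)                       # exact values are 2 and 2
```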
Random vectors
$\xi(\omega) = [\xi_1(\omega_1), \dots, \xi_N(\omega_N)]^T$  (16)
$F_{\xi_i}(z_i) = F_\xi(\infty, \dots, \infty, z_i, \infty, \dots, \infty)$  (22)
$p_{\xi_j|\xi_i}(z_j|z_i) = \frac{p_\xi(z)}{p_{\xi_i}(z_i)}$  (23)
Remark
The random variables $\xi_1, \dots, \xi_N$ are said to be independent if the distribution $F_\xi$ can be expressed as:
$F_\xi(z_1, \dots, z_N) = \prod_{i=1}^{N} F_{\xi_i}(z_i)$  (24)
$\xi = E[\xi] + V \sqrt{D}\, \eta$  (30)
Stochastic processes and random fields
Definition
$\kappa(x, \omega)$ is said to be a second-order random field if $E[\kappa(x, \omega)^2] < \infty$ for all $x \in D$.
Example
In this example, a realization of a random field is computed, given a correlation function for the field and considering that the random field is Gaussian.
where
$S_\kappa(\omega) = \frac{1}{2\pi} \int_{-\infty}^{+\infty} \rho_\kappa(t)\, e^{-i\omega t}\, dt$  (44)
$\rho_\kappa(t) = \sigma^2\left(e^{-\alpha t} - e^{-\beta t}\right)$  (45)
$V_k \sim N(0, 1)$
$G_\kappa(\omega) = 2 S_\kappa(\omega)$ is the one-sided power spectral density for $\omega$ in the positive reals, and $G_\kappa(\omega) = 0$ elsewhere.
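To make the spectral representation concrete, here is a minimal Python sketch that synthesizes realizations of a stationary Gaussian process from a one-sided PSD $G(\omega)$ using standard normal amplitudes $V_k$, $W_k$; the exponential shape of G and all discretization parameters are assumptions, not taken from the slides:

```python
import numpy as np

# One-sided PSD (an assumed example shape) and its discretization
G = lambda w: np.where(w >= 0.0, np.exp(-w), 0.0)
dw = 0.05
w_k = (np.arange(200) + 0.5) * dw            # midpoint frequencies

t = np.linspace(0.0, 20.0, 500)
rng = np.random.default_rng(2)
V, W = rng.standard_normal(200), rng.standard_normal(200)

# Spectral representation: sum of cosines/sines with N(0,1) random amplitudes,
# so that E[kappa(t)^2] = sum G(w_k) dw ~ integral of G
amp = np.sqrt(G(w_k) * dw)
kappa = (amp * V) @ np.cos(np.outer(w_k, t)) + (amp * W) @ np.sin(np.outer(w_k, t))
```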
Evolutionary stochastic processes
Different techniques have been used to solve this problem; such is the case of [Liu(1972)], who used the instantaneous power spectral density to incorporate both the frequency and the time domain at the same time, in the following way:
$P = \int_0^{t_f} f^2(t)\, dt = \int_0^{t_f} \int_{-\infty}^{+\infty} S(t, \omega)\, d\omega\, dt$  (51)
where $\theta(t)$ is the modulating function, $H_t(t)$ is the Heaviside step function, and the $s_i(t)$ are stationary stochastic processes.
[Hammond(1968)], [Shinozuka(1970)] and [Kameda(1980)] used an evolutionary spectral function in time to calculate ground response, using the Fourier-Stieltjes transform:
$f(t) = \int_{-\infty}^{+\infty} A(t, \omega)\, e^{i\omega t}\, dZ(\omega)$  (53)
$\mu_p = E[\xi^p], \quad p = 1, 2, \dots$  (61)
and the assumption in what follows is that all of these moments exist and are of finite value.
The inner product of two polynomials p and q relative to the measure $dF_\xi$ is then well defined as:
$\langle p, q \rangle_{dF_\xi} = \int_{\mathbb{R}} p(z)\, q(z)\, dF_\xi(z) = E[p(\xi)\, q(\xi)]$  (62)
$\alpha_k = \frac{E[\xi H_k^2]}{E[H_k^2]}, \qquad \beta_k = \frac{E[H_k^2]}{E[H_{k-1}^2]}$  (64)
Figure: Hermite polynomials (left) and Jacobi polynomials (right).
It can be seen that, in the three-term recurrence relation expressed in equation (64), the term $\beta_k$ that defines the series is not defined for $k = 0$. In the case when the weight does not represent the PDF of a random variable, the first term of the recurrence relation will be $E[H_0^2] = \int_{\mathbb{R}} p_\xi(z)\, dz = \beta_0$. It must be noted that in the case of a PDF weight function, $E[H_0^2] = 1$, as was stated before.
Placing the coefficients $\alpha_k$ on the diagonal and $\sqrt{\beta_k}$ on the two side diagonals of a matrix produces what is called the Jacobi matrix of the measure $dF_\xi$:
$J(dF_\xi) = \begin{bmatrix} \alpha_0 & \sqrt{\beta_1} & 0 & \cdots & 0 \\ \sqrt{\beta_1} & \alpha_1 & \sqrt{\beta_2} & & \vdots \\ 0 & \sqrt{\beta_2} & \alpha_2 & \ddots & \\ \vdots & & \ddots & \ddots & \\ 0 & \cdots & & & \end{bmatrix}$  (65)
The Jacobi matrix is clearly tri-diagonal, real, symmetric and of infinite order. With this in mind, the three-term recurrence relation can also be written as:
$\sqrt{\beta_{k+1}}\, H_{k+1}(\xi) = (\xi - \alpha_k)\, H_k(\xi) - \sqrt{\beta_k}\, H_{k-1}(\xi)$  (66)
with $H(\xi) = [H_0(\xi), \dots, H_{n-1}(\xi)]^T$. The eigenvalues of the truncated Jacobi matrix $J_n$ are the zeros of $H_n(\xi)$, and the vectors $H(\xi)$ evaluated at those zeros are the corresponding eigenvectors.
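The connection between the Jacobi matrix (65) and quadrature can be exercised numerically. The sketch below is a standard Golub-Welsch construction, shown for the Gaussian measure where $\alpha_k = 0$ and $\beta_k = k$ (the choice of measure is an assumption for the example, not part of the slides):

```python
import numpy as np

def gauss_hermite_nodes(n):
    """Nodes/weights of n-point Gauss quadrature for N(0,1) via the Jacobi matrix (65).

    For probabilists' Hermite polynomials alpha_k = 0 and beta_k = k, so the
    matrix has zeros on the diagonal and sqrt(k) on the side diagonals.
    """
    k = np.arange(1, n)
    J = np.diag(np.sqrt(k), 1) + np.diag(np.sqrt(k), -1)
    nodes, vecs = np.linalg.eigh(J)
    # Golub-Welsch: weights from the first component of each eigenvector
    weights = vecs[0, :] ** 2          # beta_0 = 1 for a PDF weight
    return nodes, weights

nodes, weights = gauss_hermite_nodes(5)
print(nodes)                            # the zeros of H_5
print(weights @ nodes**2)               # integrates z^2 dF: E[xi^2] = 1
```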
Example
In the following example, well-known orthogonal polynomials will be presented:
- Uniform random variables, i.e. $\xi \sim U(-1, 1)$, which are referred to as Legendre polynomials:
$H_{k+1}(\xi) = \frac{2k+1}{k+1}\, \xi\, H_k(\xi) - \frac{k}{k+1}\, H_{k-1}(\xi)$
$E[H_i H_k] = \frac{1}{2\left(k + \frac{1}{2}\right)}\, \delta_{ik}$  (68)
Example
- Gaussian random variables, i.e. $\xi \sim N(0, 1)$, referred to as Hermite polynomials:
$H_{k+1}(\xi) = \xi\, H_k(\xi) - k\, H_{k-1}(\xi)$
$E[H_i H_k] = k!\, \delta_{ik}$  (69)
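The recurrence (69) and the orthogonality relation $E[H_i H_k] = k!\, \delta_{ik}$ can be verified numerically; a minimal sketch using NumPy's probabilists' Gauss-Hermite rule:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

def He(k, z):
    """Probabilists' Hermite polynomial via the recurrence in eq. (69)."""
    h_prev, h = np.zeros_like(z), np.ones_like(z)   # H_{-1} = 0, H_0 = 1
    for j in range(k):
        h_prev, h = h, z * h - j * h_prev           # H_{j+1} = z H_j - j H_{j-1}
    return h

# Check E[H_i H_k] = k! delta_ik with Gauss-Hermite quadrature
z, w = hermegauss(20)
w = w / np.sqrt(2.0 * np.pi)                        # normalize to the N(0,1) PDF
for i in range(4):
    for k in range(4):
        val = np.sum(w * He(i, z) * He(k, z))
        print(i, k, round(val, 6))                  # ~ k! when i == k, else ~ 0
```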
for $i = 0, \dots, n$ and $j = 0, 1, \dots, i-1, i+1, \dots, n$
It is important to compute the error associated with the use of the interpolating polynomial $L_n f(z)$, the polynomial that interpolates the values $\{y_i\}$ at the nodes $\{z_i\}$, which are related to successive evaluations of the function $f(z)$ at the nodes.
Theorem
Let us assume that the nodes $z_0, \dots, z_n$ are going to be used to interpolate the function $f(z) \in C^{n+1}(I_z)$, where $I_z$ is the smallest interval containing the nodal points $z_0, \dots, z_n$. Then the error at any point z of the domain in which the function $f(z)$ is defined is given by the following expression:
$E_n(z) = f(z) - L_n f(z) = \frac{f^{(n+1)}(\eta)}{(n+1)!}\, \omega_{n+1}(z)$  (74)
with $\eta \in I_z$, and $\omega_{n+1}(z) = \prod_{i=0}^{n} (z - z_i)$ is the nodal polynomial of degree $n + 1$.
Remark
It can be shown that the Lagrange polynomial can be written as:
$L_n f(z) = \sum_{i=0}^{n} \frac{\omega_{n+1}(z)}{(z - z_i)\, \omega'_{n+1}(z_i)}\, y_i$  (75)
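For completeness, here is a small Python sketch of Lagrange interpolation, using the cardinal-basis form equivalent to (75); the equispaced nodes and the test function sin are arbitrary assumptions:

```python
import numpy as np

def lagrange_interp(z_nodes, y, z):
    """Evaluate the Lagrange interpolating polynomial L_n f at the points z."""
    z = np.asarray(z, dtype=float)
    result = np.zeros_like(z)
    for i, (zi, yi) in enumerate(zip(z_nodes, y)):
        # i-th cardinal basis: product over j != i of (z - z_j) / (z_i - z_j)
        li = np.ones_like(z)
        for j, zj in enumerate(z_nodes):
            if j != i:
                li *= (z - zj) / (zi - zj)
        result += yi * li
    return result

nodes = np.linspace(-1.0, 1.0, 5)
f = np.sin
print(lagrange_interp(nodes, f(nodes), np.array([0.3])))  # ~ sin(0.3)
```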
$R_n(f) = 0 \quad \text{if } f \in \mathbb{P}_{2n-1}$  (77)
It is well known that the nodes $\xi_i$ are the zeros of $H_n(\cdot\,; dF_\xi)$.
Gauss-Radau formula
Here, $R_n^R(f) = 0$ for all $f \in \mathbb{P}_{2n}$, and the $\xi_i^R$ are the zeros of $H_n(\cdot\,; dF_R)$, where $dF_R(z) = (z - a)\, dF_\xi(z)$. This is called the Gauss-Radau formula.
Gauss-Lobatto formula
If the support of $dF_\xi$ is contained in the finite interval $[a, b]$, it is more attractive to prescribe 2 nodes, the points a and b. Maximizing the degree of exactness subject to these constraints yields the Gauss-Lobatto formula:
$\int_a^b f(z)\, dF_\xi(z) = \lambda_0^L f(a) + \sum_{i=1}^{n} \lambda_i^L f(\xi_i^L) + \lambda_{n+1}^L f(b) + R_n^{a,b}(f)$  (79)
The latter integral can be evaluated very efficiently using cosine series:
$f(\cos\theta) = \frac{a_0}{2} + \sum_{k=1}^{\infty} a_k \cos(k\theta)$  (82)
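The coefficients $a_k$ in (82) can be approximated by discretizing $a_k = \frac{2}{\pi} \int_0^\pi f(\cos\theta) \cos(k\theta)\, d\theta$; a sketch with an assumed test function f = exp:

```python
import numpy as np

f = np.exp                                    # an assumed smooth test function
theta = np.linspace(0.0, np.pi, 4001)
dtheta = theta[1] - theta[0]

# a_k = (2/pi) * int_0^pi f(cos t) cos(k t) dt, by a simple Riemann sum
a = [2.0 / np.pi * np.sum(f(np.cos(theta)) * np.cos(k * theta)) * dtheta
     for k in range(8)]

# Reconstruct f(cos(theta0)) from the truncated series (82)
theta0 = 0.7
approx = a[0] / 2.0 + sum(a[k] * np.cos(k * theta0) for k in range(1, 8))
print(approx, f(np.cos(theta0)))              # the two values agree closely
```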
Definition
A tensor product space is defined as
$\mathcal{T}_N = \mathrm{span}\{ H_{j(i)} : j(i) \le N \}$  (88)
$\dim(\mathcal{T}_N) = (N+1)^M$  (89)
Definition
A sparse product space is defined as
$\int_{\mathbb{R}^N} p_\xi(z)\, f(z)\, dz = \prod_{i=1}^{N} \int_{\mathbb{R}} p_{\xi_i}(z_i)\, P_i(L(\xi_i))\, dz_i$  (96)
where
$I_N^T g(\xi) = \sum_{k_1=0}^{N_{j_1}} \cdots \sum_{k_N=0}^{N_{j_N}} g\left(\xi_1^{k_1}, \dots, \xi_N^{k_N}\right) L_{k_1}(\xi_1) \cdots L_{k_N}(\xi_N)$  (100)
Hence the sparse interpolation is obtained as a linear combination of tensor product interpolations, where the tensor grid (the set of points where the function is evaluated) is
$\mathcal{H}_N^S = \bigcup_{N-M+1 \le |j| \le N} \mathcal{H}_j^T$  (102)
$\Delta^i = U^i - U^{i-1}$  (107)
$\phi_i(x) = \frac{\omega_i \cos(\omega_i x) + c \sin(\omega_i x)}{\sqrt{\frac{a}{2}\left(\omega_i^2 + c^2\right) + \frac{\sin(2\omega_i a)}{4\omega_i}\left(\omega_i^2 - c^2\right) + c \sin^2(\omega_i a)}}$  (117)
$C D = B D \Lambda$  (123)
where the different matrices are defined as follows:
$B_{ij} = \int_D \phi_i(x)\, \phi_j(x)\, dx$
$C_{ij} = \int_D \int_D C(x, y)\, \phi_i(x)\, \phi_j(y)\, dx\, dy$  (124)
$D_{ij} = d_{ij}$
$\Lambda_{ij} = \delta_{ij}\, \lambda_j$
and
$(C_\xi)_{kl} \equiv E[\xi_k \xi_l] = \int_D \int_D C(x, y)\, h_k(x)\, h_l(y)\, dx\, dy$  (128)
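A discrete version of (123)-(124) can be assembled and solved directly. In this sketch the unit domain, the piecewise-constant basis and the exponential covariance $C(x, y) = e^{-|x-y|/\ell}$ are all assumptions made for illustration:

```python
import numpy as np
from scipy.linalg import eigh

# 1-D domain with n piecewise-constant basis functions (assumed discretization)
n = 100
x = (np.arange(n) + 0.5) / n
h = 1.0 / n

# B_ij = int phi_i phi_j dx  -> diagonal for this basis
B = h * np.eye(n)
# C_ij = int int C(x,y) phi_i(x) phi_j(y) dx dy, C(x,y) = exp(-|x-y|/l) (assumed)
ell = 0.3
C = np.exp(-np.abs(x[:, None] - x[None, :]) / ell) * h * h

# Generalized eigenproblem C d = lambda B d, as in eq. (123)
lam, D = eigh(C, B)
lam, D = lam[::-1], D[:, ::-1]        # sort eigenvalues in decreasing order
print(lam[:5])                        # dominant eigenvalues of the expansion
```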
For all $x \in D$:
Minimize $V[\kappa(x, \omega) - \hat{\kappa}_N(x, \omega)]$  (130)
subject to $E[\kappa(x, \omega) - \hat{\kappa}_N(x, \omega)] = 0$
The condition of problem (130) requires that
$C\, \phi_i = \lambda_i\, \phi_i$  (138)
hence, substituting (138) into (129) and solving the OLE problem yields
$\hat{\kappa}_N(x, \omega) = \mu_\kappa(x) + \sum_{i=1}^{N} \frac{\xi_i(\omega)}{\lambda_i}\, \phi_i^T\, C(x)$  (139)
$L(\kappa)(u) = f \quad \text{in } D$  (141)
$\kappa(x, \omega) = \kappa(x; \xi_a), \qquad f(x, \omega) = f(x; \xi_f)$  (145)
thus $u_N = u_N(x; \xi_a, \xi_f)$.
Stochastic collocation methods
Let $u_h(x, \omega)$ be the semi-discrete solution of
$\int_D \kappa(x, \omega)\, \nabla u_h(x, \omega) \cdot \nabla v_h(x)\, dx = \int_D f(x)\, v_h(x)\, dx \qquad \forall v_h \in V_h$  (146)
Consider a sparse interpolation formula $I_N^S : L^2_p(\mathbb{R}^N) \to S_N$ that uses a sparse grid $\mathcal{H}_N^S$. The stochastic collocation FEM solution is
$u_h^N(x, \xi) = I_N^S\, u_h(x, \xi)$  (148)
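The collocation idea, one deterministic solve per grid point recombined by interpolation or quadrature, can be illustrated with a toy problem. Here the model $-(\kappa(\xi) u')' = 1$ on $(0, 1)$ with $\kappa(\xi) = e^\xi$ and its closed-form solution are assumptions, and a one-dimensional Gauss grid stands in for the sparse grid:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

# Toy problem: -(kappa(xi) u')' = 1 on (0,1), u(0) = u(1) = 0, kappa(xi) = exp(xi),
# xi ~ N(0,1); the exact deterministic solution is u(x; xi) = x(1-x)/(2 kappa).
def solve_deterministic(kappa, x):
    return x * (1.0 - x) / (2.0 * kappa)

nodes, w = hermegauss(10)
w = w / np.sqrt(2.0 * np.pi)              # normalize to the N(0,1) PDF

x = np.linspace(0.0, 1.0, 101)
# Collocation: one deterministic solve per node, then quadrature in xi
solutions = np.array([solve_deterministic(np.exp(xi), x) for xi in nodes])
mean_u = w @ solutions                    # E[u(x, .)] by Gauss quadrature
print(mean_u[50])                         # mean response at x = 0.5
```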
Figure: Mean response as a function of $\eta_1$ and $\eta_2$.
Stochastic Galerkin approximation using double orthogonal
polynomials
Let us consider the diffusion problem, with no reaction for simplicity:
$-\nabla \cdot (\kappa(x, \omega)\, \nabla u) = f \quad \text{in } D \subset \mathbb{R}^d$
$u = 0 \quad \text{on } \partial D$  (164)
[Babuska et al.(2004)Babuska, Tempone & Zouraris] developed an efficient method to solve stochastic finite elements using a stochastic Galerkin approach that incorporates double orthogonal polynomials in the probabilistic domain, to obtain a number of uncoupled systems, each of the size of a deterministic realization of the same problem. Using a p-h finite element version, and without any loss of generality, attention is focused on finding a solution $u_h^p \in V_h \otimes S_p$ such that
$\int_D E\left[\kappa(x, \omega)\, \nabla u_h^p(x, \omega) \cdot \nabla v(x, \omega)\right] dx = \int_D E\left[f(x, \omega)\, v(x, \omega)\right] dx$  (165)
Let $\{H_j(\xi)\}$ be a basis for the subspace $S_p \subset L^2_p(\Omega)$, and $\{\phi_i(x)\}$ be a basis for the subspace $V_h \subset H_0^1(D)$. Then an approximation of the solution can be written as
$u_h^p(x, \omega) = \sum_{j,i} u_{ij}\, H_j(\xi)\, \phi_i(x)$  (166)
where
$G_{il}^0 \equiv \int_D E[\kappa(x, \omega)]\, \nabla\phi_i(x) \cdot \nabla\phi_l(x)\, dx$
and
$G_{il}^p \equiv \int_D b_p(x)\, \nabla\phi_i(x) \cdot \nabla\phi_l(x)\, dx$  (171)
Since $H_k \in S_p$, with a multi-index $p = [p_1, \dots, p_N]$, it is enough to take it as a product:
$H_k(z) = \prod_{r=1}^{N} H_{k_r}(z_r)$  (172)
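Equation (172) is straightforward to implement; here is a sketch building the multivariate Hermite basis as products of the one-dimensional polynomials from (69) (the multi-index bound p and the evaluation point are assumed values):

```python
import numpy as np
from itertools import product

def He(k, z):
    """Probabilists' Hermite polynomial, as in eq. (69)."""
    h_prev, h = 0.0, 1.0
    for j in range(k):
        h_prev, h = h, z * h - j * h_prev
    return h

def H_multi(k, z):
    """Multivariate basis H_k(z) = prod_r He_{k_r}(z_r), as in eq. (172)."""
    return np.prod([He(kr, zr) for kr, zr in zip(k, z)])

# All multi-indices k <= p = [2, 1], evaluated at one point (assumed values)
p = [2, 1]
z = np.array([0.3, -0.5])
for k in product(*(range(pi + 1) for pi in p)):
    print(k, H_multi(k, z))
```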
$E[u^2(t)] = \frac{\pi\, S_{ff}(\omega_n)}{2\, \zeta\, \omega_n^3}$  (194)
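Formula (194) can be checked numerically against the frequency-domain integral $E[u^2] = \int |H(\omega)|^2 S_{ff}(\omega)\, d\omega$ for a white-noise input; the oscillator parameters below are assumed values:

```python
import numpy as np

# SDOF stationary response: |H(w)|^2 = 1 / ((w_n^2 - w^2)^2 + (2 zeta w_n w)^2);
# for white noise S_ff = S0 the integral equals pi S0 / (2 zeta w_n^3), eq. (194).
w_n, zeta, S0 = 2.0 * np.pi, 0.05, 1.0       # assumed parameter values
w = np.linspace(-200.0, 200.0, 400_001)
dw = w[1] - w[0]
H2 = 1.0 / ((w_n**2 - w**2) ** 2 + (2.0 * zeta * w_n * w) ** 2)

print(np.sum(H2) * S0 * dw)                  # numerical frequency-domain integral
print(np.pi * S0 / (2.0 * zeta * w_n**3))    # closed form from eq. (194)
```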