Vector space
Consider a set V with elements called vectors and the set of real numbers $\mathbb{R}$. The elements of $\mathbb{R}$ will be called scalars. A vector will be denoted by a bold-face character like $\mathbf{v}$. Suppose two operations, called vector addition and scalar multiplication respectively, are defined on V, satisfying the usual axioms. In particular:
1. There exists a zero vector $\mathbf{0} \in V$ such that $\mathbf{v} + \mathbf{0} = \mathbf{0} + \mathbf{v} = \mathbf{v}$ for any $\mathbf{v} \in V$.
2. For every $\mathbf{v} \in V$ there exists a vector $-\mathbf{v} \in V$ such that $\mathbf{v} + (-\mathbf{v}) = \mathbf{0} = (-\mathbf{v}) + \mathbf{v}$.
3. For every $r \in \mathbb{R}$ and $\mathbf{v} \in V$, $r\mathbf{v} \in V$.

As an example, consider the set of real-valued functions on a set S. Suppose $f : S \to \mathbb{R}$ and $g : S \to \mathbb{R}$. Then by definition, $(f + g)(s) = f(s) + g(s)$ and $(rf)(s) = r f(s)$ for $s \in S$, so this set is a vector space.
The random variables defined on a probability space $(S, \mathcal{F}, P)$ are functions on the sample space.
Therefore, the set of random variables forms a vector space with respect to the addition of
random variables and scalar multiplication of a random variable by a real number.
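As an illustrative sketch (Python/NumPy, not part of the original notes), random variables on a finite sample space can be stored as arrays of their values, so that addition and scalar multiplication act pointwise:

import numpy as np

# Sample space S = {s1, s2, s3, s4} with four outcomes.
# A random variable is a function on S; on a finite sample space it
# can be stored as the array of its values (X(s1), ..., X(s4)).
X = np.array([1.0, 2.0, 3.0, 4.0])
Y = np.array([0.5, -1.0, 2.0, 0.0])

# Vector addition and scalar multiplication are pointwise, and the
# results are again functions on S, i.e. random variables.
Z = X + Y      # (X + Y)(s) = X(s) + Y(s)
W = 2.5 * X    # (rX)(s) = r * X(s)
print(Z, W)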
Subspace
Suppose W is a non-empty subset of V. W is called a subspace of V if W is a vector space
with respect to the vector addition and scalar multiplication defined on V.
For a non-empty subset W to be a subspace of V, it is sufficient that W is closed under the vector addition and the scalar multiplication of V. Thus the sufficient conditions are:
(1) $\forall \mathbf{v}, \mathbf{w} \in W$, $(\mathbf{v} + \mathbf{w}) \in W$, and
(2) $\forall \mathbf{v} \in W$ and $r \in \mathbb{R}$, $r\mathbf{v} \in W$.
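For instance (a hypothetical example, not from the notes), the subset W of vectors in $\mathbb{R}^3$ whose first component is zero satisfies both conditions; a minimal check in Python:

import numpy as np

def in_W(v):
    # Membership test for W = { v in R^3 : v[0] == 0 }.
    return v[0] == 0.0

v = np.array([0.0, 1.0, -2.0])
w = np.array([0.0, 3.0, 5.0])

# Condition (1): closure under vector addition.
assert in_W(v + w)
# Condition (2): closure under scalar multiplication.
assert in_W(4.2 * v)
print("W is closed under both operations for these samples")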
Linear Independence and Basis
Consider a subset of n vectors B = $\{\mathbf{b}_1, \mathbf{b}_2, \ldots, \mathbf{b}_n\}$ of a vector space V. If

$c_1\mathbf{b}_1 + c_2\mathbf{b}_2 + \cdots + c_n\mathbf{b}_n = \mathbf{0}$

implies that $c_1 = c_2 = \cdots = c_n = 0$, the vectors in B are called linearly independent (LI).

The subset B = $\{\mathbf{b}_1, \mathbf{b}_2, \ldots, \mathbf{b}_n\}$ of n LI vectors is called a basis if each $\mathbf{v} \in V$ can be expressed as a linear combination of elements of B. The number of elements in B is called the dimension of V. Thus B = $\{\mathbf{i}, \mathbf{j}, \mathbf{k}\}$ is a basis of $\mathbb{R}^3$.
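In $\mathbb{R}^n$, linear independence can be verified numerically: the n vectors are LI exactly when the matrix having them as columns has rank n. A small sketch (NumPy, illustrative):

import numpy as np

# Columns of B are the candidate basis vectors b1, b2, b3 of R^3.
B = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])   # here B = {i, j, k}

# b1, b2, b3 are LI iff rank(B) == 3, i.e.
# c1*b1 + c2*b2 + c3*b3 = 0 forces c1 = c2 = c3 = 0.
print(np.linalg.matrix_rank(B) == 3)   # True: {i, j, k} is a basis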
Norm of a vector
Suppose $\mathbf{v}$ is a vector in a vector space V defined over $\mathbb{R}$. The norm of $\mathbf{v}$, denoted by $\|\mathbf{v}\|$, is a real number such that for $\mathbf{v}, \mathbf{w} \in V$ and $r \in \mathbb{R}$:
1. $\|\mathbf{v}\| \ge 0$
2. $\|\mathbf{v}\| = 0$ only when $\mathbf{v} = \mathbf{0}$
3. $\|r\mathbf{v}\| = |r| \, \|\mathbf{v}\|$
4. $\|\mathbf{v} + \mathbf{w}\| \le \|\mathbf{v}\| + \|\mathbf{w}\|$ (triangle inequality)
A vector space V where a norm is defined is called a normed vector space. For example, the following are valid norms of $\mathbf{v} = [v_1\ v_2\ \ldots\ v_n]' \in \mathbb{R}^n$:
(i) $\|\mathbf{v}\| = \sqrt{v_1^2 + v_2^2 + \cdots + v_n^2}$ (the Euclidean norm)
(ii) $\|\mathbf{v}\| = \max(|v_1|, |v_2|, \ldots, |v_n|)$
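Both norms can be evaluated with NumPy (an illustrative sketch):

import numpy as np

v = np.array([3.0, -4.0, 12.0])

# (i) Euclidean norm: sqrt(v1^2 + v2^2 + ... + vn^2)
print(np.linalg.norm(v, 2))        # 13.0
# (ii) Max norm: max(|v1|, |v2|, ..., |vn|)
print(np.linalg.norm(v, np.inf))   # 12.0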
Inner Product
If $\mathbf{v}$ and $\mathbf{w}$ are real vectors in a vector space V defined over $\mathbb{R}$, the inner product $\langle \mathbf{v}, \mathbf{w} \rangle$ is a scalar such that for $\mathbf{v}, \mathbf{w}, \mathbf{z} \in V$ and $r \in \mathbb{R}$:
1. $\langle \mathbf{v}, \mathbf{v} \rangle \ge 0$, with $\langle \mathbf{v}, \mathbf{v} \rangle = 0$ only when $\mathbf{v} = \mathbf{0}$
2. $\langle \mathbf{v}, \mathbf{w} \rangle = \langle \mathbf{w}, \mathbf{v} \rangle$
3. $\langle r\mathbf{v}, \mathbf{w} \rangle = r \langle \mathbf{v}, \mathbf{w} \rangle$
4. $\langle \mathbf{v} + \mathbf{z}, \mathbf{w} \rangle = \langle \mathbf{v}, \mathbf{w} \rangle + \langle \mathbf{z}, \mathbf{w} \rangle$
A vector space V where an inner product is defined is called an inner product space. Following are examples of inner product spaces:
(i) $\mathbb{R}^n$ with $\langle \mathbf{v}, \mathbf{w} \rangle = v_1 w_1 + v_2 w_2 + \cdots + v_n w_n$ for $\mathbf{v} = [v_1\ v_2\ \ldots\ v_n]'$, $\mathbf{w} = [w_1\ w_2\ \ldots\ w_n]'$
(ii) $L_2(\mathbb{R})$ with $\langle f_1, f_2 \rangle = \int_{-\infty}^{\infty} f_1(x) f_2(x)\, dx$ for $f_1, f_2 \in L_2(\mathbb{R})$
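A sketch of both examples in Python; approximating the $L_2$ integral by the trapezoidal rule on a truncated interval is an illustrative shortcut, not part of the original notes:

import numpy as np

# (i) Inner product on R^n: <v, w> = v1*w1 + ... + vn*wn
v = np.array([1.0, 2.0, 3.0])
w = np.array([4.0, -1.0, 0.5])
print(np.dot(v, w))             # 3.5

# (ii) Inner product on L2: <f1, f2> = integral of f1(x)*f2(x) dx,
# approximated here on a finite grid over [-10, 10].
x = np.linspace(-10.0, 10.0, 20001)
f1 = np.exp(-x**2)              # both functions decay fast,
f2 = np.exp(-(x - 1.0)**2)      # so the truncation error is small
print(np.trapz(f1 * f2, x))     # ~ sqrt(pi/2) * exp(-1/2)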
Let us consider the vector $\mathbf{z} = \mathbf{v} + c\mathbf{w}$ for an arbitrary real scalar c. Then

$\|\mathbf{z}\|^2 = \langle \mathbf{v} + c\mathbf{w}, \mathbf{v} + c\mathbf{w} \rangle \ge 0$
$\Rightarrow \langle \mathbf{v}, \mathbf{v} \rangle + 2c \langle \mathbf{v}, \mathbf{w} \rangle + \langle c\mathbf{w}, c\mathbf{w} \rangle \ge 0$
$\Rightarrow \|\mathbf{v}\|^2 + 2c \langle \mathbf{v}, \mathbf{w} \rangle + c^2 \|\mathbf{w}\|^2 \ge 0$

The left-hand side of the last inequality is a quadratic expression in the variable c. For the above quadratic expression to be non-negative for every c, its discriminant must be non-positive. So

$|2 \langle \mathbf{v}, \mathbf{w} \rangle|^2 - 4 \|\mathbf{v}\|^2 \|\mathbf{w}\|^2 \le 0$
$\Rightarrow |\langle \mathbf{v}, \mathbf{w} \rangle|^2 \le \|\mathbf{v}\|^2 \|\mathbf{w}\|^2$
$\Rightarrow |\langle \mathbf{v}, \mathbf{w} \rangle| \le \|\mathbf{v}\| \, \|\mathbf{w}\|$

This is the Cauchy-Schwarz inequality.
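The inequality is easy to check numerically; a small sketch (illustrative) over randomly drawn pairs in $\mathbb{R}^5$:

import numpy as np

rng = np.random.default_rng(0)
for _ in range(1000):
    v = rng.standard_normal(5)
    w = rng.standard_normal(5)
    # |<v, w>| <= ||v|| * ||w|| must hold for every pair.
    assert abs(np.dot(v, w)) <= np.linalg.norm(v) * np.linalg.norm(w) + 1e-12
print("Cauchy-Schwarz held for all sampled pairs")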
Hilbert space
Note that the inner product induces a norm $\|\mathbf{v}\| = \sqrt{\langle \mathbf{v}, \mathbf{v} \rangle}$ that measures the size of a vector. Thus, $\|\mathbf{v} - \mathbf{w}\|$ is a measure of the distance between the vectors $\mathbf{v}, \mathbf{w} \in V$. We can define the convergence of a sequence of vectors in terms of this norm.

Consider a sequence of vectors $\{\mathbf{v}_n,\ n = 1, 2, \ldots\}$. The sequence is said to converge to a vector $\mathbf{v}$ if for every $\epsilon > 0$ there exists an N such that $\|\mathbf{v} - \mathbf{v}_n\| < \epsilon$ for $n \ge N$. The sequence $\{\mathbf{v}_n,\ n = 1, 2, \ldots\}$ is said to be a Cauchy sequence if

$\lim_{n, m \to \infty} \|\mathbf{v}_m - \mathbf{v}_n\| = 0$.
In analysis, we may require that the limit of a Cauchy sequence of vectors is also a member of
the inner product space. Such an inner product space, where every Cauchy sequence of vectors is
convergent is known as a Hilbert space.
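For instance, in $\mathbb{R}^2$ the sequence $\mathbf{v}_n = (1 - 2^{-n})\mathbf{v}$ is Cauchy and converges to $\mathbf{v}$; a minimal sketch (illustrative):

import numpy as np

v = np.array([1.0, 2.0])
vs = [(1.0 - 2.0**(-n)) * v for n in range(1, 31)]

# ||v_m - v_n|| shrinks as m and n grow (Cauchy property) ...
print(np.linalg.norm(vs[20] - vs[10]))   # small
# ... and ||v - v_n|| -> 0 (convergence to v).
print(np.linalg.norm(v - vs[-1]))        # ~ 2e-9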
Orthogonal vectors
Two vectors $\mathbf{v}$ and $\mathbf{w}$ are called orthogonal if

$\langle \mathbf{v}, \mathbf{w} \rangle = 0$

Non-zero orthogonal vectors are linearly independent, and a set of n orthogonal vectors B = $\{\mathbf{b}_1, \mathbf{b}_2, \ldots, \mathbf{b}_n\}$ forms a basis of the n-dimensional vector space.
Orthogonal projection
It is one of the important concepts in linear algebra widely used in random signal processing.
Suppose W is a subspace of an inner product space V. Then the subset
$W^\perp = \{\mathbf{v} \mid \mathbf{v} \in V,\ \langle \mathbf{v}, \mathbf{w} \rangle = 0\ \ \forall\, \mathbf{w} \in W\}$
is called the orthogonal complement of W.
Any vector $\mathbf{v}$ in a Hilbert space V can be expressed as

$\mathbf{v} = \mathbf{w} + \mathbf{w}^\perp$

where $\mathbf{w} \in W$ and $\mathbf{w}^\perp \in W^\perp$. The vector $\mathbf{w}$ is called the orthogonal projection of $\mathbf{v}$ on W, and it is the vector in W closest to $\mathbf{v}$:

$\|\mathbf{v} - \mathbf{w}\| = \min_{\mathbf{u} \in W} \|\mathbf{v} - \mathbf{u}\|$
We omit the proof of this result. The result can be geometrically illustrated as follows:
[Figure: orthogonal projection of $\mathbf{v}$ on the subspace W, showing the decomposition $\mathbf{v} = \mathbf{w} + \mathbf{w}^\perp$.]
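A sketch of the decomposition in $\mathbb{R}^3$, with W taken (hypothetically) as the plane spanned by the first two coordinate axes; the projection is computed by least squares, which realises the minimisation above:

import numpy as np

# W = span{b1, b2}, a two-dimensional subspace (plane) of R^3.
B = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])          # columns b1, b2
v = np.array([3.0, 4.0, 5.0])

# Orthogonal projection of v on W via least squares:
# w minimizes ||v - u|| over u in W.
c, *_ = np.linalg.lstsq(B, v, rcond=None)
w = B @ c                            # projection, w in W
w_perp = v - w                       # component in W-perp

print(w, w_perp)                     # [3 4 0], [0 0 5]
print(np.dot(w_perp, B[:, 0]), np.dot(w_perp, B[:, 1]))  # both 0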
Gram-Schmidt orthogonalisation
Given a set of n LI vectors $\{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n\}$, the Gram-Schmidt procedure constructs an orthogonal set $\{\mathbf{e}_1, \mathbf{e}_2, \ldots, \mathbf{e}_n\}$ spanning the same subspace: take $\mathbf{e}_1 = \mathbf{v}_1$ and, for $k = 2, \ldots, n$, subtract from $\mathbf{v}_k$ its orthogonal projections on the vectors already constructed,

$\mathbf{e}_k = \mathbf{v}_k - \sum_{i=1}^{k-1} \frac{\langle \mathbf{v}_k, \mathbf{e}_i \rangle}{\langle \mathbf{e}_i, \mathbf{e}_i \rangle} \mathbf{e}_i$
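A minimal transcription of the procedure in Python (illustrative):

import numpy as np

def gram_schmidt(vectors):
    """Orthogonalise a list of linearly independent vectors."""
    ortho = []
    for v in vectors:
        e = v.astype(float)
        # Subtract the projection of v on each previously built e_i.
        for u in ortho:
            e = e - (np.dot(v, u) / np.dot(u, u)) * u
        ortho.append(e)
    return ortho

e1, e2 = gram_schmidt([np.array([1.0, 1.0]), np.array([1.0, 0.0])])
print(e1, e2, np.dot(e1, e2))   # inner product ~ 0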
The norm of a random variable X is induced by the inner product: $\|X\|^2 = \langle X, X \rangle = EX^2$. Similarly, two random variables X and Y are orthogonal if $\langle X, Y \rangle = EXY = 0$. We can easily verify that EXY satisfies the axioms of the inner product.
Two random vectors $\mathbf{X} = [X_1\ X_2]'$ and $\mathbf{Y} = [Y_1\ Y_2]'$ are orthogonal if

$E\mathbf{X}'\mathbf{Y} = \sum_{i=1}^{2} EX_iY_i = 0$
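EXY can be estimated by simulation; in the sketch below (illustrative), X is standard normal and Y = X², so that EXY = EX³ = 0 and the two are orthogonal:

import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000
X = rng.standard_normal(n)
Y = X**2                      # EXY = EX^3 = 0 for X ~ N(0, 1)

# Monte Carlo estimate of the inner product <X, Y> = EXY.
print(np.mean(X * Y))         # close to 0: X and Y are orthogonal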
Just like the independent random variables and the uncorrelated random variables, the
orthogonal random variables form an important class of random variables.
If X and Y are uncorrelated, then

$E(X - \mu_X)(Y - \mu_Y) = 0$

where $\mu_X = EX$ and $\mu_Y = EY$. Thus $(X - \mu_X)$ is orthogonal to $(Y - \mu_Y)$.

If each of X and Y is of zero mean, then

$\mathrm{Cov}(X, Y) = EXY$

In this case, $EXY = 0 \Leftrightarrow \mathrm{Cov}(X, Y) = 0$.
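A simulation sketch of this equivalence for zero-mean variables (illustrative):

import numpy as np

rng = np.random.default_rng(7)
n = 1_000_000
X = rng.standard_normal(n)               # zero-mean
Y = rng.standard_normal(n)               # zero-mean, independent of X

exy = np.mean(X * Y)                     # estimate of EXY
cov = np.mean((X - X.mean()) * (Y - Y.mean()))   # estimate of Cov(X, Y)

# For zero-mean variables the two quantities coincide (up to
# sampling error), so EXY = 0 exactly when Cov(X, Y) = 0.
print(exy, cov)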