S Chaturvedi
August 6, 2018
Contents
1 Finite dimensional Vector Spaces
1.1 Vector space
1.2 Examples
1.3 Linear combinations, Linear Span
1.4 Linear independence
1.5 Dimension
1.6 Basis
1.7 Representation of a vector in a given basis
1.8 Relation between bases
1.9 Subspace
1.10 Basis for a vector space adapted to its subspace
1.11 Direct Sum and Sum
1.12 Linear Operators
1.13 Null space, Range and Rank of a linear operator
1.14 Invertibility
1.15 Invariant subspace of a linear operator
1.16 Eigenvalues and Eigenvectors of a linear operator
1.17 Representation of a linear operator in a given basis
1.18 Change of basis
1.19 Diagonalizability
1.20 From linear operators to Matrices
1.21 Rank of a matrix
1.22 Eigenvalues and Eigenvectors of a matrix
1.23 Diagonalizability
1.24 Jordan Canonical Form
1.25 Cayley Hamilton Theorem
1.26 Scalar or inner product, Hilbert Space
1.27 Orthogonal complement
1.28 Orthonormal Bases
1.29 Relation between orthonormal bases
1.30 Gram Schmidt Orthogonalization procedure
1.31 Gram Matrix
1.32 Adjoint of a linear operator
1.33 Special kinds of linear operators, their matrices and their properties
1.34 Simultaneous Diagonalizability of Self adjoint operators
1.35 Simultaneous reduction of quadratic forms
1.36 Standard constructions of new vector spaces from old ones
1 Finite dimensional Vector Spaces
1.1 Vector space
A vector space V is a set of mathematical objects, called vectors, written as
x, y, z, u, · · · , equipped with two operations, addition and multiplication by
scalars, such that the following hold:
• For any pair x, y ∈ V , x + y = y + x is also in V [Closure under addition
and commutativity of addition]
• For any x, y, z ∈ V , x + (y + z) = (x + y) + z [Associativity of addition]
• There is a unique zero vector 0 ∈ V such that, for any x ∈ V , x+0 = x
[Additive identity]
• For each x ∈ V there is a vector denoted by −x such that x+(−x) = 0
[Additive inverse]
• For any scalar α and any x ∈ V , αx is also in V [ Closure under scalar
multiplication]
• For any x ∈ V , 0x = 0 and 1x = x
• For any scalar α and any pair x, y ∈ V , α(x + y) = αx + αy. Further,
for any pair of scalars α, β and any x ∈ V , α(βx) = αβx and (α+β)x =
αx + βx
Depending on whether the scalars are real numbers or complex numbers one
speaks of a real or a complex vector space. In general, the scalars may
be drawn from any field, usually denoted by F and in that case we speak
of a vector space over the field F. (A field F is a set equipped with two
composition laws, addition and multiplication, such that F is an abelian group
under addition (with ‘0’ denoting the additive identity element) and F ∗ =
F − {0} is an abelian group with respect to multiplication (with ‘1’ denoting
the multiplicative identity element). Two familiar examples of fields are the
sets of real and complex numbers. Both of these are infinite fields. Another
not so familiar example of an infinite field is the field of rational numbers.
Finite fields also exist, but they come only in sizes $p^n$ where p is a prime
number and n a positive integer.) In what follows we will consider only the real or the complex field.
It is, however, important to appreciate that the choice of the field is an
integral part of the definition of the vector space.
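Though no substitute for a proof, these axioms can be spot-checked numerically. The following sketch (added here for illustration; the sample vectors and scalars are arbitrary choices) verifies them for C3 using numpy:

```python
import numpy as np

# A numerical spot check (not a proof) of the vector space axioms for C^3.
rng = np.random.default_rng(0)
x, y, z = [rng.standard_normal(3) + 1j * rng.standard_normal(3) for _ in range(3)]
alpha, beta = 2.0 + 1.0j, -0.5j

assert np.allclose(x + y, y + x)                            # commutativity
assert np.allclose(x + (y + z), (x + y) + z)                # associativity
assert np.allclose(x + np.zeros(3), x)                      # additive identity
assert np.allclose(x + (-x), np.zeros(3))                   # additive inverse
assert np.allclose(0 * x, np.zeros(3)) and np.allclose(1 * x, x)
assert np.allclose(alpha * (x + y), alpha * x + alpha * y)  # distributivity
assert np.allclose((alpha + beta) * x, alpha * x + beta * x)
assert np.allclose(alpha * (beta * x), (alpha * beta) * x)
print("all axioms hold for these samples")
```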
1.2 Examples
• Mn×m (C): the set of n × m complex matrices.
(The vector space Mn (C) can also be viewed as the set of all linear operators
on Cn . We note that the vector space Cn is of special interest as all finite
dimensional complex vector spaces of dimension d are isomorphic to Cd , as we shall
see later.)
1.5 Dimension
A vector space is said to be of dimension n if there exists a set of n linearly
independent vectors but every set of n + 1 vectors is linearly dependent.
On the other hand, if for every integer n it is possible to find n linearly
independent vectors then the vector space is said to be infinite dimensional.
In what follows we will exclusively deal with finite dimensional vector
spaces.
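As a numerical aside: a set of vectors is linearly independent exactly when the matrix having them as columns has rank equal to the number of vectors. A small sketch with made-up vectors:

```python
import numpy as np

# Linear independence via matrix rank (sample vectors chosen here).
v1 = np.array([1., 0., 2.])
v2 = np.array([0., 1., 1.])
v3 = np.array([1., 1., 3.])          # v3 = v1 + v2: the set is dependent
A = np.column_stack([v1, v2, v3])
print(np.linalg.matrix_rank(A))      # 2, not 3, so these do not span C^3
```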
1.6 Basis
In a finite dimensional vector space V of dimension n, any n linearly independent
vectors x1 , x2 , · · · , xn ∈ V are said to provide a basis for V . In general there
are infinitely many bases for a given vector space.
1.7 Representation of a vector in a given basis

Given a basis e1 , e2 , · · · , en for V , every x ∈ V can be expanded uniquely as
$$x = \sum_{i=1}^{n} x_i e_i \quad \longleftrightarrow \quad \mathbf{x} = \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix}$$
In particular,
$$e_1 \mapsto \mathbf{e}_1 = \begin{pmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{pmatrix}, \; \cdots, \; e_n \mapsto \mathbf{e}_n = \begin{pmatrix} 0 \\ 0 \\ \vdots \\ 1 \end{pmatrix}$$
The column vector x is called the representation of x ∈ V in the basis
e1 , e2 , · · · , en .
1.8 Relation between bases

Let e1 , e2 , · · · , en and e'1 , e'2 , · · · , e'n be two bases for V , related by
$$e'_i = \sum_{j=1}^{n} S_{ji}\, e_j$$
The matrix S must necessarily be invertible: since e'1 , e'2 , · · · , e'n is a basis,
each ei can in turn be written as a linear combination of e'1 , e'2 , · · · , e'n :
$$e_i = \sum_{j=1}^{n} (S^{-1})_{ji}\, e'_j$$
Two bases are thus related to each other through an invertible matrix S;
there are as many bases in a vector space of dimension n as there are n × n
invertible matrices. The set of all n × n invertible real (complex) matrices
forms a group denoted by GL(n, R) (GL(n, C)). Under a change of basis the
components x of a vector x in the basis e1 , e2 , · · · , en are related to the
components x' of x in the basis e'1 , e'2 , · · · , e'n as follows:
$$x'_i = \sum_{j=1}^{n} (S^{-1})_{ij}\, x_j$$
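A short numerical illustration of these transformation rules (the bases below are made-up examples) verifies x' = S −1 x with numpy:

```python
import numpy as np

# Change of basis in R^2.  Columns of E hold the old basis e_i, columns of
# Ep the new basis e'_i; S is defined by e'_i = sum_j S_ji e_j, i.e. Ep = E S.
E  = np.eye(2)                          # old basis: the standard basis
Ep = np.array([[1., 1.],
               [1., -1.]])              # new basis e'_1 = (1,1), e'_2 = (1,-1)
S  = np.linalg.solve(E, Ep)             # S = E^{-1} Ep

x  = np.array([3., 1.])                 # components in the old basis
xp = np.linalg.solve(S, x)              # x' = S^{-1} x, components in the new basis
assert np.allclose(E @ x, Ep @ xp)      # same abstract vector either way
print(xp)                               # [2. 1.]: 3 e_1 + e_2 = 2 e'_1 + 1 e'_2
```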
1.9 Subspace
A subset V1 of V which is a vector space in its own right is called a subspace
of V .
1.12 Linear Operators
A linear operator A on a vector space V is a rule which assigns, to any vector
x, another vector Ax, such that A(αx + βy) = αAx + βAy for any x, y ∈ V
and any scalars α, β.
Linear operators on a vector space V of dimension n themselves form a
vector space of dimension n2 .
1.14 Invertibility
An operator is said to be invertible if its range is the whole of V or, in other
words, its null space is trivial: there is no nonzero vector x ∈ V such that
Ax = 0.
1.17 Representation of a linear operator in a given basis
It is evident that a linear operator on V , owing to linearity, is completely
specified by its action on a chosen basis e1 , e2 , · · · , en for V
$$A e_i = \sum_{j=1}^{n} A_{ji}\, e_j$$
The matrix A of the coefficients Aij is called the representation of the linear
operator in the basis e1 , e2 , · · · , en .
It can further be seen that if the linear operators A and B are represented
by the matrices A and B respectively in a given basis in V , then the
operator AB is represented in the same basis by the matrix product AB.
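As an illustration of reading off the matrix of an operator from its action on a basis, consider the derivative operator on polynomials of degree ≤ 2 in the basis 1, t, t2 (an example chosen here, not taken from the text above):

```python
import numpy as np

# Matrix of the derivative operator D = d/dt on polynomials of degree <= 2,
# in the basis e_1 = 1, e_2 = t, e_3 = t^2.  Column i holds the components
# of D e_i:  D(1) = 0,  D(t) = 1,  D(t^2) = 2t.
D = np.array([[0., 1., 0.],
              [0., 0., 2.],
              [0., 0., 0.]])

# The composition of operators goes over to the matrix product:
# D o D is the second derivative, represented by D @ D.
print(D @ D)                 # sends t^2 -> 2 and kills everything else

p = np.array([5., 3., 4.])   # components of 5 + 3t + 4t^2
print(D @ p)                 # [3. 8. 0.], i.e. 3 + 8t, as expected
```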
1.19 Diagonalizability
A linear operator is said to be diagonalizable if one can find a basis in V
such that it is represented in that basis by a diagonal matrix. If this can
be done then clearly each of the basis vectors must be an eigenvector of A.
This also means that for an operator to be diagonalizable its eigenvectors
must furnish a basis for V , i.e. the n eigenvectors of A must be linearly
independent.
1.20 From linear operators to Matrices
From the discussion above it is evident that for any vector space V of dimen-
sion n, whatever be its nature, after fixing a basis, we can make the following
identifications:
x ∈ V ↔ x ∈ Cn
Linear operator A on V ↔ A ∈ Mn (C)
Rank of A ↔ Rank of A
Invertibility of A ↔ Invertibility of A
Diagonalizability of A ↔ Diagonalizability of A
Eigenvalues and eigenvectors of A ↔ Eigenvalues and eigenvectors of A
In mathematical terms, every finite dimensional vector space of dimension n
is isomorphic to Cn .
1.22 Eigenvalues and Eigenvectors of a matrix

A nonzero vector x ∈ Cn satisfying Ax = λx is called an eigenvector of the
n × n matrix A with eigenvalue λ. Nontrivial solutions exist only for those λ
satisfying the characteristic equation det(A − λI) = 0, a polynomial equation
of degree n which, by the fundamental theorem of algebra, has exactly n roots
in C counted with multiplicity. This theorem
says that the complex field is algebraically closed and is the main reason
behind considering vector spaces over the complex field.
The roots λ1 , · · · , λn of the characteristic equation, the eigenvalues of A, may
all be distinct or some of them may occur several times. An eigenvalue that
occurs more than once is said to be degenerate and the number of times it
occurs is called its (algebraic) multiplicity or degeneracy. Having found the
eigenvalues one proceeds to construct the corresponding eigenvectors. Two
situations may arise:
• An eigenvalue λk is non degenerate. In this case, there is essentially
(i.e. up to multiplication by a scalar) only one eigenvector corresponding
to that eigenvalue.
• An eigenvalue λk is κ-fold degenerate. In this case one may or may
not find κ linearly independent eigenvectors. Further, there is much
greater freedom in choosing the eigenvectors: any linear combination
of the eigenvectors corresponding to a degenerate eigenvalue is also an
eigenvector corresponding to that eigenvalue.
Given the fact that the eigenvectors corresponding to distinct eigenvalues are
always linearly independent, we can make the following statements:
• If the eigenvalues of an n × n matrix A are all distinct then the corre-
sponding eigenvectors, n in number, are linearly independent and hence
form a basis in Cn .
• If this is not so, the n eigenvectors may or may not be linearly indepen-
dent. (Special kinds of matrices for which the existence of n linearly
independent eigenvectors is guaranteed, regardless of any degeneracies
in the spectrum, will be considered later.)
1.23 Diagonalizability
An n × n matrix A is diagonalizable, i.e. there exists an invertible matrix S such that
$$S^{-1} A S = \mathrm{Diag}(\lambda_1, \cdots, \lambda_n),$$
if and only if the eigenvectors x1 , · · · , xn corresponding to the eigenvalues
λ1 , · · · , λn are linearly independent; the matrix S is then simply obtained
by putting the eigenvectors side by side as columns:
$$S = (x_1\; x_2\; \cdots\; x_n)$$
In view of what has been said above, a matrix whose eigenvalues are all
distinct can certainly be diagonalized. When this is not so, i.e. when one or
more eigenvalues are degenerate, we may or may not be able to diagonalize
the matrix, depending on whether or not it has n linearly independent eigenvectors. If
the matrix cannot be diagonalized, what is the best we can do? This leads us
to the Jordan canonical form (of which the diagonal form is a special case).
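A short numerical illustration of both situations, with sample matrices chosen here:

```python
import numpy as np

# Sample matrix with distinct eigenvalues: diagonalizable.
A = np.array([[2., 1.],
              [1., 2.]])
lam, S = np.linalg.eig(A)               # columns of S are eigenvectors
assert np.linalg.matrix_rank(S) == 2    # independent eigenvectors
print(np.linalg.inv(S) @ A @ S)         # ~ Diag(3, 1)

# Sample defective matrix: eigenvalue 1 has algebraic multiplicity 2 but
# only one independent eigenvector, so it cannot be diagonalized.
B = np.array([[1., 1.],
              [0., 1.]])
lamB, SB = np.linalg.eig(B)
print(np.linalg.matrix_rank(SB))        # 1: the two columns are (nearly) parallel
```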
1.24 Jordan Canonical Form

When a matrix cannot be diagonalized, it can still be brought by a similarity
transformation to a block diagonal form, the Jordan canonical form, in which
each block (Jordan block) carries a single eigenvalue repeated along its diagonal,
with 1’s on the superdiagonal and zeros elsewhere. In this form:
• The number of times each eigenvalue occurs along the diagonal equals
its algebraic multiplicity.
• The number of blocks in which each eigenvalue occurs equals its geo-
metric multiplicity.
Needless to say, the diagonal form is a special case of the Jordan form
in which each block is of dimension 1.
Further details concerning the sizes of the blocks and the explicit construction of
S which effects the Jordan form have to be worked out case by case and will
be omitted.
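For exact computations the Jordan form can be obtained with sympy; the matrix below is a made-up example whose single eigenvalue 2 has algebraic multiplicity 3 and geometric multiplicity 2, so its Jordan form has two blocks:

```python
import sympy as sp

# Jordan form in exact arithmetic with sympy.
A = sp.Matrix([[3, 1, -1],
               [0, 2,  0],
               [1, 1,  1]])
P, J = A.jordan_form()                       # A = P J P^{-1}
sp.pprint(J)                                 # eigenvalue 2 thrice on the diagonal,
                                             # in one 2x2 block and one 1x1 block
assert sp.simplify(P * J * P.inv() - A) == sp.zeros(3, 3)
```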
1.25 Cayley Hamilton Theorem

The Cayley–Hamilton theorem states that every matrix satisfies its own
characteristic equation. One important consequence is that any function f (A)
of an n × n matrix A (expandable in powers of A) can be reduced to a
polynomial in A of degree n − 1. For a 3 × 3 matrix A with eigenvalues
λ1 , λ2 , λ3 one writes
$$f(A) = a_2 A^2 + a_1 A + a_0 I$$
and fixes the coefficients by evaluating f on the eigenvalues:
$$f(\lambda_1) = a_2\lambda_1^2 + a_1\lambda_1 + a_0$$
$$f(\lambda_2) = a_2\lambda_2^2 + a_1\lambda_2 + a_0$$
$$f(\lambda_3) = a_2\lambda_3^2 + a_1\lambda_3 + a_0$$
which, when solved for a2 , a1 , a0 , yield the desired result.
What if one of the eigenvalues, λ1 say, is twofold degenerate, i.e. what if the
eigenvalues turn out to be λ1 , λ1 , λ2 ? We then get only two equations for the
three unknowns. It can be shown that in such a situation the third equation
needed to supplement the two equations
$$f(\lambda_1) = a_2\lambda_1^2 + a_1\lambda_1 + a_0$$
$$f(\lambda_2) = a_2\lambda_2^2 + a_1\lambda_2 + a_0$$
is
$$\left.\frac{\partial f(\lambda)}{\partial \lambda}\right|_{\lambda=\lambda_1} = 2a_2\lambda_1 + a_1$$
What if all three eigenvalues are the same, i.e. if the eigenvalues turn
out to be λ1 , λ1 , λ1 ? The three desired equations then would be
$$f(\lambda_1) = a_2\lambda_1^2 + a_1\lambda_1 + a_0$$
$$\left.\frac{\partial f(\lambda)}{\partial \lambda}\right|_{\lambda=\lambda_1} = 2a_2\lambda_1 + a_1$$
$$\left.\frac{\partial^2 f(\lambda)}{\partial \lambda^2}\right|_{\lambda=\lambda_1} = 2a_2$$
One can easily recognise how the pattern outlined above extends to more
general situations.
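As a worked sketch of this procedure (the matrix and the function f are chosen here for illustration), take f (A) = exp(A) for a 3 × 3 matrix with distinct eigenvalues and compare against scipy's matrix exponential:

```python
import numpy as np
from scipy.linalg import expm

# f(A) = exp(A) for a sample 3 x 3 matrix with distinct eigenvalues 1, 2, 3,
# computed as the quadratic a2 A^2 + a1 A + a0 I fixed by the equations above.
A = np.diag([1., 2., 3.]) + np.triu(np.ones((3, 3)), 1)
lam = np.array([1., 2., 3.])

V = np.vander(lam, 3)                   # rows [lam_k^2, lam_k, 1]
a2, a1, a0 = np.linalg.solve(V, np.exp(lam))

F = a2 * A @ A + a1 * A + a0 * np.eye(3)
assert np.allclose(F, expm(A))          # agrees with scipy's matrix exponential
print(F)
```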
1.27 Orthogonal complement
Given a subspace H1 of a Hilbert space H, the set of all vectors orthogonal
to all vectors in H1 forms a subspace, denoted by H1⊥ , called the orthogonal
complement of H1 in H. Further, as the nomenclature suggests, H = H1 ⊕
H1⊥ .
Remember that this holds only when the chosen basis is an orthonormal basis
and not otherwise.
1.30 Gram Schmidt Orthogonalization procedure

Given any basis x1 , x2 , · · · , xn of an inner product space, one can construct
from it an orthonormal basis in a recursive way. The first step consists of
constructing an orthogonal basis y1 , y2 , · · · , yn as follows:
$$y_1 = x_1, \qquad y_i = x_i - \sum_{j=1}^{i-1} \frac{(y_j, x_i)}{(y_j, y_j)}\, y_j, \quad i = 2, \cdots, n$$
The second step consists of normalizing each yi : ei = yi /‖yi ‖ then gives an
orthonormal basis e1 , · · · , en .
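A direct transcription of the procedure into Python (the function name gram_schmidt and the sample vectors are choices made here):

```python
import numpy as np

def gram_schmidt(X):
    """Orthonormalize the columns x_1, ..., x_n of X: first orthogonalize
    recursively as above, then normalize each y_i."""
    Y = np.array(X, dtype=complex)
    for i in range(Y.shape[1]):
        for j in range(i):
            # subtract the component of x_i along y_j: ((y_j, x_i)/(y_j, y_j)) y_j
            Y[:, i] -= (np.vdot(Y[:, j], Y[:, i]) / np.vdot(Y[:, j], Y[:, j])) * Y[:, j]
    return Y / np.linalg.norm(Y, axis=0)

X = np.column_stack([[1., 1., 0.], [1., 0., 1.], [0., 1., 1.]])  # a sample basis
E = gram_schmidt(X)
print(np.round(E.conj().T @ E, 10))     # identity: the columns are orthonormal
```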
1.32 Adjoint of a linear operator

The adjoint A† of a linear operator A on a Hilbert space is defined by the
requirement (x, Ay) = (A† x, y) for all x, y. In an orthonormal basis the
matrix elements of A† are given by
$$(A^\dagger)_{ij} = A^*_{ji}$$
i.e. the matrix for A† is simply the complex conjugate transpose of the matrix
A for A. Remember that this is so only if the basis chosen is an orthonormal
basis and is not so otherwise.
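The basis dependence of this statement can be made concrete numerically. The sketch below (a made-up example) checks that in a non-orthonormal basis with Gram matrix G the representative of A† works out to G−1 AH G, which reduces to the bare conjugate transpose only when G = Id:

```python
import numpy as np

rng = np.random.default_rng(1)
E  = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))  # basis as columns
A0 = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))  # A in an orthonormal basis

A_E    = np.linalg.solve(E, A0 @ E)             # A in the basis E
Adag_E = np.linalg.solve(E, A0.conj().T @ E)    # A-dagger in the basis E
G      = E.conj().T @ E                         # Gram matrix of the basis

print(np.allclose(Adag_E, A_E.conj().T))                          # False in general
print(np.allclose(Adag_E, np.linalg.solve(G, A_E.conj().T @ G)))  # True
```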
• Positive operator : An operator A such that (x, Ax) ≥ 0 for all x ∈ V
is called a positive (or non negative) operator. Such an operator (on a
complex Hilbert space) is necessarily self adjoint and its eigenvalues are
non negative.
For a self adjoint operator A with eigenvalues λ1 , · · · , λn and orthonormal
eigenvectors x1 , · · · , xn , the projectors Pi = xi xi† onto the eigenvectors satisfy
$$P_i P_j = \delta_{ij} P_i, \qquad P_1 + P_2 + \cdots + P_n = \mathrm{Id},$$
and A can be written as
$$A = \lambda_1 P_1 + \lambda_2 P_2 + \cdots + \lambda_n P_n \qquad \text{(Spectral Decomposition)}$$
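A numerical check of the spectral decomposition, for a sample 2 × 2 self adjoint matrix chosen here:

```python
import numpy as np

A = np.array([[2., 1j],
              [-1j, 2.]])
lam, U = np.linalg.eigh(A)         # orthonormal eigenvectors as columns of U
P = [np.outer(U[:, k], U[:, k].conj()) for k in range(2)]

assert np.allclose(P[0] @ P[1], 0)                    # P_i P_j = delta_ij P_i
assert np.allclose(P[0] + P[1], np.eye(2))            # P_1 + P_2 = Id
assert np.allclose(lam[0] * P[0] + lam[1] * P[1], A)  # A = sum_k lambda_k P_k
```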
• Real quadratic form : An expression of the form
$$q(x) = \sum_{i,j=1}^{n} A_{ij}\, x_i x_j,$$
a real homogeneous polynomial of degree 2, is called a real quadratic
form in n variables. A real quadratic form can be compactly expressed
as q(x) = xT Ax where A is a real symmetric matrix. Under a linear
change of variables x → y = S −1 x, A suffers a congruence transfor-
mation : A → A' = S T AS. Given a real symmetric matrix A, can we
always find a matrix S such that S T AS is diagonal, so that the quadratic
expression in the new variables has only squares and no ‘cross terms’ ?
The answer is yes : S can even be chosen to be orthogonal, with
S T AS = Diag(λ1 , · · · , λn ), the λ’s being the eigenvalues of A.
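A quick illustration with a sample quadratic form chosen here: diagonalizing A with an orthogonal matrix removes the cross terms:

```python
import numpy as np

# q(x) = 2 x1^2 + 2 x2^2 + 2 x1 x2 = x^T A x (a sample quadratic form).
A = np.array([[2., 1.],
              [1., 2.]])
lam, S = np.linalg.eigh(A)       # S is orthogonal: S^T S = Id
print(S.T @ A @ S)               # ~ Diag(1, 3): in y = S^T x, q = y1^2 + 3 y2^2
```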
1.34 Simultaneous Diagonalizability of Self adjoint operators
Two self adjoint operators A, B (A† = A, B † = B) can be diagonalized simul-
taneously by a unitary transformation if and only if they commute: [A, B] = 0.
The task of diagonalizing two commuting self adjoint operators essentially
consists in constructing a common eigenbasis which, as we know, can always
be chosen to be an orthonormal basis. The unitary operator which effects the
simultaneous diagonalization is then obtained by putting the elements of
the common basis side by side as usual. If one of the two has a non degenerate
spectrum, then its eigenbasis is automatically an eigenbasis of the other. More work
is needed if neither of the two has a non degenerate spectrum: suitable linear
combinations of the eigenvectors corresponding to a degenerate eigenvalue
have to be constructed so that they are also eigenvectors of the other.
The significance of this result in the context of quantum mechanics arises
in the process of labelling the elements of a basis in the Hilbert space by the
eigenvalues of a commuting set of operators.
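A small numerical illustration, with sample matrices constructed here so that they commute:

```python
import numpy as np

# B = A^2 commutes with A by construction.  A has a non degenerate
# spectrum, so its eigenbasis also diagonalizes B.
A = np.array([[1., 1.],
              [1., 2.]])
B = A @ A
assert np.allclose(A @ B, B @ A)            # [A, B] = 0

lam, U = np.linalg.eigh(A)                  # common orthonormal eigenbasis
print(np.round(U.T @ A @ U, 10))            # diagonal
print(np.round(U.T @ B @ U, 10))            # diagonal as well
```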
1.35 Simultaneous reduction of quadratic forms

A related result: given two real symmetric matrices A and B, with A positive
definite, one can find an invertible (in general not orthogonal) matrix S such that
$$S^T A S = \mathrm{Id}, \qquad S^T B S = \mathrm{Diag}$$
This result finds application in the theory of small oscillations, where one
has to solve equations of motion of the form
$$M\, \frac{d^2 x}{dt^2} = -K x,$$
where M is a real positive matrix and K is a real symmetric matrix.
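Numerically, this is the generalized eigenvalue problem Kv = ω 2 M v, solved by scipy.linalg.eigh(K, M), whose eigenvectors realize exactly the simultaneous reduction above (the masses and couplings below are illustrative values):

```python
import numpy as np
from scipy.linalg import eigh

# Normal modes of M x'' = -K x.
M = np.diag([1., 2.])
K = np.array([[2., -1.],
              [-1., 2.]])
w2, S = eigh(K, M)                   # squared mode frequencies and mode shapes

print(np.sqrt(w2))                   # normal mode frequencies
print(np.round(S.T @ M @ S, 10))     # Id:      S^T M S = Id
print(np.round(S.T @ K @ S, 10))     # Diag(w^2): S^T K S diagonal
```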
• Quotient space : Given a subspace V1 of V , call two vectors in V
equivalent if their difference is an element of V1 . The equivalence classes
themselves form a vector space V /V1 , called the quotient of V by V1 ,
of dimension equal to the difference of the dimensions of V and V1 .
• Dual of a vector space : Given a vector space V , the set of all linear
functionals on V themselves forms a vector space V ∗ of the same dimen-
sion as V . Here by a linear functional on V one means a rule which
assigns a scalar to each element of V respecting linearity.
• Tensor product : Given vector spaces V1 and V2 of dimensions n and m
with bases e1 , · · · , en and f1 , · · · , fm , their tensor product V1 ⊗ V2 is
the nm dimensional vector space spanned by the formal symbols ei ⊗ fj .
(It is assumed that the formal symbol ⊗ satisfies certain ‘common sense’
properties such as (u + v) ⊗ z = u ⊗ z + v ⊗ z; (αu) ⊗ z = α(u ⊗ z) =
u ⊗ (αz) etc.)
Here a few comments are in order:
Elements x of V1 ⊗ V2 can be divided into two categories: product or
separable vectors, i.e. those which can be written as u ⊗ v, u ∈ V1 , v ∈ V2 ,
and non separable or entangled vectors, i.e. those which cannot be
written in this form.
Operators A and B on V1 and V2 may respectively be extended to
operators on V1 ⊗ V2 as A ⊗ I and I ⊗ B.
Operators on V1 ⊗ V2 can be divided into two categories: local operators,
i.e. those which can be written as A ⊗ B, and non local operators, i.e.
those which cannot be written in this way.
If the operators A and B on V1 and V2 are represented by the matrices
A and B in the bases e1 , · · · , en and f1 , · · · , fm then the operator A ⊗
B is represented in the lexicographically ordered basis ei ⊗ fj ; i =
1, · · · , n, j = 1, · · · , m by the matrix A ⊗ B, where
$$A \otimes B = \begin{pmatrix} A_{11} B & \cdots & A_{1n} B \\ \vdots & & \vdots \\ A_{n1} B & \cdots & A_{nn} B \end{pmatrix}$$
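This block pattern is exactly numpy's Kronecker product np.kron; a small check with sample matrices chosen here:

```python
import numpy as np

A  = np.array([[1, 2],
               [3, 4]])
B  = np.array([[0, 1],
               [1, 0]])
I2 = np.eye(2, dtype=int)

print(np.kron(A, B))       # the block matrix displayed above

# Local operators extend as A (x) I and I (x) B; their product is A (x) B:
assert np.array_equal(np.kron(A, I2) @ np.kron(I2, B), np.kron(A, B))
```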