
Lecture Notes: Mathematical Methods I

S Chaturvedi
August 6, 2018

Contents
1 Finite dimensional Vector Spaces
  1.1 Vector space
  1.2 Examples
  1.3 Linear combinations, Linear Span
  1.4 Linear independence
  1.5 Dimension
  1.6 Basis
  1.7 Representation of a vector in a given basis
  1.8 Relation between bases
  1.9 Subspace
  1.10 Basis for a vector space adapted to its subspace
  1.11 Direct Sum and Sum
  1.12 Linear Operators
  1.13 Null space, Range and Rank of a linear operator
  1.14 Invertibility
  1.15 Invariant subspace of a linear operator
  1.16 Eigenvalues and Eigenvectors of a linear operator
  1.17 Representation of a linear operator in a given basis
  1.18 Change of basis
  1.19 Diagonalizability
  1.20 From linear operators to Matrices
  1.21 Rank of a matrix
  1.22 Eigenvalues and Eigenvectors of a matrix
  1.23 Diagonalizability
  1.24 Jordan Canonical Form
  1.25 Cayley Hamilton Theorem
  1.26 Scalar or inner product, Hilbert Space
  1.27 Orthogonal complement
  1.28 Orthonormal Bases
  1.29 Relation between orthonormal bases
  1.30 Gram Schmidt Orthogonalization procedure
  1.31 Gram Matrix
  1.32 Adjoint of a linear operator
  1.33 Special kinds of linear operators, their matrices and their properties
  1.34 Simultaneous Diagonalizability of Self adjoint operators
  1.35 Simultaneous reduction of quadratic forms
  1.36 Standard constructions of new vector spaces from old ones

1 Finite dimensional Vector Spaces
1.1 Vector space
A vector space V is a set of mathematical objects, called vectors and written as
x, y, z, u, · · · , equipped with two operations, addition and multiplication by
scalars, such that the following hold:
• For any pair x, y ∈ V , x + y = y + x is also in V [Closure under addition
and commutativity of addition]
• For any x, y, z ∈ V , x + (y + z) = (x + y) + z [Associativity of addition]
• There is a unique zero vector 0 ∈ V such that, for any x ∈ V , x+0 = x
[Additive identity]
• For each x ∈ V there is a vector denoted by −x such that x+(−x) = 0
[Additive inverse]
• For any scalar α and any x ∈ V , αx is also in V [ Closure under scalar
multiplication]
• For any x ∈ V , 0 · x = 0 and 1 · x = x
• For any scalar α and any pair x, y ∈ V , α(x + y) = αx + αy. Further,
for any pair of scalars α, β and any x ∈ V , α(βx) = (αβ)x and (α + β)x =
αx + βx
Depending on whether the scalars are real numbers or complex numbers one
speaks of a real or a complex vector space. In general, the scalars may
be drawn from any field, usually denoted by F and in that case we speak
of a vector space over the field F. ( A field F is a set equipped with two
composition laws–addition and multiplication such that F is an abelian group
under addition (with ‘0’ denoting the additive identity element) and F ∗ =
F − {0} is an abelian group with respect to multiplication (with ‘1’ denoting
the multiplicative identity element). Two familiar examples of fields are the
sets of real and complex numbers. Both of these are infinite fields. Another
not so familiar example of an infinite field is the field of rational numbers.
Finite fields also exist but they come only in sizes p^n where p is a prime
number). In what follows we will consider only the real or the complex field.
It is, however, important to appreciate that the choice of the field is an
integral part of the definition of the vector space.

1.2 Examples
• Mn×m (C): the set of n × m complex matrices.

• Mn (C): the set of n × n complex matrices.

• Cn : the set of n-dimensional complex column vectors.

• Pn (t): the set of polynomials {a_0 + a_1 t + a_2 t^2 + · · · + a_{n-1} t^{n-1}} in t of
degree less than n with real or complex coefficients.

( The vector space Mn (C) can also be viewed as the set of all linear operators
on Cn . We note that the vector space Cn is of special interest as all finite
dimensional complex vector spaces of dimension d are isomorphic to C^d , as we
shall see later. )

1.3 Linear combinations, Linear Span


For vectors x1 , x2 , · · · , xn ∈ V and scalars α1 , α2 , · · · , αn , we say that the
vector x = α1 x1 + α2 x2 + · · · + αn xn is a linear combination of x1 , x2 , · · · , xn
with coefficients α1 , α2 , · · · , αn .
The set of all linear combinations of a given set of vectors x1 , x2 , · · · , xn ∈
V is called the linear span of the vectors x1 , x2 , · · · , xn and is itself a vector
space.

1.4 Linear independence


A set of vectors x1 , x2 , · · · , xn ∈ V is said to be linearly independent if
α1 x1 + α2 x2 + · · · + αn xn = 0 ⇒ α1 = α2 = · · · = αn = 0. Otherwise the set
is said to be linearly dependent.

1.5 Dimension
A vector space is said to be of dimension n if there exists a set of n linearly
independent vectors but every set of n + 1 vectors is linearly dependent.
On the other hand, if for every integer n it is possible to find n linearly
independent vectors then the vector space is said to be infinite dimensional.
In what follows we will exclusively deal with finite dimensional vector
spaces.

1.6 Basis
In a finite dimensional vector space V of dimension n, any n linearly indepen-
dent vectors x1 , x2 , · · · , xn ∈ V are said to
provide a basis for V . In general there are infinitely many bases for a given
vector space.

1.7 Representation of a vector in a given basis


Given a basis e1 , e2 , · · · , en ∈ V any x ∈ V can be uniquely written as x =
x1 e1 +x2 e2 +· · ·+xn en . The coefficients x1 , · · · , xn , called the components of
x in the basis e1 , e2 , · · · , en , can be arranged in the form of a column vector
 
$$x \;\mapsto\; \mathbf{x} = \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix}$$

In particular

$$e_1 \;\mapsto\; \mathbf{e}_1 = \begin{pmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{pmatrix}, \quad \cdots, \quad e_n \;\mapsto\; \mathbf{e}_n = \begin{pmatrix} 0 \\ 0 \\ \vdots \\ 1 \end{pmatrix}$$
The column vector x is called the representation of x ∈ V in the basis
e1 , e2 , · · · , en .
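As a concrete numerical illustration, the components of a vector in a non-standard basis can be obtained by solving a linear system. The following is a minimal Python/NumPy sketch; the particular basis and vector are assumed examples, not taken from the notes.

    import numpy as np

    # Basis vectors of C^2 (assumed example), arranged as the columns of E
    e1 = np.array([1.0, 1.0])
    e2 = np.array([1.0, -1.0])
    E = np.column_stack([e1, e2])

    # A vector x whose components in the basis (e1, e2) we want
    x = np.array([3.0, 1.0])

    # x = x1*e1 + x2*e2  <=>  E @ [x1, x2] = x
    components = np.linalg.solve(E, x)
    print(components)        # [2. 1.], i.e. x = 2*e1 + 1*e2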

1.8 Relation between bases


Let e1 , e2 , · · · , en and e′1 , e′2 , · · · , e′n be two bases for a vector space V . Since
e1 , e2 , · · · , en is a basis, each e′i can be written as a linear combination of
e1 , e2 , · · · , en :

$$e'_i = \sum_{j=1}^{n} S_{ji}\, e_j$$

The matrix S must necessarily be invertible, since e′1 , e′2 , · · · , e′n is also a basis
and each ei can in turn be written as a linear combination of e′1 , e′2 , · · · , e′n :

$$e_i = \sum_{j=1}^{n} (S^{-1})_{ji}\, e'_j$$

Two bases are thus related to each other through an invertible matrix S –
there are as many bases in a vector space of dimension n as there are n × n
invertible matrices. The set of all n × n invertible real (complex) matrices
forms a group denoted by GL(n, R) (GL(n, C)). Under a change of basis the
components x of a vector x in the basis e1 , e2 , · · · , en are related to the
components x′ of x in the basis e′1 , e′2 , · · · , e′n as follows:

$$x'_i = \sum_{j=1}^{n} (S^{-1})_{ij}\, x_j$$
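A quick numerical check of these relations, as a sketch with an arbitrarily chosen invertible S (the specific matrices are assumptions, not taken from the notes):

    import numpy as np

    n = 3
    rng = np.random.default_rng(0)

    E = np.eye(n)                        # old basis vectors as columns (standard basis)
    S = rng.normal(size=(n, n))          # a generic (hence invertible) change-of-basis matrix
    E_new = E @ S                        # new basis: e'_i = sum_j S_ji e_j (columns of E S)

    x = rng.normal(size=n)               # components of a vector in the old basis
    x_new = np.linalg.solve(S, x)        # components in the new basis: x' = S^{-1} x

    # Both component sets describe the same vector:
    print(np.allclose(E @ x, E_new @ x_new))   # True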

1.9 Subspace
A subset V1 of V which is a vector space in its own right is called a subspace
of V .

1.10 Basis for a vector space adapted to its subspace


If V1 is a subspace of dimension m of a vector space V of dimension n then a
basis e1 , e2 , · · · , em , em+1 , · · · , en such that the first m vectors e1 , e2 , · · · , em
provide a basis for V1 is called a basis for V adapted to V1 .

1.11 Direct Sum and Sum


A vector space V is said to be a direct sum of its subspaces V1 and V2 ,
V = V1 ⊕ V2 , if every vector x ∈ V can be uniquely written as x = u + v
where u ∈ V1 and v ∈ V2 . The requirement of uniqueness implies that the
two subspaces V1 and V2 of V can have no vectors in common except the zero
vector. This has the consequence that dim V1 + dim V2 = dim V .
Given a subspace V1 of V , there is no unique choice for the subspace V2
such that V = V1 ⊕ V2 – there are infinitely many ways in which this can be
done.
If the requirement of the uniqueness of the decomposition x = u + v is
dropped then one says that V is a sum of V1 and V2 . In this case V1 and
V2 do have common vectors other than the zero vector. The set of common
vectors itself forms a subspace of V of dimension equal to dim V1 + dim V2 − dim V .

1.12 Linear Operators
A linear operator A on a vector space V is a rule which assigns, to any vector
x, another vector Ax, such that A(αx + βy) = αAx + βAy for any x, y ∈ V
and any scalars α, β.
Linear operators on a vector space V of dimension n themselves form a
vector space of dimension n2 .

1.13 Null space, Range and Rank of a linear operator


Given a linear operator A, the set of vectors obtained by applying A to all of
V , written symbolically as AV , forms a subspace of V called the range of A.
The dimension of the range of A is called the rank of A. The set of all vectors
x such that Ax = 0, i.e. the set of all vectors which get mapped to the zero
vector, also forms a subspace of V called the null space of A. Clearly the rank
of A is equal to the dimension of V minus the dimension of the null space.

1.14 Invertibility
An operator is said to be invertible if its range is the whole of V or, in other
words, its null space is trivial – there is no nonzero vector x ∈ V such that
Ax = 0.

1.15 Invariant subspace of a linear operator


A subspace V1 of V is said to be an invariant subspace of a linear operator
A if Ax ∈ V1 whenever x ∈ V1 .

1.16 Eigenvalues and Eigenvectors of a linear operator


A nonzero vector x ∈ V is said to be an eigenvector of A if Ax = λx, and λ
is called the corresponding eigenvalue. Note that if x is an eigenvector of A
corresponding to the eigenvalue λ then so is αx for any nonzero scalar α.

1.17 Representation of a linear operator in a given basis
It is evident that a linear operator on V , owing to linearity, is completely
specified by its action on a chosen basis e1 , e2 , · · · , en for V
$$A e_i = \sum_{j=1}^{n} A_{ji}\, e_j$$

The matrix A of the coefficients Aij is called the representation of the linear
operator in the basis e1 , e2 , · · · , en .
It can further be seen that if the linear operators A and B are represented
by the matrices A and B respectively in a given basis in V , then the
operator AB is represented in the same basis by the matrix product AB.

1.18 Change of basis


Clearly the representation of a linear operator depends on the chosen ba-
sis. If we change the basis the matrix representing the operator will also
change. Let A and A′ be the representations of the linear operator in the
bases e1 , e2 , · · · , en and e′1 , e′2 , · · · , e′n , related to each other as

$$e'_i = \sum_{j=1}^{n} S_{ji}\, e_j$$

Then the representations A and A′ are related to each other as A′ = S^{-1}AS.


Thus under a change of basis the representation of a linear operator undergoes
a ‘similarity’ transformation : A → A0 = S −1 AS.
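A short numerical sketch of the similarity transformation; the matrices used are arbitrary assumed examples, and the check exploits the fact that similar matrices share trace, determinant and eigenvalues:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 3
    A = rng.normal(size=(n, n))          # matrix of the operator in the old basis
    S = rng.normal(size=(n, n))          # change-of-basis matrix (generically invertible)

    A_new = np.linalg.inv(S) @ A @ S     # representation in the new basis: A' = S^-1 A S

    print(np.allclose(np.trace(A), np.trace(A_new)))               # True
    print(np.allclose(np.linalg.det(A), np.linalg.det(A_new)))     # True
    print(np.allclose(np.sort(np.linalg.eigvals(A)),
                      np.sort(np.linalg.eigvals(A_new))))          # True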

1.19 Diagonalizability
A linear operator is said to be diagonalizable if one can find a basis of V
such that it is represented in that basis by a diagonal matrix. If this can
be done then clearly each of the basis vectors must be an eigenvector of A.
This also means that for an operator to be diagonalizable its eigenvectors
must furnish a basis for V , i.e. the n eigenvectors of A must be linearly
independent.

1.20 From linear operators to Matrices
From the discussion above it is evident that for any vector space V of dimen-
sion n, whatever be its nature, after fixing a basis, we can make the following
identifications:
x ∈ V ↔ x ∈ Cn
Linear operator A on V ↔ A ∈ Mn (C)
Rank of A ↔ Rank of A
Invertibility of A ↔ Invertibility of A
Diagonalizability of A ↔ Diagonalizability of A
Eigenvalues and eigenvectors of A ↔ Eigenvalues and eigenvectors of A
In mathematical terms, every finite dimensional vector space of dimension n
is isomorphic to Cn .

1.21 Rank of a matrix


The rank of an n × n matrix A = (x1 , x2 , · · · , xn ), where x1 , x2 , · · · , xn denote
its columns, equals the size of the largest subset of the columns that is linearly
independent. It also equals the order of the largest non-vanishing minor of A.
Alternatively one may compute the number of linearly independent solutions
to Ax = 0. This gives the dimension of the null space of A, and this number
subtracted from n gives the rank of A.
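The rank and the null-space dimension can be checked numerically. A minimal sketch; the matrix below is an assumed example with an obvious linear dependence:

    import numpy as np
    from scipy.linalg import null_space

    A = np.array([[1.0, 2.0, 3.0],
                  [2.0, 4.0, 6.0],      # second row is twice the first
                  [1.0, 0.0, 1.0]])

    rank = np.linalg.matrix_rank(A)
    kernel = null_space(A)              # orthonormal basis of the null space of A

    print(rank)                                   # 2
    print(kernel.shape[1])                        # dimension of the null space: 1
    print(rank + kernel.shape[1] == A.shape[1])   # rank-nullity: True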

1.22 Eigenvalues and Eigenvectors of a matrix


The eigenvalue problem Ax = λx may be rewritten as (A − λI)x = 0. This
set of homogeneous linear equations has a non-trivial solution if and only if
Det(A − λI) equals zero. This yields an nth degree polynomial equation in λ:

$$C(\lambda) = \lambda^n + c_{n-1}\lambda^{n-1} + c_{n-2}\lambda^{n-2} + \cdots + c_0 = 0$$

whose roots give the eigenvalues. The polynomial C(λ) is called the characteristic
polynomial and the equation C(λ) = 0 the characteristic equation of A. It is
here that the role of the field F comes to the fore. In general, there is no
guarantee that an nth degree polynomial with coefficients in F has n roots
also in F. This is, however, true for the field of complex numbers and one

says that the complex field is algebraically closed; this is the main reason
for considering vector spaces over the complex field.
The roots λ1 , · · · , λn of the characteristic equation, i.e. the eigenvalues of A,
may all be distinct or some of them may occur several times. An eigenvalue that
occurs more than once is said to be degenerate and the number of times it
occurs is called its (algebraic) multiplicity or degeneracy. Having found the
eigenvalues one proceeds to construct the corresponding eigenvectors. Two
situations may arise
• An eigenvalue λk is non degenerate. In this case, there is essentially (i.e.
up to multiplication by a scalar) only one eigenvector corresponding
to that eigenvalue.
• An eigenvalue λk is κ-fold degenerate. In this case one may or may
not find κ linearly independent eigenvectors. Further, there is much
greater freedom in choosing the eigenvectors – any linear combination
of the eigenvectors corresponding to a degenerate eigenvalue is also an
eigenvector corresponding to that eigenvalue.
Given the fact that eigenvectors corresponding to distinct eigenvalues are
always linearly independent, we can make the following statements:
• If the eigenvalues of an n × n matrix A are all distinct then the corresponding
eigenvectors, n in number, are linearly independent and hence
form a basis in Cn .
• If this is not so, the n eigenvectors may or may not be linearly independent.
(Special kinds of matrices for which the existence of n linearly
independent eigenvectors is guaranteed, regardless of the degeneracies or
otherwise in the spectrum, will be considered later.)
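A minimal numerical sketch of the eigenvalue problem (the matrix is an assumed example), comparing the roots of the characteristic polynomial with the eigenvalues returned by a standard solver:

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 2.0]])

    # Coefficients of the characteristic polynomial (leading coefficient 1)
    coeffs = np.poly(A)                  # here: [1, -4, 3], i.e. lambda^2 - 4 lambda + 3
    roots = np.roots(coeffs)             # roots of the characteristic equation

    vals, vecs = np.linalg.eig(A)        # eigenvalues and eigenvectors directly
    print(np.sort(roots), np.sort(vals)) # both give [1., 3.]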

1.23 Diagonalizability
An n × n matrix A is diagonalizable, i.e. there exists an invertible matrix S
such that

$$S^{-1} A S = \mathrm{Diag}(\lambda_1, \cdots, \lambda_n),$$

if and only if the eigenvectors x1 , · · · , xn corresponding to the eigenvalues
λ1 , · · · , λn are linearly independent; the matrix S is then simply obtained
by putting the eigenvectors side by side as its columns:

$$S = (\, x_1 \;\; x_2 \;\; \cdots \;\; x_n \,)$$

In view of what has been said above, a matrix whose eigenvalues are all
distinct can certainly be diagonalized. When this is not so, i.e. when one or
more eigenvalues are degenerate, we may or may not be able to diagonalize A,
depending on whether or not it has n linearly independent eigenvectors. If
the matrix cannot be diagonalized, what is the best we can do? This leads us
to the Jordan canonical form (of which the diagonal form is a special case).
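The construction of S from the eigenvectors is easy to check numerically. A sketch with assumed example matrices, including one that fails to be diagonalizable:

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 2.0]])           # symmetric, hence diagonalizable

    vals, vecs = np.linalg.eig(A)
    S = vecs                             # eigenvectors placed side by side as columns
    D = np.linalg.inv(S) @ A @ S
    print(np.allclose(D, np.diag(vals))) # True: S^-1 A S is diagonal

    B = np.array([[2.0, 1.0],
                  [0.0, 2.0]])           # eigenvalue 2 is doubly degenerate but has only
    vals_B, vecs_B = np.linalg.eig(B)    # one independent eigenvector, so B is not
    print(np.linalg.matrix_rank(vecs_B)) # diagonalizable; the eigenvector matrix has rank 1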

1.24 Jordan Canonical Form


Consider a matrix A whose eigenvalues are (λ1 , λ2 , · · · , λn ). Some of the entries
in this list may be the same. Notationally it proves convenient to replace
this list by a shorter list λ̃1 , · · · , λ̃r with all distinct entries and to specify the
(algebraic) multiplicity κi of the entry λ̃i , i = 1, · · · , r. (Thus, for instance,
the list (0.5, 0.5, 1.5, 1.5, 1.5, 0.3) would get abridged to (0.5, 1.5, 0.3) with
κ1 = 2, κ2 = 3, κ3 = 1.) Clearly all the κ's must add up to n :

$$\sum_{i=1}^{r} \kappa_i = n.$$

Now let µi , i = 1, · · · , r denote the number of linearly independent eigenvectors
corresponding to the eigenvalue λ̃i . This number is also referred to as
the geometric multiplicity of λ̃i . It is evident that 1 ≤ µi ≤ κi , i = 1, · · · , r.
Further, the sum $\sum_{i=1}^{r} \mu_i = \ell$ gives the total number of linearly
independent eigenvectors of A. It can be shown that for every matrix A there is
an S such that S^{-1}AS = J where J, the Jordan form, has the block diagonal
structure J = Diag(J1 , J2 , · · · , Jℓ ), where each block Ji , i = 1, · · · , ℓ has
the structure

$$J_i = \begin{pmatrix} \lambda & 1 & & \\ & \lambda & \ddots & \\ & & \ddots & 1 \\ & & & \lambda \end{pmatrix}$$

where λ is one of the eigenvalues of A. Some general statements can be
made at this stage:

• The sizes of the blocks add up to n

• The number of times each eigenvalue occurs along the diagonal equals
its algebraic multiplicity

• The number of blocks in which each eigenvalue occurs equals its geo-
metric multiplicity.

Needless to say, the diagonal form is a special case of the Jordan form
in which each block is of dimension one.
Further details concerning the sizes of the blocks and the explicit construction
of the matrix S which effects the Jordan form have to be worked out case by case
and will be omitted.
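Symbolic packages can compute the Jordan form directly. A minimal sketch using SymPy on an assumed 2 × 2 example whose single eigenvalue 2 has algebraic multiplicity 2 but geometric multiplicity 1:

    import sympy as sp

    A = sp.Matrix([[3, 1],
                   [-1, 1]])             # characteristic polynomial (lambda - 2)^2

    P, J = A.jordan_form()               # A = P J P^{-1}
    sp.pprint(J)                         # a single 2x2 Jordan block with eigenvalue 2
    print(sp.simplify(P * J * P.inv() - A) == sp.zeros(2, 2))   # True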

1.25 Cayley Hamilton Theorem


The Cayley Hamilton theorem states that every matrix satisfies its characteristic
equation. Thus if the characteristic equation of a 3 × 3 matrix A is λ^3 + c_2 λ^2 +
c_1 λ + c_0 = 0, then A satisfies A^3 + c_2 A^2 + c_1 A + c_0 I = 0. This expresses A^3 ,
and hence any higher power of A, as a linear combination of A^2 , A and
I. This result is very useful in the explicit computation of functions f (A) of any
n × n matrix. We illustrate below the procedure for a 3 × 3 matrix.
Recall that if A has eigenvalues λ1 , · · · , λn with corresponding eigenvec-
tors x1 , · · · , xn then f (A) has eigenvalues f (λ1 ), · · · , f (λn ) with x1 , · · · , xn
as eigenvectors.
Now consider a function f (A) of, say, a 3 × 3 matrix A. The Cayley Hamilton
theorem tells us that computing any function of A (which can meaningfully
be expanded in a power series in A) reduces, in this instance, to computing
powers of A up to two:

$$f(A) = a_2 A^2 + a_1 A + a_0 I$$

The only thing that remains is to determine the three coefficients a2 , a1 , a0 , and
to do that we need three equations. If the three eigenvalues are distinct, by
virtue of what was said above one obtains

$$f(\lambda_1) = a_2 \lambda_1^2 + a_1 \lambda_1 + a_0$$
$$f(\lambda_2) = a_2 \lambda_2^2 + a_1 \lambda_2 + a_0$$
$$f(\lambda_3) = a_2 \lambda_3^2 + a_1 \lambda_3 + a_0$$

which, when solved for a2 , a1 , a0 , yield the desired result.
What if one of the eigenvalues, λ1 , is two-fold degenerate, i.e. what if the
eigenvalues turn out to be λ1 , λ1 , λ2 ? We then get only two equations for the
three unknowns. It can be shown that in such a situation the third equation
needed to supplement the two equations

$$f(\lambda_1) = a_2 \lambda_1^2 + a_1 \lambda_1 + a_0$$
$$f(\lambda_2) = a_2 \lambda_2^2 + a_1 \lambda_2 + a_0$$

is

$$\left.\frac{\partial f(\lambda)}{\partial\lambda}\right|_{\lambda=\lambda_1} = 2 a_2 \lambda_1 + a_1$$

What if all three eigenvalues are the same, i.e. if the eigenvalues turn
out to be λ1 , λ1 , λ1 ? The three desired equations then would be

$$f(\lambda_1) = a_2 \lambda_1^2 + a_1 \lambda_1 + a_0$$
$$\left.\frac{\partial f(\lambda)}{\partial\lambda}\right|_{\lambda=\lambda_1} = 2 a_2 \lambda_1 + a_1$$
$$\left.\frac{\partial^2 f(\lambda)}{\partial\lambda^2}\right|_{\lambda=\lambda_1} = 2 a_2$$
One can easily recognise how the pattern outlined above extends to more
general situations.
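A numerical sketch of the distinct-eigenvalue case, computing exp(A) for an assumed 3 × 3 example by solving the three equations above and comparing against a library routine:

    import numpy as np
    from scipy.linalg import expm

    A = np.array([[2.0, 1.0, 0.0],
                  [0.0, 3.0, 1.0],
                  [0.0, 0.0, 5.0]])      # eigenvalues 2, 3, 5 (distinct)

    lam = np.linalg.eigvals(A)
    V = np.vander(lam, 3)                # rows (lam_i^2, lam_i, 1)
    a2, a1, a0 = np.linalg.solve(V, np.exp(lam))   # f(lam_i) = a2 lam_i^2 + a1 lam_i + a0

    fA = a2 * (A @ A) + a1 * A + a0 * np.eye(3)
    print(np.allclose(fA, expm(A)))      # True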

1.26 Scalar or inner product, Hilbert Space


A scalar product is a rule which assigns a scalar, denoted by (x, y), to any
pair of vectors x, y ∈ V such that
• (x, y) = (y, x)∗ (hermitian symmetry)
• (x, αy + βz) = α(x, y) + β(x, z) (linearity in the second argument)
• (x, x) ≥ 0, with equality if and only if x = 0 (Positivity)
Examples:
• Cn : (x, y) = x† y

• Mn (C) : (x, y) = Tr[x† y]


• Pn (t) : (x, y) = \int_a^b dt\, w(t)\, x^*(t)\, y(t), for any fixed w(t) such that w(t) ≥ 0
for t ∈ (a, b)
A vector x is said to be normalized if its norm ||x|| ≡ \sqrt{(x, x)} = 1. If
a vector is not normalized, it can be normalized by dividing it by its norm.
Two vectors x, y ∈ H are said to be orthogonal if (x, y) = 0. A vector space
V equipped with a scalar product is called a Hilbert space H. On a given
vector space one can define a scalar product in infinitely many ways. Hilbert
spaces corresponding to the same vector space with distinct scalar products
are regarded as distinct Hilbert spaces.
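A small sketch of the first two example scalar products; the vectors and matrices are assumed examples:

    import numpy as np

    # Scalar product on C^n: (x, y) = x^dagger y
    x = np.array([1.0 + 1.0j, 2.0])
    y = np.array([0.0, 1.0 - 1.0j])
    print(np.vdot(x, y))                 # np.vdot conjugates its first argument

    # Scalar product on M_n(C): (X, Y) = Tr[X^dagger Y]
    X = np.array([[1.0, 1.0j], [0.0, 2.0]])
    Y = np.array([[1.0, 0.0], [1.0j, 1.0]])
    print(np.trace(X.conj().T @ Y))

    # Positivity: (x, x) >= 0
    print(np.vdot(x, x).real)            # squared norm, here 6.0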

1.27 Orthogonal complement
Given a subspace H1 of a Hilbert space H, the set of all vectors orthogonal
to all vectors in H1 forms a subspace, denoted by H1⊥ , called the orthogonal
complement of H1 in H. Further, as the nomenclature suggests, H = H1 ⊕
H1⊥ .

1.28 Orthonormal Bases


A basis e1 , e2 , · · · , en ∈ H such that (ei , ej ) = δij is said to be an orthonormal
basis in H. If a vector x ∈ H is expressed in terms of the orthonormal basis
e1 , · · · , en as x = x1 e1 + · · · + xn en then its components xi are simply equal to
(ei , x). Similarly if a linear operator is represented in an orthonormal basis
e1 , e2 , · · · , en ∈ H by a matrix A,

$$A e_i = \sum_{j=1}^{n} A_{ji}\, e_j,$$

then the matrix elements Aij are simply given by

Aij = (ei , Aej )

Remember that this holds only when the chosen basis is an orthonormal basis
and not otherwise.

1.29 Relation between orthonormal bases


Two orthonormal bases e1 , e2 , · · · , en and e01 , e02 , · · · , e0n are related to each
other by a unitary matrix :
$$e'_i = \sum_{j=1}^{n} U_{ji}\, e_j, \qquad U^\dagger U = I$$

Thus in an n dimensional Hilbert space there are as many orthonormal bases


as n × n unitary matrices.

1.30 Gram Schmidt Orthogonalization procedure


Given a set of linearly independent vectors x1 , x2 , · · · , xn , the Gram Schmidt
procedure enables one to construct out of it an orthonormal set z1 , z2 , · · · , zn
in a recursive way. The first step consists of constructing an orthogonal basis
y1 , y2 , · · · , yn as follows:

$$y_1 = x_1, \qquad y_i = x_i - \sum_{j=1}^{i-1} \frac{(y_j, x_i)}{(y_j, y_j)}\, y_j, \quad i = 2, \cdots, n$$

The desired orthonormal basis z1 , z2 , · · · , zn is then obtained by normalizing
the vectors of this orthogonal set: zi = yi /||yi ||.
There are infinitely many orthogonalization procedures. However, only the
Gram-Schmidt procedure has the advantage of being sequential – if one more
vector is added to the set, the construction up to the previous step remains
unaffected.
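A direct transcription of the recursion into code; a sketch in which the input vectors are assumed to be linearly independent:

    import numpy as np

    def gram_schmidt(X):
        """Columns of X are linearly independent vectors x_1, ..., x_n.
        Returns a matrix whose columns z_1, ..., z_n are orthonormal."""
        n = X.shape[1]
        Y = np.zeros_like(X, dtype=complex)
        for i in range(n):
            y = X[:, i].astype(complex)
            for j in range(i):
                # subtract the component of x_i along the already constructed y_j
                y = y - (np.vdot(Y[:, j], X[:, i]) / np.vdot(Y[:, j], Y[:, j])) * Y[:, j]
            Y[:, i] = y
        return Y / np.linalg.norm(Y, axis=0)     # normalize each column

    X = np.array([[1.0, 1.0, 0.0],
                  [1.0, 0.0, 1.0],
                  [0.0, 1.0, 1.0]])              # columns: three independent vectors
    Z = gram_schmidt(X)
    print(np.allclose(Z.conj().T @ Z, np.eye(3)))   # True: the z_i are orthonormal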

1.31 Gram Matrix


Given a set of vectors x1 , x2 , · · · , xn , one can associate
with it a matrix G with Gij = (xi , xj ), called the Gram matrix. A necessary
and sufficient condition for x1 , x2 , · · · , xn to be linearly independent is
that the determinant of the Gram matrix be non-zero. (In fact the
determinant of a Gram matrix is always ≥ 0.)
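A quick sketch of the Gram-matrix test; the vectors are assumed examples:

    import numpy as np

    def gram_det(vectors):
        """Determinant of the Gram matrix G_ij = (x_i, x_j) for a list of vectors."""
        X = np.column_stack(vectors)
        G = X.conj().T @ X
        return np.linalg.det(G).real

    independent = [np.array([1.0, 0.0, 1.0]),
                   np.array([0.0, 1.0, 1.0])]
    dependent   = [np.array([1.0, 2.0, 0.0]),
                   np.array([2.0, 4.0, 0.0])]    # second vector is twice the first

    print(gram_det(independent))   # positive (here 3.0): the vectors are independent
    print(gram_det(dependent))     # 0.0: the vectors are dependent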

1.32 Adjoint of a linear operator


An operator, denoted by A† , such that (x, Ay) = (A† x, y) for all pairs x, y ∈
H is called the adjoint of A. Stated in terms of a basis e1 , e2 , · · · , en ∈ H,
this may equivalently be expressed as

$$(e_i, A e_j) = (A^\dagger e_i, e_j) = (e_j, A^\dagger e_i)^*$$

If the basis is chosen to be an orthonormal basis then, recognising that (ei , Aej )
and (ei , A† ej ) are simply the matrix elements Aij and A†ij of the matrices
representing A and A† respectively in the chosen basis, the last equation
translates into

$$(A^\dagger)_{ij} = A^*_{ji}$$

i.e. the matrix for A† is simply the complex conjugate transpose of the matrix
A for A. Remember that this is so only if the basis chosen is an orthonormal
basis and is not so otherwise.

1.33 Special kinds of linear operators, their matrices and their properties
• Self adjoint or Hermitian operator : An operator A for which A† = A,
or in other words (x, Ay) = (Ax, y) for all pairs x, y, is called a self
adjoint operator. Such an operator can be shown to have the following
properties:
– Its eigenvalues are real
– The eigenvectors corresponding to distinct eigenvalues are orthog-
onal
– Its eigenvectors are linearly independent and therefore it can always
be diagonalized, regardless of whether its eigenvalues are distinct or
not. Its eigenvectors can always be chosen to form an orthonormal
basis
– In an orthonormal basis, a self adjoint operator A is represented
by a Hermitian matrix A, A† = A
– A hermitian matrix A can always be diagonalized by a unitary
matrix: U † AU = Diag
• Unitary operator: An operator U such that (Ux, Uy) = (x, y) for all
pairs x, y is called a unitary operator. Such an operator can be shown
to have the following properties:
– Its eigenvalues are of unit modulus
– The eigenvectors corresponding to distinct eigenvalues are orthog-
onal
– Its eigenvectors are linearly independent and therefore it can always
be diagonalized, regardless of whether its eigenvalues are distinct or
not. Its eigenvectors can always be chosen to form an orthonormal
basis
– In an orthonormal basis, a unitary operator U is represented by a
unitary matrix U, U † U = I

• Positive operator : An operator A such that (x, Ax) ≥ 0 for all
x is called a positive (or non negative) operator. Such an operator
can be shown to have the following properties:

– its eigenvalues are ≥ 0


– It is necessarily self adjoint and hence inherits all the properties
of a self adjoint operator.

• Projection operator: A self adjoint operator P such that P^2 = P is
called a projection operator. Such an operator can be shown to have the
following properties:

– its eigenvalues are either 1 or 0


– Being self adjoint, it has all the properties of a self adjoint opera-
tor.
– the number of 1’s in its spectrum gives its rank.
– A projection operator P of rank m fixes an m dimensional sub-
space of H. The operator Id − P is also a projection operator of
rank n − m and fixes the orthogonal complement of the subspace
corresponding to P.
– If P1 , P2 , · · · , Pn denote the projection operators corresponding
to the one dimensional subspaces determined by an orthonormal
basis e1 , e2 , · · · , en ∈ H then

$$P_i P_j = \delta_{ij} P_i, \qquad P_1 + P_2 + \cdots + P_n = Id$$

– If e1 , e2 , · · · , en ∈ H is an eigenbasis of a self adjoint operator
A corresponding to the eigenvalues λ1 , · · · , λn then A may be
resolved as:

$$A = \lambda_1 P_1 + \lambda_2 P_2 + \cdots + \lambda_n P_n \qquad \text{(Spectral Decomposition)}$$

• Real symmetric matrices: These arise in the study of quadratic forms


over the real field. An expression q(x1 , · · · , xn ) of the form
$$q(x_1, \cdots, x_n) = \sum_{i,j=1}^{n} A_{ij}\, x_i x_j, \qquad A_{ij} \in \mathbb{R}, \; x \in \mathbb{R}^n,$$

a real homogeneous polynomial of degree 2, is called a real quadratic
form in n variables. A real quadratic form can be compactly expressed
as q(x) = x^T Ax where A is a real symmetric matrix. Under a linear
change of variables x → y = S^{-1} x, A suffers a congruence transformation:
A → A′ = S^T AS. Given a real symmetric matrix A, can we
always find a matrix S such that S^T AS is diagonal, so that the quadratic
expression in the new variables has only squares and no ‘cross terms’?
The answer is yes :

– Every real symmetric A has real eigenvalues


– Its eigenvectors are real and can always be chosen to form an
orthonormal basis.
– An orthogonal matrix S, S T S = I can always be found such that
S T AS = Diag. The entries along the diagonal are the eigenvalues
of A.
– The matrix S is constructed by putting the eigenvectors of A side
by side as its columns.

• 2n dimensional real symmetric positive matrices A can always be diagonalized
by a congruence transformation through a symplectic matrix:

$$S^T A S = \mathrm{Diag}, \qquad S^T \beta S = \beta, \qquad \beta = \begin{pmatrix} 0 & I \\ -I & 0 \end{pmatrix}$$
The entries along the diagonal are not the eigenvalues of A but rather
what are known as symplectic eigenvalues of A.
Symplectic matrices arise naturally in the context of linear canonical
transformations in the Hamiltonian formulation of classical mechanics
and quantum mechanics. (Linear canonical transformations in classical
mechanics (quantum mechanics) are those linear transformations
which preserve the fundamental Poisson brackets (commutation
relations).)

All the operators (matrices) discussed above are examples of normal operators
(matrices): a normal operator A and its adjoint A† commute with each other,
i.e. [A, A† ] = 0, where [A, B] ≡ AB − BA.
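A numerical sketch of the spectral decomposition of a self adjoint (Hermitian) matrix, using an assumed example:

    import numpy as np

    A = np.array([[2.0, 1.0j],
                  [-1.0j, 3.0]])                 # Hermitian: A equals its conjugate transpose
    print(np.allclose(A, A.conj().T))            # True

    vals, vecs = np.linalg.eigh(A)               # eigh: solver for Hermitian matrices
    print(vals)                                  # real eigenvalues

    # Rank-one projectors onto the eigenvectors and the spectral decomposition
    projs = [np.outer(vecs[:, k], vecs[:, k].conj()) for k in range(2)]
    A_rebuilt = sum(val * P for val, P in zip(vals, projs))
    print(np.allclose(A, A_rebuilt))             # True: A = sum_k lambda_k P_k
    print(np.allclose(sum(projs), np.eye(2)))    # True: completeness P_1 + P_2 = Id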

1.34 Simultaneous Diagonalizability of Self adjoint operators
Two self adjoint operators A, B, A† = A, B † = B, can be diagonalized simultaneously
by a unitary transformation if and only if they commute, [A, B] = 0.
The task of diagonalizing two commuting self adjoint operators essentially
consists in constructing a common eigenbasis which, as we know, can always
be chosen as an orthonormal basis. The unitary operator which effects the
simultaneous diagonalization is then obtained by putting the elements of
the common basis side by side as usual. If one of the two has a non degenerate
spectrum then its eigenbasis is also an eigenbasis of the other. More work
is needed if neither of the two has a non degenerate spectrum: suitable linear
combinations of the eigenvectors corresponding to a degenerate eigenvalue
have to be constructed so that they are also eigenvectors of the other.
The significance of this result in the context of quantum mechanics arises
in the process of labelling the elements of a basis in the Hilbert space by the
eigenvalues of a commuting set of operators.
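A small check with two assumed commuting Hermitian matrices; both happen to have non degenerate spectra, so the eigenbasis of one already diagonalizes the other:

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 2.0]])
    B = np.array([[0.0, 1.0],
                  [1.0, 0.0]])

    print(np.allclose(A @ B, B @ A))             # True: [A, B] = 0

    _, U = np.linalg.eigh(B)                     # orthonormal eigenbasis of B
    DA = U.conj().T @ A @ U
    DB = U.conj().T @ B @ U
    print(np.allclose(DA, np.diag(np.diag(DA)))) # True: U^dagger A U is diagonal
    print(np.allclose(DB, np.diag(np.diag(DB)))) # True: U^dagger B U is diagonal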

1.35 Simultaneous reduction of quadratic forms


If A is a real symmetric strictly positive matrix and B a real symmetric
matrix then there is an S such that

$$S^T A S = Id, \qquad S^T B S = \mathrm{Diag}$$

This result is of considerable relevance in the context of finding the normal
modes of oscillation of coupled harmonic oscillators:

$$M \frac{d^2 x}{dt^2} = -K x,$$

where M is a real symmetric positive matrix and K is a real symmetric matrix.
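Numerically, the simultaneous reduction is exactly the generalized symmetric eigenvalue problem Kx = ω²Mx. A sketch with assumed mass and stiffness matrices:

    import numpy as np
    from scipy.linalg import eigh

    M = np.array([[2.0, 0.0],
                  [0.0, 1.0]])           # mass matrix: real, symmetric, strictly positive
    K = np.array([[3.0, -1.0],
                  [-1.0, 2.0]])          # stiffness matrix: real, symmetric

    w2, S = eigh(K, M)                   # generalized problem K s = w^2 M s

    print(np.allclose(S.T @ M @ S, np.eye(2)))     # True: S^T M S = Id
    print(np.allclose(S.T @ K @ S, np.diag(w2)))   # True: S^T K S = Diag(w^2)
    print(np.sqrt(w2))                             # normal mode frequencies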

1.36 Standard constructions of new vector spaces from old ones
• Quotient spaces : Given a vector space V and a subspace V1 thereof, one
can decompose V into disjoint subsets using the equivalence relation
that two elements of V are equivalent if they differ from each other by
an element of V1 . These subsets, the equivalence classes, themselves form
a vector space V /V1 , called the quotient of V by V1 , of dimension equal
to the difference of the dimensions of V and V1 .

• Dual of a vector space : Given a vector space V , the set of all linear
functionals on V itself forms a vector space V ∗ of the same dimension
as V . Here, by a linear functional on V one means a rule which
assigns a scalar to each element of V respecting linearity.

• Tensor product of vector spaces: Consider two vector spaces V1 and


V2 of dimensions n and m respectively. Let e1 , · · · , en and f1 , · · · , fm
denote bases in V1 and V2 respectively. By introducing a formal
symbol ⊗, we construct a set of nm objects ei ⊗ fj ; i = 1, · · · , n, j =
1, · · · , m, and decree them to be a basis for a new vector space V1 ⊗ V2
of dimension nm : elements x of V1 ⊗ V2 are taken to be all linear
combinations of the ei ⊗ fj ,

$$x = \sum_{i=1}^{n} \sum_{j=1}^{m} \alpha_{ij}\; e_i \otimes f_j$$

(It is assumed that the formal symbol ⊗ satisfies certain ‘common sense’
properties such as (u + v) ⊗ z = u ⊗ z + v ⊗ z; (αu) ⊗ z = α(u ⊗ z) =
u ⊗ (αz) etc. )
Here a few comments are in order:
Elements x of V1 ⊗ V2 can be divided into two categories, product or
separable vectors i.e those which can be written as u⊗v; u ∈ V1 , v ∈ V2
and non separable or entangled vectors i.e. those which can not be
written in this form.
Operators A and B on V1 and V2 may respectively be extended to
operators on V1 ⊗ V2 as A ⊗ I and I ⊗ B.
Operators on V1 ⊗V2 can be divided into two categories : local operators
i.e. those which can be written as A ⊗ B and non local operators i.e.
those which can not be written in this way.
If the operators A and B on V1 and V2 are represented by the matrices
A and B in the bases e1 , · · · , en and f1 , · · · , fm then the operator A ⊗
B is represented, in the lexicographically ordered basis ei ⊗ fj ; i =
1, · · · , n, j = 1, · · · , m, by the matrix A ⊗ B, where

$$A \otimes B = \begin{pmatrix} A_{11} B & \cdots & A_{1n} B \\ \vdots & \ddots & \vdots \\ A_{n1} B & \cdots & A_{nn} B \end{pmatrix}$$

This construction can easily be extended to tensor products of three or
more vector spaces.
Tensor products of vector spaces arise naturally in the description of
composite systems in quantum mechanics. The notion of entanglement
plays a crucial role in quantum information theory.
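The matrix A ⊗ B above is exactly the Kronecker product. A short sketch with assumed matrices, also illustrating that a local operator acts factor by factor on a product vector:

    import numpy as np

    A = np.array([[1.0, 2.0],
                  [3.0, 4.0]])
    B = np.array([[0.0, 1.0],
                  [1.0, 0.0]])

    AB = np.kron(A, B)                     # the block matrix (A_11 B ... ; ... A_nn B)
    print(AB.shape)                        # (4, 4): an operator on the nm-dimensional space

    u = np.array([1.0, 0.0])
    v = np.array([0.0, 1.0])
    uv = np.kron(u, v)                     # a product (separable) vector u x v

    # (A x B)(u x v) = (A u) x (B v)
    print(np.allclose(AB @ uv, np.kron(A @ u, B @ v)))   # True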

