
Lecture Notes By M. G. P. Prasad, IITG

Class Notes for the Lectures on the Topic of Vector Spaces (MA102)

1 Vector Spaces
Definition 1.1. Let G be a non-empty set. If a binary operation ∗ on G satisfies the
following properties:

1. Closure Property: a ∗ b ∈ G for all a and b in G.

2. Associative Property: a ∗ (b ∗ c) = (a ∗ b) ∗ c for all a, b, c ∈ G.

3. Identity Property: There exists an element e ∈ G such that

a∗e=e∗a=a for all a ∈ G .

Here the element e is called the identity element of G with respect to the binary
operation ∗.

4. Inverse Property: For every a ∈ G, there exists an inverse element a−1 ∈ G such that

a−1 ∗ a = a ∗ a−1 = e .

then (G, ∗) is called a group.

Examples of Groups:

• G = Z or Q or R with + (usual addition) is a group.

• G = Q \ {0} or R \ {0} with × (usual multiplication) is a group.

• G = Set of all invertible, square matrices with matrix multiplication is a group.

• Let n be a natural number. In modular arithmetic, Zn = {0, 1, · · · , (n − 1)} is a


group under addition modulo n.

• Let n be a natural number. In modular arithmetic, Zn \ {0} = {1, · · · , (n − 1)} is
a group under multiplication modulo n (called the multiplicative group of integers
modulo n) if and only if n is a prime number; see the check below.
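The last bullet can be checked by brute force for small n. Below is a minimal sketch in Python (the helper is_group and the sample moduli are illustrative, not part of the notes); it tests closure, identity, and inverses, taking associativity of modular arithmetic for granted.

```python
# Sketch: brute-force check of the group axioms for small finite sets.
def is_group(elements, op):
    elements = list(elements)
    # Closure: a * b must stay inside the set.
    if any(op(a, b) not in elements for a in elements for b in elements):
        return False
    # Identity: some e with e * a = a * e = a for all a.
    e = next((x for x in elements
              if all(op(x, a) == a == op(a, x) for a in elements)), None)
    if e is None:
        return False
    # Inverses: every a has some b with a * b = e.
    # (Associativity holds automatically for modular arithmetic.)
    return all(any(op(a, b) == e for b in elements) for a in elements)

print(is_group(range(6), lambda a, b: (a + b) % 6))       # True:  (Z_6, +)
print(is_group(range(1, 6), lambda a, b: (a * b) % 6))    # False: 6 is composite
print(is_group(range(1, 7), lambda a, b: (a * b) % 7))    # True:  7 is prime
```

The second call fails at closure already: 2 × 3 = 6 ≡ 0 (mod 6), which is outside Z6 \ {0}.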

Definition 1.2. A group (G, ∗) is said to be an abelian (or commutative) group if its
binary operation satisfies the

Commutative Property: a ∗ b = b ∗ a for all a, b ∈ G .

Examples of Abelian Groups:


• G = Z or Q or R with + (usual addition) is an abelian group.

• G = Q \ {0} or R \ {0} with × (usual multiplication) is an abelian group.

• Let n be a natural number. In modular arithmetic, Zn = {0, 1, · · · , (n − 1)} is an


abelian group under addition modulo n.

Examples of Non-Abelian Groups:


The set of all invertible, square matrices with matrix multiplication is a non-abelian group.

Definition 1.3. Let F be a non-empty set together with two binary operation + (addition,
say) and ∗ (multiplication, say) such that

1. (F, +) is an abelian group.

2. (F \ {0}, ∗) is an abelian group, where 0 is the identity element with respect to +.

3. Multiplication distributes over addition:

a ∗ (b + c) = a ∗ b + a ∗ c for all a, b, c ∈ F .

Then, (F, +, ∗) is called a field.

In a field F = (F, +, ∗), the identity element with respect to the addition + is denoted
by 0 and the identity element with respect to the multiplication ∗ is denoted by 1.
The binary operation multiplication is normally denoted by · and a · b may be simply
written as ab.

Examples of Fields: With usual addition + and multiplication ·, each of the sets: Q, R
and C forms a field.

Definition 1.4. Let V be a non-empty set and F be a scalar field such that the operations
vector addition and scalar multiplication are defined on them.
We say that V is a vector space (or linear space) over the field F if

1. Vector Addition satisfies: (V, +) is an abelian group.

2. Scalar Multiplication satisfies

(i) 1 v = v for every v ∈ V .


(ii) (α1 α2 ) v = α1 (α2 v) for any α1 , α2 ∈ F and v ∈ V .
(iii) α(v1 + v2 ) = αv1 + αv2 for any α ∈ F and v1 , v2 ∈ V .
(iv) (α1 + α2 )v = α1 v + α2 v for any α1 , α2 ∈ F and v ∈ V .


The elements of V are called the vectors and the elements of F are called the scalars.

When there is no confusion between V and F , we simply write V is a vector space without
mentioning the field F . Whenever it is desirable to specify the field F , we shall say V is a
vector space over the field F .
In the MA102 course, the field F is normally taken as the real field R. That is, if the field
is not mentioned, then one has to take F = R. Sometimes we take the complex field C,
and in that case it will be explicitly mentioned. If a field other than R and C is taken then
it will be explicitly specified.

A vector space V over the real field R is called a real vector space.
A vector space V over the complex field C is called a complex vector space.

Notation: The vector spaces are denoted by the capital letters like U , V , W , V1 , V2 ,
etc. The elements of a vector space V are denoted by the small letters like u, v, w, v1 ,
v2 , etc. The zero vector in V is denoted by 0 or by ~0. The elements of the scalar field F
are denoted by small letters a, b, c, d, c1 , c2 , etc. or by Greek letters like α, β, γ, etc.

Examples of Vector Spaces


Example 1: V = Rn and F = R.
Let x = (x1 , x2 , · · · , xn ) and y = (y1 , y2 , · · · , yn ) be any two vectors in V .
Vector Addition:
x + y = ((x1 + y1 ), (x2 + y2 ), · · · , (xn + yn )) for x, y ∈ V .
Scalar Multiplication:
α x = (αx1 , αx2 , · · · , αxn ) for any x ∈ V and α ∈ F .

Example 2: V = Cn and F = R.
Let z = (z1 , z2 , · · · , zn ) and w = (w1 , w2 , · · · , wn ) be any two vectors in V .
Vector Addition:
z + w = ((z1 + w1 ), (z2 + w2 ), · · · , (zn + wn )) for z, w ∈ V .
Scalar Multiplication:
α z = (αz1 , αz2 , · · · , αzn ) for any z ∈ V and α ∈ F .

Example 3: V = Rn and F = Q.
Let x = (x1 , x2 , · · · , xn ) and y = (y1 , y2 , · · · , yn ) be any two vectors in V .
Vector Addition:
x + y = ((x1 + y1 ), (x2 + y2 ), · · · , (xn + yn )) for x, y ∈ V .


Scalar Multiplication:

α x = (αx1 , αx2 , · · · , αxn ) for any x ∈ V and α ∈ F .

Example 4: V = {X = (x1 , x2 , x3 , · · · ) : xi ∈ R for each i} = Set of all real sequences


and F = R.
Vector Addition:

X + Y = ((x1 + y1 ), (x2 + y2 ), · · · ) for X, Y ∈ V .

Scalar Multiplication:

αX = (αx1 , αx2 , · · · ) for any X ∈ V and α ∈ F

Example 5: Let X be a non-empty set. V = {f | f : X → R} = Set of all real valued


functions defined on the set X and F = R.
Vector Addition:
(f + g)(x) = f (x) + g(x) for f, g ∈ V .
Scalar Multiplication:

(αf )(x) = α f (x) for any f ∈ V and α ∈ F

Example 6: V = Set of all polynomials defined on the set R and F = R.


Vector Addition:
(P1 + P2 )(x) = P1 (x) + P2 (x) for P1 , P2 ∈ V .
Scalar Multiplication:

(αP )(x) = α P (x) for any P ∈ V and α ∈ F

Example 7: V = {A : A is an m × n matrix with entries in F } = Set of all m × n


matrices whose entries are in F where F is a field. It is denoted by Mm,n (F ).
Let A = (aij )m×n and B = (bij )m×n .
Vector Addition:
A + B = (aij + bij )m×n for A, B ∈ V .
Scalar Multiplication:

αA = (α aij )m×n for any A ∈ V and α ∈ F


Example 8: V = {y(x) : y(x) is a solution of y ′′ + ay ′ + by = 0} = Set of all solutions of
the homogeneous differential equation y ′′ + ay ′ + by = 0, and F = R.
Here a and b are fixed real constants.

Example 9: V = {f | f : [a, b] → R, f is continuous} = Set of all functions that are
continuous on the interval [a, b], and F = R.
Vector Addition:
(f + g)(x) = f (x) + g(x) for f, g ∈ V .
Scalar Multiplication:

(αf )(x) = α f (x) for any f ∈ V and α ∈ F

Example 10: Let F = {0, 1}.
Addition Operation ⊕: addition modulo 2.
Multiplication Operation ⊗: multiplication modulo 2.
Then, (F, ⊕, ⊗) forms a field. It is called a finite field, since the number of elements
in F is finite.
Let Ω be a (fixed) non-empty set.
Let V be the power set of Ω = Set of all subsets of Ω.
Vector Addition:

A + B = A∆B = (A \ B) ∪ (B \ A) for A, B ∈ V .

Scalar Multiplication:

αA = A if α = 1 and αA = Ø if α = 0, for any A ∈ V .

Check: V is a vector space over the field F .


Note: Ø is the zero vector. Each vector A is its own additive inverse, since A∆A = Ø.
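Example 10 can also be verified by brute force for a small Ω. A sketch in Python (the choice Ω = {1, 2, 3} is illustrative):

```python
from itertools import combinations

omega = {1, 2, 3}
# V = power set of omega, each subset stored as a frozenset.
V = [frozenset(c) for r in range(len(omega) + 1)
     for c in combinations(omega, r)]

add = lambda A, B: A ^ B                                  # symmetric difference
smul = lambda alpha, A: A if alpha == 1 else frozenset()  # scalars in {0, 1}
zero = frozenset()                                        # the zero vector

assert all(add(A, B) in V for A in V for B in V)          # closure
assert all(add(A, zero) == A for A in V)                  # identity element
assert all(add(A, A) == zero for A in V)                  # A is its own inverse
assert all(smul(a, add(A, B)) == add(smul(a, A), smul(a, B))
           for a in (0, 1) for A in V for B in V)         # distributivity
print("All checks passed")
```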

Theorem 1.1. In any vector space V over the field F , the following holds:

1. 0 u = 0 for any u ∈ V .

2. α 0 = 0 for any α ∈ F .

3. (−1) u = −u for any u ∈ V .

4. If cu = 0 then c = 0 or u = 0.


Proof.
Let 0 denote the scalar zero in the field F . Let 0 denote the vector zero in the vector space
V.
Proof of 1:
To show: 0 u = 0 for any u ∈ V .

0 · u = (0 + 0) · u = 0 · u + 0 · u .
Now,

0 = (0 · u) + (−(0 · u)) = ((0 · u) + (0 · u)) + (−(0 · u)) = (0 · u) + ((0 · u) + (−(0 · u))) = (0 · u) + 0 = (0 · u) .

Thus,
(0 · u) = 0 .

Proof of 2:
To show: α 0 = 0 for any scalar α ∈ F .

α · 0 = α · (0 + 0) = α · 0 + α · 0 .
Now,

0 = (α · 0) + (−(α · 0)) = (α · 0 + α · 0) + (−(α · 0)) = (α · 0) + ((α · 0) + (−(α · 0)))

= (α · 0) + 0 = (α · 0)
Thus,
(α · 0) = 0 .

Proof of 3:
To show: (−1) · u = −u for any vector u ∈ V .

0 = 0 · u = (1 − 1) · u = 1 · u + ((−1) · u) = u + ((−1) · u)
Add −u to both sides of the above identity to get

−u + 0 = (−u) + u + ((−1) · u) = 0 + ((−1) · u) = (−1) · u .

Proof of 4:
Given that c · u = 0. To show: Either c = 0 or u = 0.
If c = 0, then we are done with the proof.
Suppose c ≠ 0. Then, we have to show u = 0.


Since c ≠ 0, its inverse element with respect to multiplication is well defined and is given
by (1/c). Now,
u = 1 · u = ((1/c) c) · u = (1/c)(c · u) = (1/c) 0 = 0 .
This completes the proof.

2 Subspaces
Definition 2.1. Let V be a vector space over the field F . A subspace of V is a subset W
of V such that W is itself a vector space over the (same) field F with the (same) operations
of vector addition and scalar multiplication on V . It is denoted by W 4 V .
Examples: The subset W = R is a subspace of the vector space V = R3 .
In any vector space V , V is a subspace of V itself. The zero subspace W = {0} is a
subspace of V . These two subspaces V and {0} are called trivial subspaces of the vector
space V .
Theorem 2.1. Let V be a vector space over the field F . A non-empty subset W of V is a
subspace of V if and only if
• u + w ∈ W for any two vectors u and w in W .
• α w ∈ W for any vector w ∈ W and for any scalar α ∈ F .
Proof.
Proof of =⇒ :
Given that W is a subspace of the vector space V .
To show: u + w ∈ W for each u, w ∈ W and αw ∈ W for each w ∈ W and for each α ∈ F .
Since W is a vector space with the (same) operations of vector addition and scalar multi-
plication of V over the (same) field F , it follows that if u ∈ W , w ∈ W and α ∈ F then
u + w ∈ W and αw ∈ W .

Proof of ⇐= :
Given that u + w ∈ W for u, w in W and αw ∈ W for each w ∈ W and for each α ∈ F .
To show: W is a subspace of V .
Since W ≠ Ø, there exists w ∈ W . Since w ∈ W , (−1)w ∈ W and (−1)w + w = 0 ∈ W
by the given condition.
If w ∈ W and α ∈ F , then αw ∈ W by the given condition.
The other conditions in the definition of a vector space are satisfied by W , since they hold
true in V . Thus, W is a subspace of V .

Theorem 2.2. Let V be a vector space over the field F . A non-empty subset W of V is
a subspace of V if and only if αu + w ∈ W for any two vectors u and w in W and for any
scalar α ∈ F .


Proof.
Proof of =⇒ :
Given that W is a subspace of the vector space V .
To show: αu + w ∈ W for each u, w ∈ W and α ∈ F .
Since W is a subspace, it follows that αu ∈ W and hence αu + w ∈ W .

Proof of ⇐= :
Given that αu + w ∈ W for u, w in W and for α ∈ F .
To show: W is a subspace of V .
Since W ≠ Ø, there exists w ∈ W . Since w ∈ W , (−1)w + w = 0 ∈ W by the given condition.
If w ∈ W and α ∈ F , then αw + 0 = αw ∈ W .
In particular, −w ∈ W .
If w1 and w2 are in W then w1 + w2 = 1 w1 + w2 ∈ W . The other conditions in the
definition of a vector space are satisfied by W , since they hold true in V . Thus, W is a
subspace of V .
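Theorem 2.2's single condition is convenient for quick numerical sanity checks. A sketch (the membership tests and samplers below are illustrative assumptions; a randomized test can disprove closure but never prove it):

```python
import numpy as np

rng = np.random.default_rng(0)

def seems_closed(member, sample, trials=1000):
    """Randomized test of the criterion: does alpha*u + w stay inside W?

    A single failure disproves closure; passing all trials only suggests
    that W may be a subspace (it is not a proof).
    """
    for _ in range(trials):
        u, w, alpha = sample(), sample(), rng.normal()
        if not member(alpha * u + w):
            return False
    return True

# W1 = plane x + y + z = 0 in R^3: a genuine subspace.
def sample_plane():
    t = rng.normal(size=3)
    return t - t.mean()                     # components sum to zero

print(seems_closed(lambda v: abs(v.sum()) < 1e-9, sample_plane))     # True

# W2 = unit sphere in R^3: not a subspace, fails almost immediately.
def sample_sphere():
    t = rng.normal(size=3)
    return t / np.linalg.norm(t)

print(seems_closed(lambda v: abs(v @ v - 1) < 1e-9, sample_sphere))  # False
```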

Example 1: Let V denote the space of all functions from the field F to F and P denote the
space of all polynomials over the field F . Then, P is a subspace of the vector space V.
Example 2: Let P denote the space of all polynomials over the real field. Let n ∈ N. Let
Pn denote the subset of P consisting of all polynomials of degree at most n. Then, Pn is a
subspace of the vector space P.
Example 3: Let Mn,n (F ) denote the space of all n × n matrices over the field F . Let
Sn,n (F ) denote the subset of Mn,n (F ) consisting of all symmetric matrices of size n × n.
Then, Sn,n (F ) is a subspace of the vector space Mn,n (F ).
Example 4: Let Mn,n (C) denote the space of all n × n complex matrices over the complex
field C. Let Hn,n (C) denote the subset of Mn,n (C) consisting of all Hermitian (or self-adjoint)
matrices of size n × n. Observe that if A is a Hermitian matrix then the diagonal entries
of A are real. For example, take A = I (which is Hermitian) and α = i ∈ C; then i A does
not belong to Hn,n (C), because its diagonal entries i are not real. Therefore, Hn,n (C) is
NOT a subspace of Mn,n (C) over the complex field C. However, it is worth noting that
Hn,n (R) is a subspace of Mn,n (R) over the real field R.

Example 5: Let W = {(x, y, z) ∈ R3 : y = 1 + x} be a subset of the vector space R3 .


Then, W is NOT a subspace of R3 because (0, 0, 0) ∉ W .

Example 6: Let W = {(x, y, z) ∈ R3 : y = x2 } be a subset of the vector space R3 . If


u = (1, 1, 1) ∈ W and v = (2, 4, 1) ∈ W then u + v = (3, 5, 1) ∉ W . Therefore, W is NOT
a subspace of R3 .


Example 7: An n × n square matrix A is called idempotent if A2 = A.


Let W = {A ∈ M2,2 (R) : A2 = A} be a subset of the space M2,2 (R) of all 2 × 2 real
matrices over the real field. Observe that the zero matrix 02×2 and the identity matrix
I2×2 are in W and hence W is non-empty.
Observe that

A = [1 1; 0 0] ∈ W and B = [1 0; 1 0] ∈ W, but A + B = [2 1; 1 0] ∉ W .

Therefore, W is NOT a subspace of M2,2 (R).
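A quick numpy check of the matrices above:

```python
import numpy as np

# Counterexample from Example 7: A and B are idempotent, A + B is not.
A = np.array([[1, 1],
              [0, 0]])
B = np.array([[1, 0],
              [1, 0]])

print(np.array_equal(A @ A, A))     # True:  A^2 = A
print(np.array_equal(B @ B, B))     # True:  B^2 = B
C = A + B
print(np.array_equal(C @ C, C))     # False: (A+B)^2 != A+B
```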
Example 8: Let Mn,1 (R) denote the space of all n × 1 real matrices over the real field R.
Let A be an m × n real matrix. Define

W = {x ∈ Mn,1 (R) : Ax = 0} .

Then W is a subspace of Mn,1 (R) over the real field R. That is, the set of all solutions of
a system of homogeneous linear equations is a vector subspace.
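A numerical illustration of Example 8, assuming SciPy is available (the matrix A below is an arbitrary choice): any combination αu + w of solutions of Ax = 0 is again a solution.

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])        # 2 x 3, so a non-trivial null space
N = null_space(A)                      # columns form a basis of {x : Ax = 0}

rng = np.random.default_rng(3)
u = N @ rng.normal(size=N.shape[1])    # two random vectors in the null space
w = N @ rng.normal(size=N.shape[1])
alpha = 2.5
print(np.allclose(A @ (alpha * u + w), 0))   # True: closed under alpha*u + w
```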
Example 9: Let S be a non-empty subset of R. Let

V = {f : f is a function from the set S to R }

denote the space of all functions from the set S to R over the real field. It is denoted by
RS .
Let (a, b) ⊆ R. Let

C((a, b), R) = {f : (a, b) → R : f is continuous on (a, b) }

. Then C((a, b), R) is a subspace of the vector space R(a,b) of all functions from the interval
(a, b) to R.
Let
C 2 ((a, b), R) = {f : (a, b) → R : f 00 exists and continuous on (a, b) } .
Then C 2 ((a, b), R) (space of twice continuously differentiable functions on (a, b)) is a sub-
space of the vector space of C((a, b), R and also a subspace of R(a,b) .
Example 10: Let
V = {f : R → R : f 00 exists on R }
denote the vector space of twice differentiable functions over the real field. Let a and b two
real constants. Then,

W = {f ∈ V : f is a solution to the ODE: y 00 + ay 0 + b = 0 }

is the set of all solutions to the homogeneous ordinary differential equation y 00 + ay 0 + b = 0


is a subspace of the vector space V over the real field.


Theorem 2.3. Let V be a vector space over the field F . The intersection of any collection
of subspaces of V is a subspace of V .
Proof. Let {Wα : α ∈ J} be a collection of subspaces of V . Let W = ∩α∈J Wα .
To show: W is a subspace of V .
Since 0 ∈ Wα for each α ∈ J, we have 0 ∈ ∩α∈J Wα = W . Therefore W ≠ Ø.
Let w1 , w2 be in W and let c ∈ F . To show: cw1 + w2 ∈ W .
Since each Wα is a subspace of V , the vector cw1 + w2 ∈ Wα . Thus cw1 + w2 ∈ Wα for
each α ∈ J, and hence cw1 + w2 ∈ ∩α∈J Wα = W .
Therefore, W is a subspace of V .

Note: It is worth noting that ∩{Wα : α ∈ J} is the largest subspace of V contained in
each Wα .

Important Note: Union of two subspaces need not be a subspace


Let W1 = {(x, 0) ∈ R2 : x ∈ R} and W2 = {(x, x) ∈ R2 : x ∈ R} be two subspaces
of the vector space R2 . Then W1 ∪ W2 is not a subspace of R2 , because (1, 0) ∈ W1 and
(2, 2) ∈ W2 , but (1, 0) + (2, 2) = (3, 2) ∉ W1 ∪ W2 .

Theorem 2.4. Let V be a vector space over the field F . Let W1 and W2 be subspaces of
V . Then, the set-theoretic union W1 ∪ W2 is a subspace of V if and only if W1 ⊆ W2 or
W2 ⊆ W1 . That is, one of the subspaces is contained in the other.

Proof.
Proof of =⇒:
Given that W1 ∪ W2 is a subspace of the vector space V . To show: W1 ⊆ W2 or W2 ⊆ W1 .
Suppose, to the contrary, that neither W1 ⊆ W2 nor W2 ⊆ W1 holds. We will arrive at a
contradiction.
Since W1 ⊄ W2 , there exists x ∈ W1 \ W2 . That is, x is in W1 but not in W2 .
Since W2 ⊄ W1 , there exists y ∈ W2 \ W1 . That is, y is in W2 but not in W1 .
Since W1 ∪ W2 is a subspace, we have x + y ∈ W1 ∪ W2 .
If x + y ∈ W1 , then write y = (x + y) + (−x). Since both (x + y) and −x are elements of
W1 , their sum (x + y) + (−x) = y is an element of W1 , which contradicts the fact that
y ∉ W1 .
If x + y ∈ W2 , then, since −y ∈ W2 , the sum (x + y) + (−y) = x is an element of W2 ,
which contradicts the fact that x ∉ W2 . Therefore, it follows that W1 ⊆ W2 or W2 ⊆ W1
holds.


Proof of ⇐=:
Given that W1 and W2 are subspaces of the vector space V . Also given that W1 ⊆ W2 or
W2 ⊆ W1 . To show: W1 ∪ W2 is a subspace of V .
If W1 ⊆ W2 then W1 ∪ W2 = W2 which is a subspace of V .
If W2 ⊆ W1 then W1 ∪ W2 = W1 which is a subspace of V .
This completes the proof.

2.1 Span of a set S


Definition 2.2. Let V be a vector space over the field of F . Let S = {v1 , v2 , · · · , vk } be
a finite subset of V . The set of all linear combinations of v1 , v2 , . . ., vk is called the span
of v1 , v2 , . . ., vk or (span of S) and it is denoted by span (S) or hSi or L(S).
If the vector space V equals span (S), then S is called a spanning set for V and V is
said to be spanned by S.
Example: The vectors v1 = (1, 4) and v2 = (−1, 2) span the space R2 .
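To verify that v1 = (1, 4) and v2 = (−1, 2) span R2 , note that writing any b ∈ R2 as c1 v1 + c2 v2 means solving a 2 × 2 linear system; since the matrix with columns v1 , v2 has non-zero determinant, a solution always exists. A numpy sketch (the target b is arbitrary):

```python
import numpy as np

# Columns are v1 = (1, 4) and v2 = (-1, 2).
M = np.array([[1, -1],
              [4,  2]])
print(np.linalg.det(M))          # 6.0 (non-zero), so v1, v2 span R^2

b = np.array([3.0, 7.0])         # an arbitrary target vector
c = np.linalg.solve(M, b)        # coefficients with c[0]*v1 + c[1]*v2 = b
print(np.allclose(M @ c, b))     # True
```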

Important Note: Let V be a vector space over the field F . Let S be an infinite subset of
V . Each linear combination of every finite set of vectors of S is an element of the span of
S, which is defined as the set of all (finite) linear combinations of S:

span (S) = { ∑_{i=1}^{m} αi vi : vi ∈ S, αi ∈ F, m is a non-negative integer } .

That is, a linear combination of vectors of a (finite or infinite) set S means a linear
combination of a finite set of vectors of S. We never take an infinite linear combination of
vectors like ∑_{i=1}^{∞} αi vi in a Linear Algebra course.

Convention: The span of the empty set is the zero subspace {0}.

Theorem 2.5. Let V be a vector space over the field F . Let S be a non-empty subset of
V . Then, the span of S is a subspace of V .
Proof. Let u and w be any two vectors in the span of S.
Since u ∈ span (S), there exist vectors u1 , . . ., um in S and the scalars α1 , . . ., αm such
that
u = α1 u1 + α2 u2 + · · · + αm um .
Since w ∈ span (S), there exist vectors w1 , . . ., wn in S and the scalars β1 , . . ., βn such
that
w = β1 w1 + β2 w2 + · · · + βn wn .


Now,
u + w = α1 u1 + α2 u2 + · · · + αm um + β1 w1 + β2 w2 + · · · + βn wn
is a linear combination of a finite set of vectors in S and hence u + w ∈ span (S).
Let c be any scalar in F .
c u = cα1 u1 + cα2 u2 + · · · + cαm um = γ1 u1 + γ2 u2 + · · · + γm um
where γk = cαk , 1 ≤ k ≤ m. Since c u is a linear combination of a finite set of vectors in S,
it follows that cu ∈ span (S).
Therefore, the span of S is a subspace of V .
Theorem 2.6. Let V be a vector space over the field F . Let S be a non-empty subset of
V . Then the subspace spanned by S is the intersection of all subspaces of V which contain
S.
Proof. Let
W = ∩α {Uα : Uα is a subspace of V that contains S} .
To show: span (S) = W .
First we observe that span (S) is a subspace that contains S and hence it is one of the
Uα 's. Therefore, W ⊆ span (S).
Secondly, note that span (S) consists only of linear combinations of vectors in S, so every
vector in span (S) has to be in every subspace Uα that contains all of S. That is,
span (S) ⊆ Uα for each α. Since W = ∩α Uα , it follows that span (S) ⊆ W .
Therefore span (S) = W .
Corollary 2.1. Let V be a vector space over the field F . Let S be a non-empty subset of
V . Then the subspace spanned by S is the smallest subspace of V which contains S.
Theorem 2.7. Let S and T be two subsets of a vector space V . Then
1. If S ⊂ T then span (S) ⊂ span (T ).
2. span (S ∪ T ) = span (S) + span (T ).
3. span (span (S)) = span (S).

2.2 Sum of Subspaces


Definition 2.3. Let V be a vector space over the field F . Let W1 , W2 , . . ., Wk be subsets
of V . Then, the sum of the subsets W1 , W2 , . . ., Wk is defined by
{w1 + w2 + · · · + wk : wi ∈ Wi , 1 ≤ i ≤ k}
and is denoted by W1 + W2 + · · · + Wk or ∑_{i=1}^{k} Wi .


Theorem 2.8. Let V be a vector space over the field F . Let W1 , W2 , . . ., Wk be subspaces
of V . Then, the sum

W1 + W2 + · · · + Wk := {w1 + w2 + · · · + wk : wi ∈ Wi , 1 ≤ i ≤ k}

is a subspace of the vector space V which contains each of the subspaces Wi . Further, it is
the subspace spanned by W1 ∪ · · · ∪ Wk .
Proof.
Let W = W1 + W2 + · · · + Wk .
Claim 1: W is a subspace of V
The zero vector 0 belongs to each Wi and hence 0 = 0 + · · · + 0 is in W . So, W is non-empty.
If u ∈ W , w ∈ W and c ∈ F , then we have to show that cu + w ∈ W .
Since u ∈ W , u = u1 + u2 + · · · + uk where ui ∈ Wi for 1 ≤ i ≤ k.
Since w ∈ W , w = w1 + w2 + · · · + wk where wi ∈ Wi for 1 ≤ i ≤ k.
Then, we have

cu + w = (cu1 + w1 ) + (cu2 + w2 ) + · · · + (cuk + wk ) .

Since Wi is a subspace of V for each i, the element (cui + wi ) ∈ Wi for each i. Therefore,
it follows that (cu + w) ∈ W .

Claim 2: (W1 ∪ · · · ∪ Wk ) ⊆ W
We show that Wi ⊆ W for each i.
For any vector u ∈ Wi , u can be written as the sum of the zero vectors of Wj , j ≠ i, and
the vector u itself: u = 0 + · · · + 0 + u + 0 + · · · + 0. Therefore u ∈ W and hence Wi ⊆ W
for each i. Consequently, it follows that W1 ∪ · · · ∪ Wk ⊆ W .

Claim 3: span (W1 ∪ · · · ∪ Wk ) = W


If U is any subspace of V such that W1 ∪ · · · ∪ Wk ⊆ U , then we will show that W ⊆ U .
Let w = w1 + · · · + wk be an arbitrary element of W . We will show that w ∈ U .
For each i, we have Wi ⊆ U and hence it follows that wi ∈ U . Since U is a vector space,
w1 + w2 + · · · + wk ∈ U . Therefore w ∈ U .
Thus, W is the smallest subspace of V containing W1 ∪ · · · ∪ Wk . We already know that the
smallest subspace of V containing a set S is the span (S). Therefore, span (W1 ∪· · ·∪Wk ) =
W.
This completes the proof.
Definition 2.4. Let V be a vector space over the field F . Let W1 , W2 , . . ., Wk be subspaces
of V . Then, the sum W1 + W2 + · · · + Wk of the subspaces W1 , W2 , . . ., Wk is said to be
direct sum (or internal direct sum) if
Wi ∩ ( ∑_{j≠i} Wj ) = {0} for each i = 1, . . . , k .


OR
Wi ∩ (W1 + W2 + · · · + Wi−1 ) = {0} for each i = 2, . . . , k .
If the sum is direct, then we write it as W1 ⊕ W2 ⊕ · · · ⊕ Wk .

Theorem 2.9. Let V be a vector space over the field F . Let W1 , W2 , . . ., Wk be subspaces
of V . Let W = W1 + · · · + Wk . Then, the following statements are equivalent.

1. W = W1 ⊕ · · · ⊕ Wk .

2. For each vector w ∈ W , there exist vectors w1 ∈ W1 , . . ., wk ∈ Wk such that

w = w1 + · · · + wk (in a unique way) .

3. If w1 ∈ W1 , . . ., wk ∈ Wk are such that w1 + · · · + wk = 0 then wi = 0 for each i,


1 ≤ i ≤ k. In this case, we say that W1 , . . ., Wk are independent.

Example 1: Let V = R2 . Let W1 = {(a, 0) ∈ R2 : a ∈ R}, W2 = {(0, a) ∈ R2 : a ∈ R}


and W3 = {(a, a) ∈ R2 : a ∈ R}. Then R2 = W1 + W2 + W3 . Observe that this sum is
NOT direct, even though W1 ∩ W2 = W2 ∩ W3 = W3 ∩ W1 = {(0, 0)}.

Example 2: Let P3 denote the space of all real polynomials of degree at most 3. Let
W1 = {P ∈ P3 : P (x) = P (−x) for all x ∈ R} and W2 = {P ∈ P3 : P (x) =
−P (−x) for all x ∈ R}. Then P3 = W1 ⊕ W2 because P3 = W1 + W2 and W1 ∩ W2 = {0}.

Example 3: Let Mn,n (R) denote the space of all n × n real matrices. Let W1 = {A ∈
Mn,n (R) : A = AT } and W2 = {A ∈ Mn,n (R) : A = −AT }. Then, Mn,n (R) = W1 ⊕ W2
because Mn,n (R) = W1 + W2 and W1 ∩ W2 = {0}.
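Example 3 comes with an explicit formula: every A splits as A = (A + AT )/2 + (A − AT )/2, with the first summand in W1 and the second in W2 . A numpy sketch (the 4 × 4 random matrix is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4))

S = (A + A.T) / 2          # symmetric part, lies in W1
K = (A - A.T) / 2          # skew-symmetric part, lies in W2

print(np.allclose(S, S.T))       # True
print(np.allclose(K, -K.T))      # True
print(np.allclose(S + K, A))     # True: A = S + K, and the split is unique
```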

Example 4: Let M2,2 (R) denote the space of all 2 × 2 real matrices. Let

W1 = { [a b; −b a] ∈ M2,2 (R) : a ∈ R, b ∈ R }
W2 = { [c d; d −c] ∈ M2,2 (R) : c ∈ R, d ∈ R } .

Then, M2,2 (R) = W1 ⊕ W2 because M2,2 (R) = W1 + W2 and W1 ∩ W2 = {0}.


Definition 2.5. Let U and V be two vector spaces over a field F . On the cartesian product

U × V = {(u, v) : u ∈ U, v ∈ V }

of U and V , we define the vector addition and scalar multiplication as follows:

(u1 , v1 ) + (u2 , v2 ) = (u1 + u2 , v1 + v2 ) for all (u1 , v1 ), (u2 , v2 ) ∈ U × V ,

α (u, v) = (α u, α v) for all (u, v) ∈ U × V and for all α ∈ F .


Then, U × V with above defined vector addition and scalar multiplication forms a vector
space over the field F and it is denoted by U × V , and it is called the external direct sum
of U and V .

3 Linearly Dependent and Linearly Independent Sets


Definition 3.1. A (finite) set of vectors {v1 , v2 , . . . , vk } in a vector space V is linearly
dependent if there are scalars c1 , c2 , . . ., ck , at least one of which is not zero such that

c1 v1 + c2 v2 + · · · + ck vk = 0 .

A (finite) set of vectors {v1 , v2 , . . . , vk } that is NOT linearly dependent is said to be
linearly independent.
Definition 3.2. A set S of vectors in a vector space V is linearly dependent if it contains
a finite subset of vectors that is linearly dependent. A set S of vectors that is NOT linearly
dependent is said to be linearly independent.

Alternative Way of Writing the Definition for Linearly Dependence and Linearly Indepen-
dence:
Definition 3.3. Let V be a vector space over a field F . A subset S of V is said to be
linearly dependent if there exist distinct vectors v1 , v2 , . . ., vn in S and scalars c1 , c2 , . . .,
cn in F with at least one ci ≠ 0 such that

c1 v1 + c2 v2 + · · · + cn vn = 0 .

A set S which is NOT linearly dependent is called linearly independent.


If the set S contains only finitely many vectors v1 , v2 , . . ., vk and if S is linearly dependent
(or independent) then we may say that the vectors v1 , v2 , . . ., vk are linearly dependent
(or independent) instead of saying S is linearly dependent (or independent).

The following are easy consequences of the above definition of linearly dependent (or in-
dependent).


1. Any set which contains the zero vector 0 is linearly dependent.


2. Any set which contains a linearly dependent set is linearly dependent.
3. Any subset of a linearly independent set is linearly independent
4. A set S of vectors is linearly independent if and only if each finite subset of S is linearly
independent, i.e., if and only if for any distinct vectors v1 , v2 , . . ., vk of S, we have
c1 v1 + c2 v2 + · · · + ck vk = 0 =⇒ c1 = c2 = · · · = ck = 0 .

Example of linearly dependent set: Let u = (1, −1, 0), v = (1, 3, −1) and w = (5, 3, −2).
Let S = {u, v, w}.

Since 3u + 2v − w = 0, the set S is linearly dependent.

Example of linearly independent set: Let u = (6, 2, 3, 4), v = (0, 5, −3, 1) and
w = (0, 0, 7, −2). Let S = {u, v, w}.

Since αu + βv + γw = 0 forces α = β = γ = 0 (comparing the first, second, and third
components in turn), the set S is linearly independent.
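For vectors in Rn , both examples can be settled by a rank computation: stack the vectors as rows and compare the rank with the number of vectors. A numpy sketch:

```python
import numpy as np

dep = np.array([[1, -1,  0],          # u
                [1,  3, -1],          # v
                [5,  3, -2]])         # w = 3u + 2v
print(np.linalg.matrix_rank(dep))     # 2 < 3  => linearly dependent

ind = np.array([[6, 2,  3,  4],
                [0, 5, -3,  1],
                [0, 0,  7, -2]])
print(np.linalg.matrix_rank(ind))     # 3 = number of vectors => independent
```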

Theorem 3.1. The non-zero vectors v1 , v2 , · · · , vm are linearly dependent if and only if
one of them, say, vi is a linear combination of the preceding vectors:
vi = β1 v1 + · · · + βi−1 vi−1 .
Proof.
Proof of =⇒:
Given that the non-zero vectors v1 , v2 , · · · , vm are linearly dependent. Then, there exist
scalars α1 , α2 , . . ., αm , not all of them zero, such that

α1 v1 + α2 v2 + · · · + αm vm = 0 .

Let i be the largest integer for which αi ≠ 0. That is, αi+1 = · · · = αm = 0. Therefore,

α1 v1 + α2 v2 + · · · + αi vi = 0 with αi ≠ 0 .

(Note that i ≥ 2: if i = 1, then α1 v1 = 0 with α1 ≠ 0 would give v1 = 0, a contradiction.)
Then,

vi = (−α1 /αi ) v1 + (−α2 /αi ) v2 + · · · + (−αi−1 /αi ) vi−1 .

Proof of ⇐=:
Given that vi = β1 v1 + · · · + βi−1 vi−1 . Then β1 v1 + · · · + βi−1 vi−1 + (−1) vi = 0, and the
coefficient of vi is non-zero.
This gives that v1 , . . ., vi are linearly dependent and hence v1 , . . ., vm are linearly
dependent.


Note: The above theorem does NOT say that every vector of a linearly dependent set can
be written as a linear combination of the other vectors. It says that at least one vector of a
linearly dependent set can be written as a linear combination of the other vectors. For
example, S = {(1, 0), (0, 1), (2, 0)} is a linearly dependent set in R2 . The vector (0, 1)
cannot be written as a linear combination of the other two vectors, whereas
(2, 0) = 2 (1, 0) + 0 (0, 1).

4 Basis
Definition 4.1. Let V be a vector space. Let B be a subset of V . The set B is said to
be a basis for the vector space V if (i) B is linearly independent and (ii) span (B) = V .
Definition 4.2. A vector space V is said to be finite dimensional if V has a basis B which
is a finite set. That is, it has a finite basis.
Example 1: In the vector space Rn (or F n where F is a field), consider the set B =
{e1 , e2 , . . . , en } where e1 = (1, 0, 0, . . . , 0), e2 = (0, 1, 0, . . . , 0), . . ., en = (0, 0, 0, . . . , 1).
Then this set B is a basis for Rn (or F n ). This particular basis is called the standard basis
for Rn (or F n ).

Example 2: In the vector space P(R) of all real polynomials over the real field, consider
the set B = {1, x, x2 , . . .}. Then the set B is a basis for P(R).

Example 3: In the vector space C over the real field R, the set B = {1, i} is a basis for C
over R.

Example 4: In the vector space C over the complex field C, the set B = {1} is a basis for
C over C.

In the vector space M2,2 (R) of all 2 × 2 real matrices over the real field, the set

B = { E11 = [1 0; 0 0], E12 = [0 1; 0 0], E21 = [0 0; 1 0], E22 = [0 0; 0 1] }

is a basis for M2,2 (R).

Theorem 4.1. Let V be a vector space which is spanned by a finite set of vectors v1 , v2 ,
. . ., vm . Then any linearly independent set of vectors in V is a finite set and contains no
more than m vectors. That is, every subset S of V which contains more than m vectors is
linearly dependent.


Proof. To show: If S = {u1 , u2 , · · · , un } is any set of n distinct vectors in V , where n > m,
then S is linearly dependent.
To show: There exist scalars α1 , · · · , αn (not all of them 0) such that ∑_{j=1}^{n} αj uj = 0.
Since v1 , · · · , vm span V , there exist scalars aij such that

uj = ∑_{i=1}^{m} aij vi for j = 1, 2, . . . , n .

For any scalars α1 , · · · , αn , consider the equation ∑_{j=1}^{n} αj uj = 0. Now,

∑_{j=1}^{n} αj uj = ∑_{j=1}^{n} αj ( ∑_{i=1}^{m} aij vi ) = ∑_{i=1}^{m} ( ∑_{j=1}^{n} aij αj ) vi .

Thus ∑_{j=1}^{n} αj uj = 0 holds provided ∑_{j=1}^{n} aij αj = 0 for each i, 1 ≤ i ≤ m.
This is a system of m homogeneous linear equations in n unknowns (the αj 's), where n > m.


Recall: If A is an m × n matrix and m < n, then the homogeneous system of linear equations
Ax = 0 has a non-trivial solution.
Therefore, there exist scalars α1 , · · · , αn , not all 0, such that

∑_{j=1}^{n} aij αj = 0 for 1 ≤ i ≤ m .

This gives that ∑_{j=1}^{n} αj uj = ∑_{i=1}^{m} ( ∑_{j=1}^{n} aij αj ) vi = 0 with at least one αj ≠ 0.

Therefore, the vectors u1 , u2 , · · · , un with n > m are linearly dependent. This completes
the proof.

Theorem 4.2. Let S be a linearly independent subset of a vector space V . Suppose u is a


vector in V which is not in span (S). Then the set S ∗ = S ∪ {u} is linearly independent.


Proof.
To show: S ∗ = S ∪ {u} is linearly independent
Suppose v1 , v2 , . . ., vm are distinct vectors in S. Consider the linear combination

c1 v1 + c2 v2 + · · · + cm vm + cm+1 u = 0 .

Suppose, if possible, that cm+1 ≠ 0. Then

u = (−c1 /cm+1 ) v1 + (−c2 /cm+1 ) v2 + · · · + (−cm /cm+1 ) vm .

This shows that u ∈ span (S), which is a contradiction. Therefore cm+1 = 0.
Since S is linearly independent subset of V , we have c1 = · · · = cm = 0.
Therefore, S ∗ = S ∪ {u} is linearly independent.
This completes the proof.

Theorem 4.3. Let V be a vector space spanned by a finite set of vectors S = {v1 , v2 , . . . , vm }.
Then, S contains a set B which is a basis for V .

Proof. Let us take B = S = {v1 , v2 , . . . , vm }.


If v1 = 0 then replace B by B \ {0}.
Otherwise, for 1 ≤ k ≤ m, check for vk ∈ span (v1 , v2 , . . . , vk−1 ). Whenever the answer is
yes, replace B by B\{vk } and repeat the process. The process must end in at most m steps.
Then the set B thus obtained from the set S is linearly independent and span (B) = V .
Therefore, S contains a set B which is a basis for V .
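The pruning procedure in this proof is directly implementable with rank computations. A sketch (the helper extract_basis and the sample spanning set are illustrative):

```python
import numpy as np

def extract_basis(vectors):
    """Keep v_k only if it is not in the span of the vectors kept so far."""
    basis = []
    for v in vectors:
        candidate = basis + [v]
        if np.linalg.matrix_rank(np.array(candidate)) == len(candidate):
            basis.append(v)
    return basis

S = [np.array([1.0, 0.0, 1.0]),
     np.array([2.0, 0.0, 2.0]),      # dependent on the first vector
     np.array([0.0, 1.0, 0.0]),
     np.array([1.0, 1.0, 1.0])]      # dependent on vectors 1 and 3
print(extract_basis(S))              # keeps vectors 1 and 3 only
```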

Theorem 4.4. Let V be a finite dimensional vector space. Then, any two bases of V have
the same (finite) number of vectors.

Proof.
Since V is a finite dimensional vector space, it has a (finite) basis B = {v1 , v2 , . . . , vm }.
Recall: Let V be a vector space which is spanned by a finite set of vectors v1 , v2 , . . ., vm .
Then any linearly independent set of vectors in V is a finite set and contains no more than
m vectors.
By the above result, every basis of V is finite and contains no more than m vectors. Thus
if C = {u1 , u2 , . . . , un } is a basis for V then n ≤ m.
By the same argument, it follows that m ≤ n. Therefore m = n.
This completes the proof.


Definition 4.3. Let V be a finite dimensional vector space. Then the dimension of V is
defined to be the number of vectors in a basis for V and it is denoted by dim V .

Examples:

• The dimension of Rn over the real field is n.

• The dimension of C over the real field is 2.

• The dimension of C over the complex field is 1.

• The dimension of Mm,n (R) over the real field is mn.

• The dimension of the space Pn (R) of all real polynomials of degree at most n over
the real field is n + 1.

• The empty set spans the zero vector space. Therefore, the dimension of {0} is zero.

Definition 4.4. A vector space V is called infinite dimensional if it is NOT finite dimen-
sional. That is, it does not have a finite set as a basis.

Example: The space P(R) of all real polynomials over the real field has a basis as
B = {1, x, x2 , x3 , . . .} which is an infinite set. Observe that no finite set is a basis for
P(R). Therefore, P(R) is an infinite dimensional vector space.

Theorem 4.5. Let V be a finite dimensional vector space and let dim V = n. Then

1. Any subset of V which contains more than n vectors is linearly dependent.

2. No subset of V which contains less than n vectors can span V .

Proof.
Proof of (1):
Let S = {u1 , u2 , . . . , um } be a subset of V such that m > n.
To show: S is linearly dependent
Recall: Let V be a vector space which is spanned by a finite set of vectors v1 , v2 , . . ., vn .
Then any linearly independent set of vectors in V is a finite set and contains no more than
n vectors. That is, every subset S of V which contains more than n vectors is linearly
dependent.
Using the above mentioned result, it follows that S is linearly dependent.

Proof of (2):


Since dim V = n, V has a basis B = {v1 , v2 , . . . , vn }. We know that B is linearly
independent and span (B) = V .
Suppose a subset S = {u1 , u2 , . . . , um } of V with m < n spans V . Then, there exists a
subset C of S such that C is a basis for V , so the dimension of V is at most m, which is
less than n. This contradicts the fact that any two bases of a finite dimensional vector
space V have the same number of vectors. Therefore, no subset of V which contains
less than n vectors can span V .
This completes the proof.

Theorem 4.6. In a finite dimensional vector space V , every non-empty linearly indepen-
dent set of vectors is part of a basis.
Proof. Let the dimension of the vector space V be n.
Let S0 be a non-empty linearly independent set of vectors of V .
If S0 spans V then S0 is a basis for V and we are done.
If S0 does not span V , then we find a vector v1 ∈ V but not in the span of S0 and define
a set S1 = S0 ∪ {v1 }. If S1 spans V then S1 is a basis for V and we are done.
If S1 does not span V , then we find a vector v2 ∈ V but not in the span of S1 and define
a set S2 = S1 ∪ {v2 }.
If we continue in this way, then (in not more than n steps) we reach a set
Sm = Sm−1 ∪ {vm } = S0 ∪ {v1 , v2 , . . . , vm }
which is a basis for V .

Example: Let V = R3 and S = {u1 = (0, 1, −1), u2 = (1, 0, −1)}. Note that S is a
linearly independent set in V = R3 . It can be extended as B = S ∪ {u3 = (0, 0, 1)} so
that B is a basis for V .
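The extension procedure of Theorem 4.6 can be automated by trying the standard basis vectors one at a time (a numpy sketch; it may pick a different extending vector than the u3 chosen above):

```python
import numpy as np

def extend_to_basis(S, n):
    """Extend a linearly independent list S in R^n to a basis of R^n."""
    basis = list(S)
    for e in np.eye(n):                      # try e1, e2, ..., en in turn
        candidate = basis + [e]
        if np.linalg.matrix_rank(np.array(candidate)) == len(candidate):
            basis.append(e)
        if len(basis) == n:
            break
    return basis

S = [np.array([0.0, 1.0, -1.0]), np.array([1.0, 0.0, -1.0])]
for v in extend_to_basis(S, 3):
    print(v)    # the two vectors of S plus one standard basis vector
```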

Theorem 4.7. If W is a proper subspace of a finite dimensional vector space V , then W is
finite dimensional and dim W < dim V .
Proof.
We may suppose that W contains a vector w ≠ 0. (Reason: If W contains only 0, then
dim W = 0. Since W ⊂ V is a proper subset, V contains a non-zero vector and hence
dim V > 0 = dim W .)
By the previous theorem, the linearly independent set {w} is part of a basis SW for W ,
and SW , being a linearly independent subset of V , contains no more than dim V elements.
Therefore, W is finite dimensional and dim W ≤ dim V .
Since W is a proper subspace of V , there is a vector v in V which is not in W = span (SW ).
Then, S = SW ∪ {v} is a linearly independent subset of V and will be part of a basis SV
for V . So, the basis SW of W becomes a proper subset of the basis SV of V .
It gives that dim W < dim V .


Theorem 4.8. Let V be a finite dimensional vector space. Let B = {v1 , v2 , · · · , vn } be a
basis for V . Then, for each vector v ∈ V , there exists a unique set of scalars α1 , α2 , · · · ,
αn such that

v = ∑_{j=1}^{n} αj vj .

Proof.
Since v ∈ V and span (B) = V , there exist scalars α1 , α2 , · · · , αn such that

v = ∑_{j=1}^{n} αj vj .

Let β1 , β2 , · · · , βn be another set of scalars such that

v = ∑_{j=1}^{n} βj vj .

Then,

0 = v − v = ∑_{j=1}^{n} αj vj − ∑_{j=1}^{n} βj vj = ∑_{j=1}^{n} (αj − βj ) vj .

Since v1 , · · · , vn are linearly independent

αj − βj = 0 for each j = 1, · · · , n .

Therefore,
α j = βj for each j = 1, · · · , n .

Theorem 4.9. Let A be an n × n square matrix (whose entries are real numbers). If the
row vectors A1 , · · · , An of A form a linearly independent set in Rn then A is invertible.

Proof. Let W denote the subspace spanned by the row vectors A1 , · · · , An . Since A1 , · · · ,
An are linearly independent, the dimension of W is n, so W = Rn . Let
ei = (0, · · · , 0, 1, 0, · · · , 0), with 1 in the i-th place, for 1 ≤ i ≤ n. Since each ei ∈ W ,
there exist scalars bij such that

ei = ∑_{j=1}^{n} bij Aj , 1 ≤ i ≤ n .


Let B denote the matrix B = (bij )n×n . Then, the i-th row of BA is ∑_{j=1}^{n} bij Aj = ei
for each i, so we have
BA = I .
This completes the proof.

Various results proved earlier can be summarized as follows:

Theorem 4.10. Let V be a vector space with dim V = n. Then

1. Any linearly independent set in V contains at most n vectors.

2. Any spanning set for V contains at least n vectors.

3. Any linearly independent set of exactly n vectors of V is a basis for V .

4. Any spanning set for V consisting of exactly n vectors is a basis for V .

5. Any linearly independent set in V can be extended to a basis for V .

6. Any spanning set for V can be reduced to a basis for V .

Important Theorem:

Theorem 4.11. If W1 and W2 are finite dimensional subspaces of a vector space V then
the subspace W1 + W2 is finite dimensional and

dim (W1 ) + dim (W2 ) = dim (W1 ∩ W2 ) + dim (W1 + W2 ) .

Example: Let V be a finite dimensional vector space with dimension 5 and let W1 and W2
be distinct (in the sense that each one is not the subset of another one) four dimensional
subspaces of V . Find the dimension of W1 ∩ W2 .
Since W1 and W2 are distinct, the subspace W1 + W2 , which contains the set W1 ∪ W2 , has
dimension > 4. Since W1 + W2 ⊆ V , the dimension of W1 + W2 is 5 = dim V .
By the above result, it follows that

dim (W1 ∩ W2 ) = dim (W1 ) + dim (W2 ) − dim (W1 + W2 ) = 4 + 4 − 5 = 3 .
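The formula of Theorem 4.11 can also be tested numerically, assuming SciPy is available: represent each subspace as the column space of a random matrix and use ranks as dimensions (the sizes below are illustrative).

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(2)
U1 = rng.normal(size=(6, 3))     # W1 = column space of U1 (dim 3, generically)
U2 = rng.normal(size=(6, 4))     # W2 = column space of U2 (dim 4, generically)

dim_W1  = np.linalg.matrix_rank(U1)
dim_W2  = np.linalg.matrix_rank(U2)
dim_sum = np.linalg.matrix_rank(np.hstack([U1, U2]))     # dim(W1 + W2)

# x lies in W1 ∩ W2 iff x = U1 a = U2 b, i.e. [U1 | -U2](a; b) = 0.
N = null_space(np.hstack([U1, -U2]))
dim_cap = np.linalg.matrix_rank(U1 @ N[:U1.shape[1], :]) if N.size else 0

print(dim_W1 + dim_W2 == dim_cap + dim_sum)              # True
```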


5 Ordered Basis and Change of Coordinates


Definition 5.1. If V is a finite-dimensional vector space, an ordered basis for V is a finite
sequence of vectors which is linearly independent and spans V .

Note: “Finite sequence” means a finite set whose elements are numbered: 1st element,
2nd element, · · · , n-th element. That is, the elements are arranged in a particular order.

Examples:
Let V = R2 and F = R.
The set {(1, 0), (0, 1)} forms a basis. The same set, arranged as a finite sequence with its
elements numbered, B1 = {e1 = (1, 0), e2 = (0, 1)}, forms an ordered basis of R2 .
The set B2 = {v1 = (1, 1), v2 = (1, −1)} is also an ordered basis for R2 .
The set B3 = {v1 = (1, 2), v2 = (4, 3)} is also an ordered basis for R2 .

Definition 5.2. Let V be a finite dimensional vector space and let dim (V ) = n.
Let B = {u1 , u2 , · · · , un } be an ordered basis for V .
For each vector v ∈ V , there exist unique scalars α1 , α2 , · · · , αn such that

v = ∑_{i=1}^{n} αi ui .

Then, the matrix
(α1 , α2 , · · · , αn )T
is called the coordinate matrix of the vector v relative to the ordered basis B and we
denote it by
[v]B .
It is also called the coordinates of the vector v relative to B.

Let B∗ = {u∗1 , u∗2 , · · · , u∗n } be another ordered basis for V .
For each vector v ∈ V , there exist unique scalars β1 , β2 , · · · , βn such that

v = ∑_{i=1}^{n} βi u∗i .

Then, the matrix
[v]B∗ = (β1 , β2 , · · · , βn )T
is called the coordinate matrix of the vector v relative to the ordered basis B∗ .


Question: How do the coordinates change when we pass from one ordered basis to another?
For a given vector v ∈ V , how do we get the new coordinate matrix while changing from one
ordered basis to another ordered basis for V ? Is there any simple mechanism to do it?

Let V be a vector space. Let B = {u1 , u2 , · · · , un } and B∗ = {u∗1 , u∗2 , · · · , u∗n } be
two ordered bases for V . Express each u∗i in terms of the basis B = {u1 , u2 , · · · , un }:
there exist unique scalars aij such that

u∗i = ∑_{j=1}^{n} aij uj , 1 ≤ i ≤ n .

Let v be a given vector in V .
Let α1 , · · · , αn be the coordinates of v relative to the ordered basis B and let β1 , · · · , βn
be the coordinates of v relative to the ordered basis B∗ . Then,

v = β1 u∗1 + · · · + βn u∗n = ∑_{i=1}^{n} βi u∗i = ∑_{i=1}^{n} βi ( ∑_{j=1}^{n} aij uj ) = ∑_{j=1}^{n} ( ∑_{i=1}^{n} aij βi ) uj .

It gives that

αj = ∑_{i=1}^{n} aij βi , 1 ≤ j ≤ n .

Let A be the n × n matrix whose (i, j) entry is aji , so that the j-th column of A is [u∗j ]B .
Let α = (α1 , · · · , αn )T and β = (β1 , · · · , βn )T be the coordinate matrices of the vector v
in the ordered bases B and B∗ . Then the relation above reads
α = Aβ .
Since B and B∗ are bases, α = 0 if and only if v = 0 if and only if β = 0; thus Aβ = 0
forces β = 0, which shows that A is invertible. Therefore,
β = A−1 α .
The matrix A is called the transition matrix (for the coordinates) from B∗ to B.
The above discussion is formulated as follows:


Theorem 5.1. Let V be an n-dimensional vector space and let B = {u1 , u2 , · · · , un }


and B ∗ = {u∗1 , u∗2 , · · · , u∗n } be two ordered bases for V . Then there is a unique, invertible
n × n square matrix A such that

[v]B = A [v]B∗
[v]B∗ = A−1 [v]B

for every vector v ∈ V .


The columns of this matrix A are given by

aj = [u∗j ]B for j = 1, · · · , n .

Thus
A = [ [u∗1 ]B [u∗2 ]B · · · [u∗n ]B ] .

Example:
Let V = R2 . Let B = {u1 = (1, 1), u2 = (1, −1)} and B ∗ = {u∗1 = (1, 2), u∗2 = (4, 3)}
be two ordered bases for R2 .
Find the coordinate matrix of the vector v = (4, 2) relative to each of the above bases.
Solving v = α1 u1 + α2 u2 and v = β1 u∗1 + β2 u∗2 gives

[v]B = (3, 1)T and [v]B∗ = (−4/5, 6/5)T .

Find the transition matrix from B∗ to B.

[u∗1 ]B = (3/2, −1/2)T and [u∗2 ]B = (7/2, 1/2)T .

The transition matrix from B∗ to B is given by

A = [ 3/2 7/2 ; −1/2 1/2 ] .

This gives that

[v]B = (3, 1)T = A [v]B∗ = [ 3/2 7/2 ; −1/2 1/2 ] (−4/5, 6/5)T .
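The worked example can be reproduced mechanically: if the columns of P are the vectors of B and the columns of Q are the vectors of B∗ (written in R2 ), then the coordinates satisfy P α = Q β, so the transition matrix is A = P −1 Q. A numpy sketch:

```python
import numpy as np

P = np.array([[1,  1],
              [1, -1]], dtype=float)   # columns: u1, u2 (basis B)
Q = np.array([[1,  4],
              [2,  3]], dtype=float)   # columns: u1*, u2* (basis B*)
v = np.array([4.0, 2.0])

alpha = np.linalg.solve(P, v)          # [v]_B   -> [ 3. ,  1. ]
beta  = np.linalg.solve(Q, v)          # [v]_B*  -> [-0.8,  1.2]
A     = np.linalg.solve(P, Q)          # transition matrix from B* to B
print(A)                               # [[ 1.5,  3.5], [-0.5,  0.5]]
print(np.allclose(alpha, A @ beta))    # True: [v]_B = A [v]_B*
```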


Theorem 5.2. Let V be an n-dimensional vector space and let B = {u1 , u2 , · · · , un } be


an ordered basis for V . Suppose A is an n × n invertible matrix. Then there is a unique
ordered basis B ∗ = {u∗1 , u∗2 , · · · , u∗n } of V such that

[v]B = A [v]B∗
[v]B∗ = A−1 [v]B

for every vector v ∈ V .

