PAPER-I: MT-221
LINEAR ALGEBRA
Panel of Authors
Dr. K. C. Takale (Convenor)
RNC Arts, JDB Commerce and
NSC Science College, Nashik-Road.
B. B. Divate
H.P.T. Arts and R.Y.K. Science
College, Nashik.
K. S. Borase
RNC Arts, JDB Commerce and
NSC Science College, Nashik-Road.
Editors
Dr. P. M. Avhad Dr. S. A. Katre
-Authors
Acknowledgment
We sincerely thank the following University authorities (Savitribai Phule Pune Uni-
versity, Pune) for their constant motivation and valuable guidance in the preparation
of this book.
• Mr. Dattatraya Kute, Senate Member, Savitribai Phule Pune University; Man-
ager, Savitribai Phule Pune University, Pune Press.
Text book: Prepared by the BOS Mathematics, Savitribai Phule Pune University,
Pune.
Recommended Book: Matrix and Linear Algebra aided with MATLAB, Kanti Bhushan Datta, PHI Learning Pvt. Ltd., New Delhi (2009).
Sections: 5.1, 5.2, 5.3, 5.4, 5.5, 5.7, 6.1, 6.2, 6.3, 6.4
Reference Books:
1. Howard Anton, Chris Rorres, Elementary Linear Algebra, John Wiley and Sons, Inc.
2. K. Hoffmann and R. Kunze, Linear Algebra, Second Ed., Prentice Hall of India, New Delhi (1998).
5. G. Strang, Linear Algebra and its Applications. Third Ed. Harcourt Brace
Jovanovich, Orlando, (1988).
Contents
1 VECTOR SPACES 1
1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Definitions and Examples of a Vector Space . . . . . . . . . . . . . . 2
1.3 Subspace . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
1.4 Linear Dependence and Independence . . . . . . . . . . . . . . . . . . 26
1.5 Basis and Dimension . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
1.6 Vector Space as a Direct Sum of Subspaces . . . . . . . . . . . . . . . 48
1.7 Null Space and Range Space . . . . . . . . . . . . . . . . . . . . . . . 50
3 LINEAR TRANSFORMATION 86
3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
3.2 Definition and Examples of Linear Transformation . . . . . . . . . . . 86
3.3 Properties of Linear Transformation . . . . . . . . . . . . . . . . . . . 95
3.4 Equality of Two Linear Transformations . . . . . . . . . . . . . . . . 96
3.5 Kernel And Rank of Linear Transformation . . . . . . . . . . . . . . . 102
3.6 Composite Linear Transformation . . . . . . . . . . . . . . . . . . . . 112
3.7 Inverse of Linear Transformation . . . . . . . . . . . . . . . . . . . . 114
3.8 Matrix of Linear transformation . . . . . . . . . . . . . . . . . . . . . 117
Chapter 1
VECTOR SPACES
1.1 Introduction
Vectors were first used about 1636 in 2D and 3D to describe geometrical operations
by Rene Descartes and Pierre de Fermat. In 1857 the notation of vectors and
matrices was unified by Arthur Cayley. Giuseppe Peano was the first to give
the modern definition of vector space in 1888, and Henri Lebesgue (about 1900)
applied this theory to describe functional spaces as vector spaces.
Linear algebra is the study of a certain algebraic structure called a vector space. A
good argument could be made that linear algebra is the most useful subject in all of
mathematics and that it exceeds even courses like calculus in its significance. It is
used extensively in applied mathematics and engineering. It is also fundamental in
pure mathematics areas like number theory, functional analysis, geometric measure
theory, and differential geometry. Even calculus cannot be correctly understood
without it. For example, the derivative of a function of many variables is an exam-
ple of a linear transformation, and this is the way it must be understood as soon as
you consider functions of more than one variable. It is difficult to think of a mathematical tool with more applications than vector spaces. Thanks to the development of linear algebra, we may sum forces, control devices, model complex systems, denoise images, etc. Vector spaces underlie all these processes, and it is thanks to them that we can "nicely" operate with vectors. These are the mathematical structures that generalize many other useful structures.
Definition 1.1. Field: A nonempty set F with two binary operations, addition (+) and multiplication (·), is called a field if it satisfies the usual field axioms; in particular, for all a, b, c ∈ F:
1. a + b ∈ F (F is closed under +)
2. a + b = b + a ( + is commutative operation on F )
3. a + (b + c) = (a + b) + c ( + is associative operation on F )
Example 1.1. (i) The set of rational numbers Q, the set of real numbers R, and the set of complex numbers C are all fields with respect to usual addition and usual multiplication.
(ii) For a prime number p, (Zp , +p , ×p ) is a field with respect to the operations of addition modulo p and multiplication modulo p.
Definition 1.2. Vector Space: A nonempty set of vectors V with a vector addition (+) and a scalar multiplication operation (.) is called a vector space over a field F if it satisfies the following axioms for any u, v, w ∈ V and α, β ∈ F:
1. C1: u + v ∈ V (V is closed under +)
5. A3: Existence of zero vector: There exists a zero vector 0̄ ∈ V such that
u + 0̄ = u.
6. A4: Existence of negative vector: For given vector u ∈ V , there exists the vector
−u ∈ V such that u + (−u) = 0̄.
(This vector −u is called the negative vector of u in V .)
(iii) A vector (v1 , v2 , · · · , vn ) (in its most general form) is an element of a vector
space.
Example 1.2. Let V = R2 = {u = (u1 , u2 )/u1 , u2 ∈ R}.
For u = (u1 , u2 ) , v = (v1 , v2 ) ∈ V and α ∈ R,
u + v = (u1 + v1 , u2 + v2 ) and α.u = (αu1 , αu2 ).
Show that V is a real vector space with respect to defined operations.
Solution: Let u = (u1 , u2 ) , v = (v1 , v2 ) and w = (w1 , w2 ) ∈ V and α, β ∈ R, then
C1 : u + v = (u1 + v1 , u2 + v2 ) ∈ V (∵ ui + vi ∈ R, ∀ ui , vi ∈ R)
A1 : Commutativity:
u + v = (u1 + v1 , u2 + v2 ) (by definition of + on V )
= (v1 + u1 , v2 + u2 ) (+ is commutative operation on R)
=v+u (by definition of + on V ).
Thus, A1 holds.
A2 : Associativity:
(u + v) + w = (u1 + v1 , u2 + v2 ) + (w1 , w2 ) (by definition of + on V )
= ((u1 + v1 ) + w1 , (u2 + v2 ) + w2 ) (by definition of + on V )
= (u1 + (v1 + w1 ), u2 + (v2 + w2 )) (+ is associativity on R)
= (u1 , u2 ) + (v1 + w1 , v2 + w2 ) (by definition of + on V )
= u + (v + w) (by definition of + on V ).
Thus, A2 holds.
M1 :
α.(u + v) = α.(u1 + v1 , u2 + v2 )
= (α(u1 + v1 ), α(u2 + v2 )) (by definition of . on V )
= (αu1 + αv1 , αu2 + αv2 ) (by distributive law in F = R)
= (αu1 , αu2 ) + (αv1 , αv2 ) (by definition of + on V )
= α(u1 , u2 ) + α(v1 , v2 ) (by definition of . on V )
= α.u + α.v
Thus, M1 holds.
M2 :
(α + β).u = (α + β).(u1 , u2 )
= ((α + β)u1 , (α + β)u2 ) (by definition of . on V )
= (αu1 + βu1 , αu2 + βu2 ) (by distributive law in F = R)
= (αu1 , αu2 ) + (βu1 , βu2 ) (by definition of + on V )
= α.(u1 , u2 ) + β.(u1 , u2 ) (by definition of . on V )
= α.u + β.u.
Thus, M2 holds.
M3 :
(αβ).u = (αβ).(u1 , u2 )
= ((αβ)u1 , (αβ)u2 ) (by definition of . on V )
= (α(βu1 ), α(βu2 )) (by associative law in F = R)
= α(βu1 , βu2 ) (by definition of . on V )
= α.(β.u)
Thus, M3 holds.
M4 : For 1 ∈ R,
1.u = (1.u1 , 1.u2 ) = (u1 , u2 ) = u.
Therefore, M4 holds.
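The axiom-by-axiom verification above can be spot-checked numerically. The following Python sketch (an illustration at sample points, not a proof; the helper names `add` and `smul` are ours) tests A1, A2 and M1–M4 for the operations of Example 1.2:

```python
# Spot-check of the Example 1.2 axioms for V = R^2 with
# u + v = (u1+v1, u2+v2) and a.u = (a*u1, a*u2).
def add(u, v):
    return (u[0] + v[0], u[1] + v[1])

def smul(a, u):
    return (a * u[0], a * u[1])

# sample vectors and scalars chosen to be exactly representable floats
samples = [(1.0, 2.0), (-3.5, 0.25), (0.0, 7.0)]
scalars = [2.0, -1.5, 0.5]

for u in samples:
    for v in samples:
        assert add(u, v) == add(v, u)                       # A1: commutativity
        for w in samples:
            assert add(add(u, v), w) == add(u, add(v, w))   # A2: associativity
        for a in scalars:
            assert smul(a, add(u, v)) == add(smul(a, u), smul(a, v))  # M1
    for a in scalars:
        for b in scalars:
            assert smul(a + b, u) == add(smul(a, u), smul(b, u))      # M2
            assert smul(a * b, u) == smul(a, smul(b, u))              # M3
    assert smul(1.0, u) == u                                # M4
print("all sampled axioms hold")
```

A check like this cannot replace the algebraic proof, but it catches a mistyped operation immediately.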
u + v = (u1 + v1 , u2 + v2 , · · · , un + vn )
α.u = (αu1 , αu2 , · · · , αun )
then V is a real vector space. This vector space is called the Euclidean vector space.
u + v = v + u,    (u + v) + w = u + (v + w)
Example 1.7. Polynomials of degree ≤ n (Pn ):
Let Pn be the set of all polynomials of degree ≤ n, and let u(x) = u0 + u1 x + u2 x2 + ... + un xn .
Define the sum of two vectors and the multiplication by a scalar as
(u + v)(x) = u(x) + v(x)
= (u0 + v0 ) + (u1 + v1 )x + (u2 + v2 )x2 + ... + (un + vn )xn
and (αu)(x) = αu0 + αu1 x + αu2 x2 + ... + αun xn
Example 1.10. Let V be any of Q, R, or C; then V is a vector space over the field Q.
Example. Let V = R+ = {x ∈ R / x > 0}, with operations defined for x, y ∈ V and α ∈ R by
x + y = xy and α.x = x^α
C1 : x + y = xy ∈ R+
C2 : α.x = x^α ∈ R+
A1 :
x + y = xy
= yx
= y + x
A2 :
x + (y + z) = x(yz)
= (xy)z
= (x + y) + z
M2:
(α + β).x = x^(α+β)
= x^α x^β
= α.x + β.x
M3:
α.(β.x) = (x^β )^α
= x^(βα)
= x^(αβ)
= (αβ).x
M4:
1.x = x^1 = x
Thus V = R+ satisfies all axioms of a vector space over R with respect to the defined operations. Therefore, V = R+ is a real vector space.
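A numeric spot-check of this unusual example is instructive. In this structure the zero vector is 1 (since x + 1 = x·1 = x) and the negative of x is 1/x (since x·(1/x) = 1); the excerpt does not spell out A3 and A4, so those identifications are our reading of the standard example. A Python sketch (illustrative, not a proof):

```python
# Spot-check of the R+ example: "addition" is x + y := x*y and
# "scalar multiplication" is a.x := x**a.
import math

def vadd(x, y):
    return x * y

def smul(a, x):
    return x ** a

for x in (0.5, 2.0, 4.0):
    assert math.isclose(vadd(x, 1.0), x)            # A3: the zero vector is 1
    assert math.isclose(vadd(x, 1.0 / x), 1.0)      # A4: the negative of x is 1/x
    for y in (0.5, 3.0):
        for a in (2.0, -1.0, 0.5):
            # M1: a.(x + y) = a.x + a.y  becomes (xy)^a = x^a * y^a
            assert math.isclose(smul(a, vadd(x, y)), vadd(smul(a, x), smul(a, y)))
    for a in (0.5, 2.0):
        for b in (2.0, 3.0):
            # M2: (a+b).x = a.x + b.x  becomes x^(a+b) = x^a * x^b
            assert math.isclose(smul(a + b, x), vadd(smul(a, x), smul(b, x)))
            # M3: a.(b.x) = (ab).x  becomes (x^b)^a = x^(ab)
            assert math.isclose(smul(a, smul(b, x)), smul(a * b, x))
print("R+ axioms verified at sample points")
```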
Note:
(i) The set V = {x ∈ R / x ≥ 0} is not a vector space with respect to the above defined operations, because 0 has no negative element.
(ii) V = R is not a vector space over R with respect to the above defined operations, because C2 fails for x < 0, α = 1/2 (x^(1/2) ∉ R).
Example 1.13. Let V = Mm×n (R) = Set of all m × n matrices with real entries.
Then V is a real vector space with respect to usual addition of matrices and usual
scalar multiplication.
Theorem 1.1. Let V be a vector space over a field F . Then for any u ∈ V and α ∈ F
(ii) α.0̄ = 0̄
Proof. (i)
(ii)
If α ̸= 0 then
(1/α).(α.u) = (1/α).0̄
((1/α).α).u = 0̄
1.u = 0̄
∴ u = 0̄
Hence, the proof is completed.
Illustrations
Example 1.17. Let V = R3 , define operations + and . as
(x, y, z) + (x′ , y ′ , z ′ ) = (x + x′ , y + y ′ , z + z ′ )
α.(x, y, z) = (αx, y, z)
Is V a real vector space?
Solution: C1 , C2 , A1 , A2 , A3 , and A4 clearly hold.
M3: Let
α.(β.u) = α.(βx, y, z)
= (αβx, y, z)
= αβ.(x, y, z)
= (αβ).u
M4: Let
1.u = (1.x, y, z)
= (x, y, z)
=u
∴ 1.u = u
Thus V = R3 satisfies all axioms of a vector space over the field R except M2 with respect to the defined operations.
Therefore, V = R3 is not a real vector space.
u + v = (x + x′ , y + y ′ , z + z ′ )
α.u = (0, 0, 0)
Then 1.u ̸= u, for u ̸= 0̄
∴ M4 is not satisfied.
∴ V = R3 is not a vector space.
+ and . as follows
u + v = (x + x′ , y + y ′ , z + z ′ )
α.u = (2αx, 2αy, 2αz)
Then α(β.u) ̸= (αβ).u, for u ̸= 0̄
and 1.u ̸= u for u ̸= 0̄
∴ V = R3 is not a vector space.
Example 1.20. The set V = {x ∈ R / x ≥ 0} is not a vector space with respect to usual addition and scalar multiplication, because the existence-of-negative-vector axiom fails.
Example 1.21. Let V = R2 , and for u = (x, y), v = (x′ , y ′ ) and α ∈ R, we define
+ and . as follows
u + v = (x + x′ + 1, y + y ′ + 1)
α.(x, y) = (αx, αy)
A3: Let
u + v = u
⇒ v = (−1, −1) = 0̄
i.e. u + 0̄ = u, where 0̄ = (−1, −1)
∴ A3 holds.
A4: Let
u + v = 0̄
⇒ x + x′ + 1 = −1 and y + y ′ + 1 = −1
∴ x′ = −x − 2, y ′ = −y − 2
∴ v = (−x − 2, −y − 2) = −u is the negative vector of u ∈ R2
∴ u + v = 0̄, where v = (−x − 2, −y − 2)
∴ A4 holds.
M1: Let
α.(u + v) = α.(x + x′ + 1, y + y ′ + 1)
= (α(x + x′ + 1), α(y + y ′ + 1))
= (αx + αx′ + α, αy + αy ′ + α) (1.1)
M2: Similarly,
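The shifted addition of Example 1.21 has the unusual zero vector (−1, −1) and negative −u = (−x − 2, −y − 2) derived above, which are easy to confirm at sample points with a short Python sketch (illustrative; the name `vadd` is ours):

```python
# Spot-check of Example 1.21: u + v := (x + x' + 1, y + y' + 1) on R^2.
def vadd(u, v):
    return (u[0] + v[0] + 1, u[1] + v[1] + 1)

zero = (-1, -1)
for u in [(0, 0), (3, -5), (2.5, 7.0)]:
    assert vadd(u, zero) == u              # A3: (-1, -1) acts as the zero vector
    neg = (-u[0] - 2, -u[1] - 2)
    assert vadd(u, neg) == zero            # A4: the negative of u
print("A3 and A4 verified for the shifted addition")
```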
Example 1.24. Let V = {(1, x)/x ∈ R}, and for u = (1, x), v = (1, y) ∈ V and
α ∈ R, we define + and . as follows
u + v = (1, x + y)
α.u = (1, αx)
A3: Let
u+v =u
(1, x + y) = (1, x)
⇒x+y =x
⇒y=0
∴ v = (1, 0) ∈ V is the zero vector
∴ A3 holds.
A4: Let
u + v = (1, 0)
⇒ (1, x + y) = (1, 0)
∴ x+y =0
⇒ y = −x
∴ v = (1, −x) = −u is the negative vector of u ∈ V .
∴ A4 holds.
M1: Let
M2:
M3:
M4:
u + v = (xx′ , yy ′ )
α.u = (αx, αy)
A3: Let
u+v =u
(xx′ , yy ′ ) = (x, y)
⇒ xx′ = x and yy ′ = y
⇒ x′ = 1, y ′ = 1
∴ 0̄ = (1, 1) ∈ V = R2 is the zero element.
∴ A3 holds.
A4: Let
u + v = (1, 1)
⇒ (xx′ , yy ′ ) = (1, 1)
∴ x′ = 1/x , y ′ = 1/y , (x ̸= 0, y ̸= 0)
∴ (0, 0) has no negative vector.
∴ A4 does not hold.
M1: Let
α.(u + v) = α.(xx′ , yy ′ ) = (αxx′ , αyy ′ )
and α.u + α.v = (αx, αy) + (αx′ , αy ′ ) = (α2 xx′ , α2 yy ′ )
∴ α.(u + v) ̸= α.u + α.v
∴ M1 does not hold.
M2:
(α + β).u = ((α + β)x, (α + β)y)
and α.u + β.u = (αx, αy) + (βx, βy) = (αβx2 , αβy 2 )
∴ (α + β).u ̸= α.u + β.u
∴ M2 fails.
M3:
α.(β.u) = α.(βx, βy) = (αβx, αβy)
and (αβ).u = (αβx, αβy)
∴ α.(β.u) = (αβ).u
∴ M3 holds.
M4:
1.u = (1.x, 1.y) = (x, y)
=u
∴ M4 holds.
u + v = (x + x′ , y + y ′ )
α.u = (αx, 0)
Then only M4 fails; therefore, V = R2 is not a real vector space. Such a structure is sometimes called a weak vector space.
Example 1.29. Show that all points of R2 lying on a line is a vector space with
respect to standard operation of a vector addition and scalar multiplication, exactly
when line passes through the origin.
Solution: Let W = {(x, y)/y = mx}. Then W represents the line passing through the origin with slope m; that is, the line passing through the origin is the set W = {(x, mx)/x ∈ R}.
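The claim of Example 1.29 can be illustrated with a short Python sketch: combinations a·w1 + b·w2 of points on y = mx stay on the line (the criterion of the subspace theorems below), while a line y = mx + c with c ≠ 0 fails at once because it misses the zero vector. The helper `on_line` is our own name:

```python
# Closure check for the line y = m*x through the origin, and failure for a
# shifted line y = m*x + c (c != 0).
def on_line(p, m, c=0.0):
    return abs(p[1] - (m * p[0] + c)) < 1e-12

m = 2.0
w1, w2 = (1.0, m * 1.0), (-3.0, m * -3.0)   # two points on the line
for a in (0.0, 1.5, -2.0):
    for b in (1.0, -0.5):
        combo = (a * w1[0] + b * w2[0], a * w1[1] + b * w2[1])
        assert on_line(combo, m)   # a*w1 + b*w2 is still on the line

# A line not through the origin does not even contain the zero vector:
assert not on_line((0.0, 0.0), m, c=1.0)
print("closure holds exactly for the line through the origin")
```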
Exercise:1.1
(i) Determine which of the following sets are vector spaces under the given operations. For those that are not, list all axioms that fail to hold.
(ii) Let Ω be a nonempty set and let V consist of all functions defined on Ω which
have values in some field F . The vector operations are defined as follows
for any f , g ∈ V and any scalar α ∈ R. Then verify that V with these
operations is a vector space.
(iii) Using the axioms of a vector space V , prove the following for u, v, w ∈ V :
(a) u + w = v + w implies u = v.
(b) αu = βu and u ̸= 0̄ implies α = β, for any α, β ∈ R.
(c) αu = αv and α ̸= 0 implies u = v, for any α ∈ R.
1.3 Subspace
Definition 1.3. A nonempty subset W of a vector space V is said to be a subspace
of V if W itself is a vector space with respect to operations defined on V .
Remark 1.2. Any subset which does not contain the zero vector cannot be a subspace, because it won't be a vector space.
Necessary and Sufficient condition for subspace:
Theorem 1.2. A non-empty subset W of a vector space V is a subspace of V if and only if W satisfies the following:
C1 : w1 + w2 ∈ W, ∀ w1 , w2 ∈ W
C2 : kw ∈ W, ∀ w ∈ W and k ∈ F.
Proof. (i) Necessary Condition: If W is a subspace of a vector space V , then W itself is a vector space with respect to the operations defined on V .
∴ W satisfies C1 and C2 , as required.
(ii) Sufficient Condition: Conversely, suppose W is a non-empty subset of the vector space V satisfying C1 and C2 . Then
(a) For 0 ∈ F and w ∈ W ,
0.w = 0̄ ∈ W.
∴ A3 is satisfied.
(b) For w ∈ W and −1 ∈ F ,
−1.w = −w ∈ W.
∴ A4 is satisfied.
The commutative, associative and distributive laws are inherited by the subset W from the superset V .
∴ A1 , A2 , M1 , M2 , M3 , M4 hold in W .
Thus, W satisfies all axioms of a vector space.
∴ W is a subspace of V .
Theorem 1.3. A non-empty subset W of a vector space V is a subspace of V if and
only if αw1 + βw2 ∈ W ∀α, β ∈ F , w1 , w2 ∈ W .
Proof. Suppose W is subspace of a vector space V then W itself is a vector space
with respect to operations defined on V .
∴ by C2 , for α, β ∈ F , w1 , w2 ∈ W
∴ αw1 , βw2 ∈ W
Therefore, by C1
αw1 + βw2 ∈ W ∀α, β ∈ F, w1 , w2 ∈ W
Conversely, suppose W is a non-empty subset of vector space V such that
αw1 + βw2 ∈ W, ∀α, β ∈ F, w1 , w2 ∈ W
Then
(i) For α = β = 1, we get
αw1 + βw2 = 1.w1 + 1.w2
= w1 + w2 ∈ W
∴ C1 holds.
(ii) For α ∈ F , β = 0 ∈ F
αw1 + βw2 = α.w1 + 0.w2
= α.w1 + 0̄
= α.w1 ∈ W
∴ α.w1 ∈ W
∴ C2 holds.
For α = β = 0,
αw1 + βw2 = 0.w1 + 0.w2
= 0̄ + 0̄
= 0̄ ∈ W
∴ 0̄ ∈ W
∴ A3 is satisfied.
For α = −1 ∈ F , β = 0 ∈ F
αw1 + βw2 = −1.w1 + 0.w2
= −w1 + 0̄
= −w1 ∈ W
∴ − w1 ∈ W, ∀w1 ∈ W
∴ A4 is satisfied.
Since the commutative, associative and distributive laws are inherited by the subset W from the superset V ,
∴ A1 , A2 , M1 , M2 , M3 , M4 hold in W .
Thus, W satisfies all axioms of a vector space.
∴ W is a subspace of V .
w = α1 v1 + α2 v2 + · · · + αn vn
Theorem 1.4. If S is nonempty subset of a vector space V , then L(S) is the smallest
subspace of V containing S.
Proof. Let S = {v1 , v2 , · · · , vn } ⊆ V , then L(S) = { ∑_{i=1}^n αi vi / αi ∈ F } ⊆ V .
For each αi = 0, ∑_{i=1}^n αi vi = 0̄ ∈ L(S).
∴ L(S) ̸= ϕ (1.3)
Moreover, for w = ∑_{i=1}^n αi vi ∈ L(S) and a scalar k,
k.w = k ∑_{i=1}^n αi vi = ∑_{i=1}^n (kαi )vi = ∑_{i=1}^n αi′ vi
∴ kw ∈ L(S). (1.6)
From equations (1.3), (1.4), (1.5) and (1.6), L(S) is the subspace of V containing S.
Now, suppose W is another subspace of V containing S.
Then for v1 , v2 , · · · , vn ∈ S ⊂ W , every combination ∑_{i=1}^n αi vi ∈ L(S) also lies in W , for any αi 's ∈ F .
∴ L(S) ⊂ W
∴ 0̄ ∈ W and W ̸= ϕ (1.7)
Therefore, from (1.7) and (1.8) and the sufficient condition for a subspace, W is a subspace of Rn .
Exercise: 1.2
independent if
∑_{i=1}^n αi vi = 0̄ ⇒ αi = 0, ∀ i = 1, 2, · · · , n.
If there exist some nonzero values among α1 , α2 , · · · , αn such that
∑_{i=1}^n αi vi = 0̄,
then the set S is said to be a linearly dependent set in the vector space V .
Example 1.37. Show that the set of vectors S = {(1, 2, 0), (0, 3, 1), (−1, 0, 1)} is linearly independent in the Euclidean space R3 .
Solution: For linear independence, we consider
a(1, 2, 0) + b(0, 3, 1)+c(−1, 0, 1) = 0̄
⇒a−c=0
2a + 3b = 0
b+c=0
Solving these equations, we get
a = 0, b = 0, c = 0.
Therefore, set of vectors S = {(1, 2, 0), (0, 3, 1), (−1, 0, 1)} is linearly independent
set in R3 .
Example 1.38. Show that the set of vectors S = {(1, 3, 2), (1, −7, −8), (2, 1, −1)}
is a linearly dependent set in R3 .
Solution: For linear dependence, we consider
a(1, 3, 2) + b(1, −7, −8)+c(2, 1, −1) = 0̄
⇒ a + b + 2c = 0
3a − 7b + c = 0
2a − 8b − c = 0
This homogeneous system has a nonzero solution as follows
a = 3, b = 1, c = −2.
Therefore, set of vectors S = {(1, 3, 2), (1, −7, −8), (2, 1, −1)} is linearly dependent
set in R3 .
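Both Examples 1.37 and 1.38 can be checked with the standard determinant criterion: three vectors in R3 are linearly independent exactly when the determinant of the matrix formed from them is nonzero. A Python sketch (the helper `det3` is ours) also confirms the dependence relation 3v1 + v2 − 2v3 = 0̄ found above:

```python
# Determinant test for linear independence in R^3 (Examples 1.37-1.38).
def det3(r1, r2, r3):
    a, b, c = r1
    d, e, f = r2
    g, h, i = r3
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

# Example 1.37: an independent set (nonzero determinant)
assert det3((1, 2, 0), (0, 3, 1), (-1, 0, 1)) != 0

# Example 1.38: a dependent set (zero determinant), with 3*v1 + v2 - 2*v3 = 0
v1, v2, v3 = (1, 3, 2), (1, -7, -8), (2, 1, -1)
assert det3(v1, v2, v3) == 0
combo = tuple(3 * x + y - 2 * z for x, y, z in zip(v1, v2, v3))
assert combo == (0, 0, 0)
print("determinant test agrees with both examples")
```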
Example 1.39. Find t, for which u = (cos t, sin t), v = (− sin t, cos t) form a linearly independent set in R2 .
Consider αu + βv = 0̄
α(cos t, sin t) + β(− sin t, cos t) = 0̄
⇒ α cos t − β sin t = 0
α sin t + β cos t = 0
Example 1.40. Find t, for which u = (cos t, sin t), v = (sin t, cos t) form a linearly independent set in R2 .
Consider αu + βv = 0̄
α(cos t, sin t) + β(sin t, cos t) = 0̄
⇒ α cos t + β sin t = 0
α sin t + β cos t = 0
Example 1.42. In the vector space of continuous functions, consider the vectors f1 (x) = sin(x) cos(x) and f2 (x) = sin(2x). The set {f1 (x), f2 (x)} is linearly dependent because f2 (x) = 2f1 (x).
MATLAB:
x = -pi:0.001:pi;
f1 = sin(x).*cos(x);
f2 = sin(2*x);
plot(x, f1, x, f2)
Example 1.43. In the vector space C(R) of continuous functions over R, the vectors f1 (x) = sin x and f2 (x) = cos x are linearly independent because f2 (x) ̸= c f1 (x) for any real c.
(i) L(B) = V
a1 e1 + a2 e2 + · · · + an en = 0̄
⇒ (a1 , a2 , · · · , an ) = (0, 0, · · · , 0)
∴ a1 = a2 = · · · = an = 0
u = u1 e1 + u2 e2 + · · · + un en
∴ L(B) = Rn .
From (i) and (ii), we prove B = {e1 , e2 , · · · , en } is a basis for Rn .
This basis is called the natural or standard basis for Rn .
Example 1.45. Let C be a real vector space. Show that B = {2 + 3i, 5 − 6i} is a
basis for C.
a = b = 0.
Therefore, B is linearly independent.
(ii) Any u = x + iy = (x, y) ∈ C can be written as
a + b + 5c = 0
b − 2c = 0
⇒ |A| = −9 ̸= 0.
Therefore, the above system has only the trivial solution, as follows:
a = b = c = 0.
∴ a = −(1/9)(2a0 − 4a1 − 3a2 ),
b = −(1/9)(−5a0 + a1 + 3a2 ),
c = −(1/9)(−a0 + 2a1 − 3a2 ) ∈ R
such that
a0 + a1 x + a2 x2 = ap1 + bp2 + cp3 .
∴ L(B) = P2 .
Hence, from (i) and (ii), we prove B is basis for P2 .
Theorem 1.5. If B = {v1 , v2 , · · · , vn } is a basis for the vector space V , then any vector v ∈ V is uniquely expressed as a linear combination of the basis vectors.
Proof. Suppose v = ∑_{i=1}^n αi vi = ∑_{i=1}^n βi vi for some αi 's, βi 's ∈ F .
Then
∑_{i=1}^n αi vi − ∑_{i=1}^n βi vi = 0̄
∴ ∑_{i=1}^n (αi − βi )vi = 0̄
As B is a basis and hence a linearly independent set,
∴ αi − βi = 0, ∀ i = 1, 2, · · · , n.
∴ αi = βi , ∀ i = 1, 2, · · · , n.
The coordinate vector of v relative to the basis B is (v)B = (α1 , α2 , · · · , αn ).
Example 1.49. Let B = {(1, 0), (1, 2)} be a basis of R2 and (x)B = (−2, 3), then
x = −2b1 + 3b2 = −2(1, 0) + 3(1, 2) = (1, 6)
Let x = (4, 5) and the basis B = {(2, 1), (−1, 1)}. We need to find c1 and c2 such
that
x = c1 b1 + c2 b2
⇒ (4, 5) = c1 (2, 1) + c2 (−1, 1)
⇒ 2c1 − c2 = 4
c1 + c2 = 5.
Solving, we get c1 = 3, c2 = 2.
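For a basis of R2 , the coordinate computation above is a 2×2 linear system, so Cramer's rule gives the coordinates in closed form. A Python sketch (the helper `coords_2d` is our own name):

```python
# Coordinates of x relative to a basis {b1, b2} of R^2 by Cramer's rule,
# solving c1*b1 + c2*b2 = x.
def coords_2d(b1, b2, x):
    det = b1[0] * b2[1] - b2[0] * b1[1]   # nonzero since {b1, b2} is a basis
    c1 = (x[0] * b2[1] - b2[0] * x[1]) / det
    c2 = (b1[0] * x[1] - x[0] * b1[1]) / det
    return c1, c2

# the example above: x = (4, 5), B = {(2, 1), (-1, 1)}
c1, c2 = coords_2d((2, 1), (-1, 1), (4, 5))
assert (c1, c2) == (3.0, 2.0)
# reconstruct x from its coordinates
assert (c1 * 2 + c2 * -1, c1 * 1 + c2 * 1) == (4.0, 5.0)
print("coordinate vector:", (c1, c2))
```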
Example 1.51. Find co-ordinate vectors of (1, 0, 0), (0, 1, 0), (0, 0, 1) and (1, 1, −1)
with respect to basis B = {(1, 2, 1), (2, 1, 0), (1, −1, 2)} of Euclidean space R3 .
Solution: Solving the corresponding system for a general vector v = (x, y, z), the coordinate vectors are as follows:
∴ (v)B = (a, b, c) = −(1/9) (2x − 4y − 3z, −5x + y + 3z, −x + 2y − 3z)
∴ (1, 0, 0)B = (1/9)(−2, 5, 1)
(0, 1, 0)B = (1/9)(4, −1, −2)
(0, 0, 1)B = (1/3)(1, −1, 1)
(1, 1, −1)B = (1/9)(−1, 7, −4).
Example 1.52. Let B = { [1 0; 0 0], [1 1; 0 0], [1 1; 1 0], [1 1; 1 1] } be a basis for the vector space M2 (R), writing [a b; c d] for the 2 × 2 matrix with rows (a, b) and (c, d). Find the coordinate vectors of the matrices
[2 −3; −1 0], [−1 0; 0 1], [2 1; 5 −1].
Solution: Let [x y; z w] = a[1 0; 0 0] + b[1 1; 0 0] + c[1 1; 1 0] + d[1 1; 1 1].
This gives the following system of equations
a+b+c+d=x
b+c+d=y
c+d=z
d=w
a = x − y, b = y − z, c = z − w, d = w.
The coordinate vectors of the general and the given matrices relative to the basis B are as follows:
∴ [x y; z w]B = (a, b, c, d) = (x − y, y − z, z − w, w)
[2 −3; −1 0]B = (5, −2, −1, 0)
[−1 0; 0 1]B = (−1, 0, −1, 1)
[2 1; 5 −1]B = (1, −4, 6, −1).
(iv) dim{P} = ∞
Subspaces of R3 :
(ii) There are infinitely many subspaces of dimension 1 (all lines passing through the origin).
(iii) There are infinitely many subspaces of dimension 2 (all planes passing through the origin).
Theorem 1.6. If W ⊆ V is a subspace of a vector space V , then dim{W } ≤ dim{V }.
Theorem 1.7. Let V be an n-dimensional vector space (n ≥ 1). Then
(i) Any linearly independent subset of V with n elements is a basis.
(ii) Any subset of V with n elements which spans V is a basis.
Proof. (i) Suppose S = {v1 , v2 , · · · , vn } is a linearly independent set in V .
To prove S is a basis of V , it is sufficient to prove L(S) = V .
If v = vi ∈ S, 1 ≤ i ≤ n, then v ∈ L(S).
If v ∈ V − S, then S ′ = S ∪ {v} contains n + 1 vectors, more than dim V = n.
Therefore, S ′ is a linearly dependent set.
∴ ∑_{i=1}^n αi vi + αv = 0̄, with at least one of α1 , · · · , αn , α nonzero.
If α = 0 then
∑_{i=1}^n αi vi = 0̄ ⇒ αi = 0, ∀ i, as S is linearly independent, contradicting the dependence of S ′ .
∴ α ̸= 0.
αv = − ∑_{i=1}^n αi vi
v = ∑_{i=1}^n (− αi /α) vi .
Thus, any vector in V is expressed as linear combination of vectors in S.
Therefore, L(S) = V .
(ii) Let S be a subset of an n-dimensional vector space V containing n vectors such that L(S) = V .
To prove S is a basis of V , it is sufficient to prove that S is linearly independent.
On the contrary, suppose S is a linearly dependent set. Then V is spanned by fewer than n vectors, so any linearly independent set in V contains fewer than n vectors.
Therefore, dim V < n, which contradicts the hypothesis.
Hence, S is linearly independent.
c1 v1 + c2 v2 + c3 v3 = 0̄
Example 1.56. (i) B = {(1, 0, 0), (2, 3, 0)} is a set of 2 linearly independent vectors, but it cannot span R3 , for which 3 vectors are needed; so it cannot be a basis.
(ii) B = {(1, 0, 0), (2, 3, 0), (4, 5, 6)} is a set of 3 linearly independent vectors that spans R3 , so it is a basis of R3 .
(iii) B = {(1, 0, 0), (2, 3, 0), (4, 5, 6), (7, 8, 9)} is a set of 4 linearly dependent vectors that spans R3 , so it cannot be a basis, as n(B) = 4 > 3 = dim R3 .
Theorem 1.8. Any two bases of a finite dimensional vector space V have the same number of elements.
Proof. Let B = {v1 , v2 , · · · , vn } and B ′ = {u1 , u2 , · · · , um } be the bases of the vector
space V .
Then B and B ′ are linearly independent sets.
Now, if B is basis and B ′ is linearly independent set then
Thus, any two bases of a finite dimensional vector space V have the same number of elements, as was to be proved.
Note: From the definitions of linear dependence and basis, we have
(i) If B is a basis for an n-dimensional vector space V and S ⊆ V is such that the number of elements in S is greater than n, then S is linearly dependent.
If αr ̸= 0 then
αr vr = − ∑_{i=1, i̸=r}^n αi vi
∴ vr = ∑_{i=1, i̸=r}^n (− αi /αr ) vi .
Proof. Suppose
k1 v1 + k2 v2 + · · · + kr vr = 0̄
k1 (v11 , v12 , · · · , v1n ) + k2 (v21 , v22 , · · · v2n ) + · · ·
+kr (vr1 , vr2 , · · · , vrn ) = 0̄
S ′ = {v1 , v2 , · · · vr , vr+1 , · · · , vn } of V.
S1 = {v1 , v2 , · · · , vr , vr+1 }
Exercise: 1.3
(i) Determine which of the following are linear combinations of u = (0, −1, 2) and v = (1, 3, −1)?
(a) (2, 2, 2) (b) (3, 1, 5) (c) (0, 4, 5) (d) (0, 0, 0).
(ii) Express the following as linear combinations of u = (2, 1, 4) , v = (1, −1, 3) and
w = (3, 2, 5).
(a) (−9, −7, −15) (b) (6, 11, 6) (c) (7, 8, 9) (d) (0, 0, 0).
(vii) Let S = {(2, 1, 0, 3), (3, −1, 5, 2), (−1, 0, 2, 1)}. Which of the following vectors
are in linear span of S ?
(a) (2, 3, −7, 3) (b) (1, 1, 1, 1) (c) (−4, 6, −13, 4).
(viii) By inspection, explain why the following are linearly dependent sets of vectors?
(x) Show that the vectors v1 = (0, 3, 1, −1), v2 = (6, 0, 5, 1), v3 = (4, −7, 1, 3) form
a linearly dependent set in R4 . Express each vector as a linear combination of
remaining two.
(xi) For which values of λ do the following sets of vectors form a linearly dependent set in R3 ?
(xii) Show that if S = {v1 , v2 , ..., vr } is a linearly independent set of vectors, then so is every nonempty subset of S.
(xiii) Show that if S = {v1 , v2 , ..., vr } is a linearly dependent set of vectors, then S ′ = {v1 , v2 , ..., vr , vr+1 , ..., vn } is also a linearly dependent set.
(xiv) Prove: For any vectors u, v and w, the vectors u − v, v − w and w − u form a linearly dependent set.
(xv) By inspection, explain why the following sets of vectors are not bases for the indicated vector spaces.
(a) S = {(−1, 2, 4), (5, −10, −20), (1, 0, 2), (1, 2, 3)} for R3
(b) S = {p1 = 1 − 2x + 4x2 , p2 = −3 + 6x − 12x2 } for P2
(c) S = {(3, −1), (4, 5), (−2, 9)} for R2
(d) S = { A = [4 0; −2 −2], B = [−4 0; 2 2] } for M22
(e) S = {(1, 2, 3), (−8, 2, 4), (2, 4, 6)} for R3 .
(xviii) Show that the following set of vectors is a basis for M22 .
A = [3 6; 3 −6], B = [0 −1; −1 0], C = [0 −8; −12 −4], D = [1 0; −1 2]
(xx) Find the coordinate vector of w relative to the basis B = {v1 , v2 } for R2 .
(xxi) Find the coordinate vector of w relative to the basis B = {v1 , v2 , v3 } for R3 .
(xxii) Find the coordinate vector of p relative to the basis B = {p1 , p2 , p3 } for P2 .
(a) p1 = 1, p2 = x, p3 = x2 ; p = 2 − x + 3x2
(b) p1 = 1 + x, p2 = 1 + x2 , p3 = x + x2 ; p = 2 − x + x2
(xxiii) Find the coordinate vector of A relative to the basis B = {A1 , A2 , A3 , A4 } for M22 , where
A = [2 0; −1 3], A1 = [−1 1; 0 0], A2 = [1 1; 0 0], A3 = [0 0; 1 0], A4 = [0 0; 0 1].
(xxiv) If B = {v1 , v2 , v3 } is a basis for vector space V then show that
B ′ = {v1 , v1 + v2 , v1 + v2 + v3 } is also a basis of V .
(xxviii) Find t, for which u = (e^{at} , a e^{at} ), v = (e^{bt} , b e^{bt} ) form a linearly independent set in R2 .
w = w + 0̄ for w ∈ W1 , 0̄ ∈ W2
w = 0̄ + w for w ∈ W2 , 0̄ ∈ W1
Since, these expressions are unique, w = 0̄ and consequently
W1 ∩ W2 = {0̄}.
v = w1 + w2 = w1′ + w2′
Corollary 1.1. If the vector space is the direct sum of its subspaces W1 , W2 , ..., Wn
i.e. V = W1 ⊕ W2 ⊕ ... ⊕ Wn then dim.V = dim.W1 + dim.W2 + ... + dim.Wn .
Exercise: 1.4
1. Let V = Mn (R) be the real vector space of all n × n matrices. Then show that,
(a) W1 = {A + At /A ∈ V } is subspace of V .
(b) W2 = {A − At /A ∈ V } is subspace of V .
(c) V = W1 ⊕ W2 .
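Problem 1 rests on the classical decomposition of any square matrix into a symmetric and a skew-symmetric part, A = (1/2)(A + Aᵗ) + (1/2)(A − Aᵗ), with the first summand in W1 and the second in W2. A Python sketch of this fact (illustrative; the helpers `transpose`, `add`, `scale` are our own names):

```python
# Decompose a square matrix into symmetric + skew-symmetric parts.
def transpose(A):
    return [list(col) for col in zip(*A)]

def add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def scale(c, A):
    return [[c * x for x in row] for row in A]

A = [[1.0, 2.0, 0.0],
     [4.0, -1.0, 3.0],
     [5.0, 0.5, 2.0]]

S = scale(0.5, add(A, transpose(A)))               # symmetric part (in W1)
K = scale(0.5, add(A, scale(-1.0, transpose(A))))  # skew-symmetric part (in W2)

assert S == transpose(S)                  # S is symmetric
assert K == scale(-1.0, transpose(K))     # K is skew-symmetric
assert add(S, K) == A                     # A = S + K, so V = W1 + W2
print("A decomposes into symmetric + skew parts")
```

The uniqueness of the decomposition (so that the sum is direct) follows because the only matrix that is both symmetric and skew-symmetric is the zero matrix.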
2. Let V = P100 be the real vector space of all polynomials of degree ≤ 100. Show that,
3. Let V = P100 be the real vector space of all polynomials of degree ≤ 100. Show that,
Null(A) or Nullity(A).
Theorem 1.17. If a matrix is in reduced row echelon form, then the column vectors that contain leading 1's form a basis for the column space of the matrix.
Example 1.59. Find basis for the subspace of R3 spanned by the vectors (1, 2, −1),
(4, 1, 3), (5, 3, 2) and (2, 0, 2).
Solution: The subspace spanned by the vectors is the row space of the matrix
A = [1 2 −1; 4 1 3; 5 3 2; 2 0 2]
We shall reduce the matrix A to its row echelon form by elementary row transforma-
tions. After applying successive row transformations, we obtain the following row
echelon form of matrix A.
R = [1 2 −1; 0 1 −1; 0 0 0; 0 0 0]
∴ Basis for row space of A = basis for space generated by given vectors
= {(1, 2, −1), (0, 1, −1)}.
form a basis for the column space of R.
Thus the corresponding vectors in A viz.
c1 = [1; 2; 1], c2 = [2; 1; 1], c4 = [1; 1; 1]
form the basis for the column space of A.
Example 1.61. Determine basis for (a) range space and (b) null space of A given
by
A = [1 2 3 1 5; 2 1 3 1 4; 1 1 2 1 3]
Solution: (a) The basis for the range space is the basis for the column space of A.
From the above example, the vectors
c1 = [1; 2; 1], c2 = [2; 1; 1], c4 = [1; 1; 1]
form the basis for the range space of A.
∴ rank(A) = 3.
(b) The null space of A is the solution space of the homogeneous system AX = 0̄.
[1 2 3 1 5; 2 1 3 1 4; 1 1 2 1 3] [x1 ; x2 ; x3 ; x4 ; x5 ] = [0; 0; 0]
To obtain the general solution of above system, we reduce the coefficient matrix A
to reduced echelon form and the reduced form is as follows
R = [1 0 1 0 1; 0 1 1 0 2; 0 0 0 1 0].
Therefore, the reduced system of equations is
x1 + x3 + x5 = 0
x2 + x3 + 2x5 = 0
x4 = 0
Setting the free variables x3 = s and x5 = t, the general solution is
x1 = −s − t, x2 = −s − 2t, x3 = s, x4 = 0, x5 = t
In matrix form,
[x1 ; x2 ; x3 ; x4 ; x5 ] = [−s − t; −s − 2t; s; 0; t] = s [−1; −1; 1; 0; 0] + t [−1; −2; 0; 0; 1]
Hence, the vectors (−1, −1, 1, 0, 0) and (−1, −2, 0, 0, 1) form the basis for the null space of A.
∴ Nullity(A) = 2.
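The conclusions of Example 1.61 can be verified computationally. The sketch below (our own helper `rank`, written with exact `Fraction` arithmetic to avoid rounding) confirms rank(A) = 3, the dimension theorem Nullity(A) = 5 − rank(A) = 2, and that both basis vectors really lie in Null(A):

```python
# Exact rank computation by Gauss-Jordan elimination over the rationals,
# applied to the matrix A of Example 1.61.
from fractions import Fraction

def rank(rows):
    m = [[Fraction(x) for x in r] for r in rows]
    r = 0
    for col in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if piv is None:
            continue                      # no pivot in this column
        m[r], m[piv] = m[piv], m[r]       # swap the pivot row up
        m[r] = [x / m[r][col] for x in m[r]]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                m[i] = [a - m[i][col] * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

A = [[1, 2, 3, 1, 5],
     [2, 1, 3, 1, 4],
     [1, 1, 2, 1, 3]]

assert rank(A) == 3
assert 5 - rank(A) == 2                   # dimension theorem: Nullity = n - rank

# both null-space basis vectors found above satisfy A v = 0
for v in [(-1, -1, 1, 0, 0), (-1, -2, 0, 0, 1)]:
    assert all(sum(a * x for a, x in zip(row, v)) == 0 for row in A)
print("rank =", rank(A), " nullity =", 5 - rank(A))
```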
Example 1.62. Find basis and dimension of row space of matrix A given by
A = [1 2 −1 2; 3 0 1 4; 1 −1 1 1]
Solution: We shall reduce matrix A to its row echelon form. Applying successive row transformations to
A = [1 2 −1 2; 3 0 1 4; 1 −1 1 1],
we get
R = [1 2 −1 2; 0 1 −2/3 1/3; 0 0 0 0]
∴ rank(A) = 2.
Theorem 1.19. The rank of a matrix Am×n is r if and only if dim{Null(A)} is n − r.
Proof. Let v be in Rp .
Then Bv = 0̄ implies ABv = 0̄.
Hence, N (B) ⊆ N (AB).
∴ dim N (B) ≤ dim N (AB)
∴ Nullity(B) ≤ Nullity(AB).
By dimension theorem, we have
Similarly,
rank(B ′ A′ ) ≤ rank(A′ )
rank((AB)′ ) ≤ rank(A′ )
∴ rank(AB) ≤ rank(A) (1.13)
Corollary 1.2. If Pm×m and Bn×n are two nonsingular matrices then
where A is an m × n matrix.
Proof. We observe that n − rank(A) is dim N (A), that is, the number of independent vectors v satisfying Av = 0̄.
Suppose rank(P ) < m. Then
m = rank(P Q) ≤ rank(P ) < m,
which is a contradiction.
∴ rank(P ) = m.
Similarly, rank(Q) = m.
Hence rank(P ) = rank(Q) = m, as was to be proved.
Exercise: 1.5
(i) Determine the basis and dimension of range space and null space of A, where
A is given by
(a) A = [1 2 1 1; 2 −3 7 9; 1 4 −2 −1]
(b) A = [2 3 1 1; 1 −1 3 −1; 1 0 2 −1; 3 5 1 2]
(ii) Find a basis for the row space of A, where A is given in the above example.
(iii) Find the rank and nullity of the matrix, and verify that the values obtained satisfy the dimension theorem.
(a) A = [1 −1 3; 5 −4 −4; 7 −6 2]
(b) A = [1 4 5 2; 2 1 3 0; −1 3 2 2].
(iv) Find a basis for the subspace W of the Euclidean space R4 spanned by the set
(v) Find a basis for the subspace W of the vector space of polynomials P3 spanned by the set
{x3 + x2 − 1, x3 + 2x2 + 3x, 2x3 + 3x2 + 3x − 1}.
Chapter 2
INNER PRODUCT SPACES
2.1 Introduction
In the previous chapter we studied abstract vector spaces. These are a generalisation of the geometric spaces R2 and R3 . But the latter have more structure than just that of a vector space: in R2 and R3 we have the concepts of lengths and angles. In those spaces we use the dot product for this purpose, but the dot product only makes sense when we have components. In the absence of components we introduce something called an inner product to play the role of the dot product. In this chapter we study this additional structure that a real vector space may have.
u.v = u1 v1 + u2 v2 + · · · + un vn .
With the dot product we have geometric concepts such as the length of a vector,
the angle between two vectors, orthogonality, etc. We shall push these concepts to
abstract vector spaces so that geometric concepts can be applied to describe abstract
vectors.
4. Positivity axiom : For any u ∈ V , < u, u >≥ 0 and < u, u >= 0 if and only if
u = 0̄.
A real vector space V with a real inner product is called a real inner product space and is denoted by (V, < >).
Theorem 2.1. If u, v and w are vectors in a real inner product space, and k is any
scalar, then
Proof. :
1. Consider
Consider
2.
3.
4.
5.
Illustrations
The vector space Rn with this special inner product (dot product) is called the Euclidean n-space, and the dot product is called the standard inner product on Rn .
Example 2.3. Show that for the vectors u = (u1 , u2 ) and v = (v1 , v2 ) in R2 , the formula < u, v > = 5u1 v1 − u1 v2 − u2 v1 + 10u2 v2 defines an inner product. For the additivity axiom, consider w = (w1 , w2 ):
< u + v, w > = 5(u1 + v1 )w1 − (u1 + v1 )w2 − (u2 + v2 )w1 + 10(u2 + v2 )w2
= 5u1 w1 + 5v1 w1 − u1 w2 − v1 w2 − u2 w1 − v2 w1 + 10u2 w2 + 10v2 w2
= (5u1 w1 − u1 w2 − u2 w1 + 10u2 w2 ) + (5v1 w1 − v1 w2 − v2 w1 + 10v2 w2 )
=< u, w > + < v, w >
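The remaining axioms can be spot-checked numerically. A Python sketch (the inner-product formula is the weighted one used in the expansion above; the random sampling is only evidence, not a proof):

```python
import random

def ip(u, v):
    # weighted inner product on R^2 from Example 2.3
    return 5*u[0]*v[0] - u[0]*v[1] - u[1]*v[0] + 10*u[1]*v[1]

random.seed(0)
for _ in range(100):
    u = (random.uniform(-5, 5), random.uniform(-5, 5))
    v = (random.uniform(-5, 5), random.uniform(-5, 5))
    w = (random.uniform(-5, 5), random.uniform(-5, 5))
    k = random.uniform(-5, 5)
    assert abs(ip(u, v) - ip(v, u)) < 1e-9                 # symmetry
    uv = (u[0] + v[0], u[1] + v[1])
    assert abs(ip(uv, w) - (ip(u, w) + ip(v, w))) < 1e-9   # additivity
    ku = (k*u[0], k*u[1])
    assert abs(ip(ku, v) - k*ip(u, v)) < 1e-9              # homogeneity
    assert ip(u, u) >= 0   # positivity: 5x^2 - 2xy + 10y^2 has negative discriminant
print("all axioms hold on the random samples")
```

Positivity also follows directly by hand: 5u1² − 2u1u2 + 10u2² is a positive definite quadratic form, since its discriminant 4 − 200 < 0.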
Example 2.4. Show that for the vectors u = (u1 , u2 ) and v = (v1 , v2 ) in R2 ,
Example 2.6. If A = [aij ] and B = [bij ] are vectors (matrices) in Mm×n (R) (the vector space of m × n matrices), then we define the inner product on Mm×n (R) as
< A, B > = ∑_{i=1}^{m} ∑_{j=1}^{n} aij bij (Verify).
Remark 2.1. (i) The identity matrix In generates Euclidean inner product on Rn .
Example 2.9. Let C[a, b] be the vector space of real valued continuous functions
defined on [a, b]. For f, g ∈ C[a, b], show that
< f, g > = ∫_a^b f (x)g(x) dx
defines an inner product on C[a, b].
Example 2.10. Find inner product on R2 if < e1 , e1 >= 2, < e2 , e2 >= 3, and
< e1 , e2 >= −1, where e1 = (1, 0) and e2 = (0, 1).
u = u1 e1 + u2 e2
v = v1 e1 + v2 e2
The first three axioms are obvious. It remains to check the positivity axiom for u ∈ R2 .
Now,
Note: Let Pn [a, b] be the real vector space of all polynomials of degree less than or equal to n. Then Pn [a, b] ⊂ C[a, b], as any polynomial is a continuous function.
Therefore, for p(x), q(x) ∈ Pn [a, b],
< p(x), q(x) > = ∫_a^b p(x)q(x) dx
defines an inner product on Pn [a, b].
Exercise: 2.1
1. Let < u, v > be the Euclidean inner product on R2 , and u = (3, −2), v = (4, 5),
w = (−1, 6) and k = −4. Verify the following:
The distance between u and v in an inner product space is defined by d(u, v) = ∥u − v∥.
Theorem 2.2. The norm in an inner product space V satisfies the following properties for any u, v ∈ V and c ∈ R:
1. ∥u∥ ≥ 0, and ∥u∥ = 0 if and only if u = 0̄;
2. ∥cu∥ = |c| ∥u∥.
Proof.
(d) ∥ 2w − v ∥
(e) ∥ u − 2v + 4w ∥
Solution: The computations below use the given values ∥u∥ = 1, ∥v∥ = 2, ∥w∥ = 7, < u, v > = 2, < u, w > = 5 and < v, w > = −3.
(a)
(b)
< 2u − w, 3u + 2w > = (2)(3) < u, u > +(2)(2) < u, w > +(−1)(3) < w, u >
+ (−1)(2) < w, w >
= 6 ∥ u ∥² + 4(5) − 3(5) − 2 ∥ w ∥²
= 6(1²) + 5 − 2(7²)
= −87.
(c)
< u − v − 2w, 4u + v > = 4 < u, u > + < u, v > − 4 < v, u > − < v, v > − 8 < w, u > − 2 < w, v >
= 4 ∥ u ∥² + 2 − 4(2) − ∥ v ∥² − 8(5) − 2(−3)
= 4(1²) + 2 − 8 − 2² − 40 + 6
= −40.
(d)
∥ 2w − v ∥2 =< 2w − v, 2w − v >
= (2)(2) < w, w > +(2)(−1) < w, v > +
(−1)(2) < v, w > +(−1)(−1) < v, v >
= 4 ∥ w ∥2 −4 < v, w > + ∥ v ∥2
= 4(7²) − 4(−3) + 2²
= 196 + 12 + 4 = 212
∴ ∥ 2w − v ∥ = √212.
(e)
∥ u − 2v + 4w ∥2 =< u − 2v + 4w, u − 2v + 4w >
=∥ u ∥2 +(−4) < u, v > +8 < u, w > +4 ∥ v ∥2 +
(−16) < v, w > +16 ∥ w ∥2
= 12 − 4(2) + 8(5) + 4(22 ) − 16(−3) + 16(72 )
= 881
∴ ∥ u − 2v + 4w ∥ = √881.
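Since only the pairwise inner products of u, v, w are given, the whole computation can be cross-checked by encoding them in a Gram matrix and using bilinearity. This Gram-matrix device is our own bookkeeping trick, not part of the text; the sketch below evaluates parts (b)–(e):

```python
import numpy as np

# Gram matrix of (u, v, w): entry (i, j) is the inner product of the i-th and j-th
# vectors, built from <u,u>=1, <v,v>=4, <w,w>=49, <u,v>=2, <u,w>=5, <v,w>=-3.
G = np.array([[1.0, 2.0, 5.0],
              [2.0, 4.0, -3.0],
              [5.0, -3.0, 49.0]])

def ip(x, y):
    # inner product of x1*u + x2*v + x3*w with y1*u + y2*v + y3*w, by bilinearity
    return np.array(x) @ G @ np.array(y)

print(ip([2, 0, -1], [3, 0, 2]))             # (b) <2u - w, 3u + 2w>
print(ip([1, -1, -2], [4, 1, 0]))            # (c) <u - v - 2w, 4u + v>
print(np.sqrt(ip([0, -1, 2], [0, -1, 2])))   # (d) ||2w - v||
print(np.sqrt(ip([1, -2, 4], [1, -2, 4])))   # (e) ||u - 2v + 4w||
```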
From this we hope that the angle θ between two nonzero vectors u, v in general
inner product space V will be given by,
cos θ = < u, v > / (∥u∥ ∥v∥)        (2.3)
Formula (2.3) is well defined only when
| < u, v > | / (∥u∥ ∥v∥) ≤ 1, as | cos θ| ≤ 1.
For this we prove Cauchy-Schwarz inequality in following theorem.
Theorem 2.5. Cauchy-Schwarz inequality
If u, v are vectors in real inner product space V then
< u, v >2 ≤ < u, u >< v, v > .
Equivalently | < u, v > | ≤ ∥u∥∥v∥. (2.4)
Proof. Case i) If u = 0̄ or v = 0̄, then both sides of (2.4) are zero and the result holds.
Case ii) Assume that u ̸= 0̄. Let a = < u, u >, b = 2 < u, v >, c = < v, v > and let t
be any real number. Then by the positivity axiom,
0 ≤ < tu + v, tu + v > = < u, u > t2 + 2 < u, v > t+ < v, v >
= at2 + bt + c = f (t)
f ′ (t) = 2at + b,   f ′′ (t) = 2a = 2 < u, u > > 0
f ′ (t) = 0 ⇒ t = −b/(2a) is the critical point of f (t).
By the second derivative test, f (t) is minimum at t = −b/(2a).
Therefore,
0 ≤ f (−b/(2a)) = (−b² + 4ac)/(4a)
∴ b² − 4ac ≤ 0.
Substituting a, b and c gives 4 < u, v >² ≤ 4 < u, u > < v, v >, that is, < u, v >² ≤ < u, u > < v, v >.
∥u + v∥ ≤ ∥u∥ + ∥v∥.
< u, v >= 0.
By definition,
Proof. To prove S is linearly independent we have to show that if ∑_{i=1}^{n} ki vi = 0̄ then each scalar ki = 0.
Assume that ∑_{i=1}^{n} ki vi = 0̄. Then for each vj ∈ S we have,
⟨ ∑_{i=1}^{n} ki vi , vj ⟩ = < 0̄, vj > = 0
∑_{i=1, i̸=j}^{n} ki < vi , vj > + kj < vj , vj > = 0        (2.8)
Solution: We have
From this, B = {p, q, r} is an orthogonal set, and hence a linearly independent set, in the inner product space P2 , but not an orthonormal set.
Moreover, n(B) = dim P2 = 3.
Therefore, B = {p, q, r} is an orthogonal basis, but not an orthonormal basis, for the inner product space P2 .
Example 2.13. Let u = (cos t, sin t), v = (− sin t, cos t) in R2 . Show that the set of vectors B = {u, v} is an orthonormal basis for the Euclidean inner product space R2 for any real t.
Solution: We have
From this, B = {u, v} is an orthonormal set, and hence a linearly independent set, in the Euclidean inner product space R2 for any real t.
Moreover, n(B) = dim R2 = 2.
Therefore, B = {u, v} is an orthonormal basis for the Euclidean inner product space R2 for any real t.
Example 2.14. Let u = (cos t, sin t, 0), v = (− sin t, cos t, 0), w = (0, 0, 1) in R3 . Show that the set of vectors B = {u, v, w} is an orthonormal basis for the Euclidean inner product space R3 for any real t.
Solution: We have
From this, B = {u, v, w} is an orthonormal set, and hence a linearly independent set, in the Euclidean inner product space R3 for any real t.
Moreover, n(B) = dim R3 = 3.
Therefore, B = {u, v, w} is an orthonormal basis for the Euclidean inner product space R3 for any real t.
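The claim of Example 2.14 can be checked numerically for a sample of values of t. A short Python sketch:

```python
import math

def dot(a, b):
    return sum(x*y for x, y in zip(a, b))

# For each sampled t, the three vectors are pairwise orthogonal and each has norm 1.
for t in [0.0, 0.3, 1.0, -2.5, math.pi/3]:
    u = (math.cos(t), math.sin(t), 0.0)
    v = (-math.sin(t), math.cos(t), 0.0)
    w = (0.0, 0.0, 1.0)
    assert abs(dot(u, u) - 1) < 1e-12 and abs(dot(v, v) - 1) < 1e-12 and dot(w, w) == 1
    assert abs(dot(u, v)) < 1e-12 and dot(u, w) == 0 and dot(v, w) == 0
print("B = {u, v, w} is orthonormal for all sampled t")
```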
Definition 2.7. Orthogonal Projection
Let u and v be two vectors in an n-dimensional inner product space V with u ̸= 0̄. Then the vector
P roju v = (< v, u > / < u, u >) u
is called the orthogonal projection of v along u.
Example 2.15. Resolve the vector v = (1, 2, 1) into two perpendicular components, one along u = (2, 1, 2).
Solution: Let u = (2, 1, 2), v = (1, 2, 1).
The orthogonal projection of the vector v along u is,
P roju v = (< v, u > / < u, u >) u = (6/9)(2, 1, 2) = (2/3)(2, 1, 2) = (4/3, 2/3, 4/3).
The component of v perpendicular to u is
v − P roju v = (1, 2, 1) − (4/3, 2/3, 4/3) = (−1/3, 4/3, −1/3).
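The decomposition of Example 2.15 can be reproduced in a few lines of Python/NumPy (a sketch; variable names are ours):

```python
import numpy as np

u = np.array([2.0, 1.0, 2.0])
v = np.array([1.0, 2.0, 1.0])

# component of v along u
proj = (v @ u) / (u @ u) * u
# component of v perpendicular to u
perp = v - proj

print(proj)          # (2/3)*(2, 1, 2)
print(perp)
print(perp @ u)      # 0: the two components are orthogonal
print(proj + perp)   # recovers v
```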
Step 3: Let
u3 = v3 − (< v3 , u1 > / ∥u1 ∥²) u1 − (< v3 , u2 > / ∥u2 ∥²) u2 .
Suppose u1 , u2 , . . . , ur have been found in this way; then,
Step 4: Let
ur+1 = vr+1 − ∑_{i=1}^{r} (< vr+1 , ui > / ∥ui ∥²) ui ,        r = 1, 2, . . . , n − 1.
Example 2.16. Let R3 have the Euclidean inner product. Use Gram-Schmidt pro-
cess to convert basis B = {u1 , u2 , u3 } where u1 = (1, 1, 1), u2 = (0, 1, 1), u3 = (0, 0, 1)
into an orthonormal basis.
v3 = (0, −1/2, 1/2)
∥v3 ∥ = √(0² + (−1/2)² + (1/2)²) = √(1/2)
The constructed vectors v1 = (1, 1, 1), v2 = (−2/3, 1/3, 1/3), v3 = (0, −1/2, 1/2) form the orthogonal basis vectors of R3 .
Now normalizing v1 , v2 , v3 we get,
w1 = v1 /∥v1 ∥ = (1/√3, 1/√3, 1/√3)
w2 = v2 /∥v2 ∥ = (1/√6)(−2, 1, 1)
w3 = v3 /∥v3 ∥ = (1/√2)(0, −1, 1)
The normalized vectors w1 , w2 , w3 form the orthonormal basis vectors of R3 .
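The steps of Example 2.16 can be automated. Below is a minimal Python/NumPy sketch of the Gram-Schmidt process for the Euclidean inner product (the function name and structure are our own):

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize linearly independent vectors w.r.t. the Euclidean inner product."""
    ortho = []
    for v in vectors:
        v = np.array(v, dtype=float)
        # subtract the projection of v onto each previously constructed unit vector
        for q in ortho:
            v -= (v @ q) * q
        ortho.append(v / np.linalg.norm(v))
    return ortho

w1, w2, w3 = gram_schmidt([(1, 1, 1), (0, 1, 1), (0, 0, 1)])
print(w1)   # (1/sqrt(3))*(1, 1, 1)
print(w2)   # (1/sqrt(6))*(-2, 1, 1)
print(w3)   # (1/sqrt(2))*(0, -1, 1)
```

Because the routine normalizes as it goes, it produces the w's of the example directly, skipping the intermediate orthogonal (but unnormalized) v's.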
Example 2.17. Let R3 have the Euclidean inner product. Use Gram-Schmidt pro-
cess to convert basis B = {u1 , u2 , u3 } where u1 = (1, 0, 1), u2 = (−1, 1, 0), u3 =
(−3, 2, 0) into an orthonormal basis.
Solution: By Gram-Schmidt process,
Step 1: v1 = u1 = (1, 0, 1), ∥v1 ∥ = √(1² + 0² + 1²) = √2
Step 2:
v2 = u2 − (< u2 , v1 > / ∥v1 ∥²) v1
= (−1, 1, 0) − ((−1)/2)(1, 0, 1)
= (1/2)(−1, 2, 1)
Step 3:
v3 = u3 − (< u3 , v1 > / ∥v1 ∥²) v1 − (< u3 , v2 > / ∥v2 ∥²) v2
= (−3, 2, 0) − ((−3)/2)(1, 0, 1) − ((7/2)/(3/2)) · (1/2)(−1, 2, 1)
= (1/3)(−1, −1, 1).
The constructed vectors v1 = (1, 0, 1), v2 = (1/2)(−1, 2, 1), v3 = (1/3)(−1, −1, 1) form the orthogonal basis vectors of R3 .
Now normalizing v1 , v2 , v3 we get,
w1 = v1 /∥v1 ∥ = (1/√2)(1, 0, 1)
w2 = v2 /∥v2 ∥ = (1/√6)(−1, 2, 1)
w3 = v3 /∥v3 ∥ = (1/√3)(−1, −1, 1).
The normalized vectors w1 , w2 , w3 form the orthonormal basis vectors of R3 .
Exercise: 2.2
1. Let R2 have the Euclidean inner product. Use the Gram-Schmidt process to
transform basis vectors u1 = (1, −3), u2 = (2, 2) into an orthonormal basis.
2. Let R3 have the Euclidean inner product. Use the Gram-Schmidt process to
transform basis vectors u1 = (1, 1, 1), u2 = (−1, 1, 0), u3 = (1, 2, 1) into an
orthonormal basis.
3. Let R4 have the Euclidean inner product. Use the Gram-Schmidt process to
transform basis vectors u1 = (0, 2, 1, 0), u2 = (1, −1, 0, 0), u3 = (1, 2, 0, −1), u4 =
(1, 0, 0, 1) into an orthonormal basis.
4. Verify that the vectors v1 = (1, −1, 2, 1), v2 = (−2, 2, 3, 2), v3 = (1, 2, 0, −1), v4 = (1, 0, 0, 1) form an orthogonal basis for the Euclidean space R4 .
5. Verify that the vectors v1 = (−3/5, 4/5, 0), v2 = (4/5, 3/5, 0), v3 = (0, 0, 1) form an orthogonal basis for R3 .
6. Let P2 have the inner product
< p, q > = ∫_{−1}^{1} p(x)q(x) dx.
Chapter 3
LINEAR TRANSFORMATION
3.1 Introduction
In this chapter we shall study functions from an arbitrary vector space to another arbitrary vector space, together with their various properties. The aim of such a study is to show how a linear transformation (mapping or function) can be represented by a matrix. The matrix of a linear transformation is uniquely determined for a particular choice of bases; for different choices of bases the same linear transformation can be represented by different matrices.
A transformation T : V −→ W between vector spaces V and W is called a linear transformation if for all u, v ∈ V and every scalar k,
L1 : T (u + v) = T (u) + T (v),
L2 : T (ku) = kT (u).
Proof. : Suppose T is a linear transformation. Then for vectors u1 and u2 in V and
CHAPTER 3. LINEAR TRANSFORMATION 87
T (ku) = kT (u).
∴ L2 is satisfied.
Therefore T is a linear transformation.
T (x1 , x2 ) = (x1 − x2 , x1 + x2 ).
Consider
T (u + v) = T (x1 + y1 , x2 + y2 )
= (x1 + y1 − x2 − y2 , x1 + y1 + x2 + y2 )
= (x1 − x2 , x1 + x2 ) + (y1 − y2 , y1 + y2 )
= T (x1 , x2 ) + T (y1 , y2 )
= T (u) + T (v)
T (ku) = T [k(x1 , x2 )]
= T [(kx1 , kx2 )]
= (kx1 − kx2 , kx1 + kx2 )
= [k(x1 − x2 ), k(x1 + x2 )]
= k(x1 − x2 , x1 + x2 )
= kT (u)
∴ T is a linear transformation.
T (x1 , x2 ) = (x21 , x1 + x2 ).
T (u + v) = T (x1 + y1 , x2 + y2 )
= [(x1 + y1 )2 , x1 + y1 + x2 + y2 ]
= [x21 + y12 + 2x1 y1 , x1 + y1 + x2 + y2 ] (3.2)
and
T (u) + T (v) = (x1 ², x1 + x2 ) + (y1 ², y1 + y2 ) = (x1 ² + y1 ², x1 + y1 + x2 + y2 ).
Since (x1 + y1 )² ̸= x1 ² + y1 ² in general,
∴ T (u + v) ̸= T (u) + T (v), and T is not a linear transformation.
T (u + v) = T (x1 + x2 , y1 + y2 )
= [2(x1 + x2 ), x1 + x2 + y1 + y2 , x1 + x2 − y1 − y2 ]
= (2x1 , x1 + y1 , x1 − y1 ) + (2x2 , x2 + y2 , x2 − y2 )
= T (u) + T (v)
T (ku) = T [k(x1 , y1 )]
= T (kx1 , ky1 )
= (2kx1 , kx1 + ky1 , kx1 − ky1 )
= k(2x1 , x1 + y1 , x1 − y1 )
= kT (x1 , y1 )
= kT (u)
∴ T is a linear transformation.
T (u) = ku,
T (u + v) = k(u + v)
= ku + kv
= T (u) + T (v)
T (αu) = k(αu)
= (kα)u
= (αk)u
= α(ku)
= αT (u)
∴ T is a linear transformation.
Example 3.8. Let V be a vector space of all functions defined on [0, 1] and W be
subspace of V consisting of all continuously differentiable functions on [0, 1]. Let
D : W −→ V be defined as
D(f ) = f ′ (x)
where f ′ (x) is the derivative of f (x). Show that D is a linear transformation.
∴ D is a linear transformation.
∴ T is a linear transformation.
Example 3.10. Let V be a vector space of continuous functions defined on [0, 1]
and T : V −→ R defined as
T (f ) = ∫_0^1 f (x) dx
for f in V . Show that T is a linear transformation.
Solution: For any f , g in V .
T (f + g) = ∫_0^1 (f + g)(x) dx
= ∫_0^1 (f (x) + g(x)) dx
= ∫_0^1 f (x) dx + ∫_0^1 g(x) dx
= T (f ) + T (g)
∴ T is a linear transformation.
Determine whether the transformation T : R2 −→ R2 defined by T (x, y) = (x², y) is linear.
T (u + v) = T (x1 + x2 , y1 + y2 )
= [(x1 + x2 )2 , y1 + y2 ]
̸= (x21 , y1 ) + (x22 , y2 )
̸= T (u) + T (v)
Exercise: 3.1
1. Determine whether the following transformations T : R2 −→ R2 are linear; if not, justify.
(a) T (x, y) = (y, y) (b) T (x, y) = (−y, x) (c) T (x, y) = (2x, y)
(d) T (x, y) = (x, y 2 ) (e) T (x, y) = (x, 0) (f) T (x, y) = (2x + y, x − y)
√ √
(g) T (x, y) = (x + 1, y) (h) T (x, y) = ( 3 x, 3 y).
(i) T (x, y) = (x + 2y, 3x − y).
2. Determine whether the following transformations T : R3 −→ R2 are linear; if not, justify.
Properties of linear transformations. If T : V −→ W is a linear transformation, then:
(a) T (0) = 0
Proof. : We have,
T (0) = T (0.u)
= 0.T (u) (T is L.T.)
=0
(b) T (−u) = −T (u)
Proof. : We have,
T (−u) = T ((−1)u)
= (−1)T (u) (T is L.T.)
= −T (u)
(c) T (u − v) = T (u) − T (v)
Proof. : We have,
T (u − v) = T (u + (−1)v)
= T (u) + T ((−1)v) (T is L.T.)
= T (u) + (−1)T (v) (T isL.T.)
= T (u) − T (v)
(f) T ((m/n)u) = (m/n)T (u), where m and n are integers and n ̸= 0.
T (ui ) = wi , i = 1, 2, · · · , n.
∴ L(B) = V.
u = α1 u1 + α2 u2 + · · · + αn un (3.4)
T (u) = α1 w1 + α2 w2 + · · · + αn wn . (3.5)
u = α1 u1 + α2 u2 + · · · + αn un
v = β1 u1 + β2 u2 + · · · + βn un
u + v = (α1 + β1 )u1 + (α2 + β2 )u2 + · · · + (αn + βn )un
By definition of T , we have
ku = k(α1 u1 + α2 u2 + · · · + αn un )
∴ T is linear transformation.
Since,
ui = 0.u1 + 0.u2 + · · · + 1.ui + · · · + 0.un
T (ui ) = 0.w1 + 0.w2 + · · · + 1.wi + · · · + 0.wn
= wi
∴ T is linear transformation such that
T (ui ) = wi .
Claim(II): Uniqueness of linear transformation
Suppose there are two linear transformation T and T ′ such that,
T (ui ) = wi
and
T ′ (ui ) = wi , i = 1, 2, · · · , n.
Then for u in V ,
T (u) = α1 w1 + α2 w2 + · · · + αn wn
= α1 T ′ (u1 ) + α2 T ′ (u2 ) + · · · + αn T ′ (un )
= T ′ (α1 u1 ) + T ′ (α2 u2 ) + · · · + T ′ (αn un )
= T ′ (α1 u1 + α2 u2 + · · · + αn un )
= T ′ (u)
T (u) = T ′ (u)
T = T′
Hence, proof is completed.
Example 3.12. Let u1 = (1, 1, −1), u2 = (4, 1, 1), u3 = (1, −1, 2) be the basis vectors of R3 and let T : R3 −→ R2 be the linear transformation such that T (u1 ) = (1, 0), T (u2 ) = (0, 1), T (u3 ) = (1, 1). Find T .
Solution: Since {u1 , u2 , u3 } is a basis of R3 , for (a, b, c) in R3 there exist scalars α1 , α2 , α3 such that
(a, b, c) = α1 u1 + α2 u2 + α3 u3
= α1 (1, 1, −1) + α2 (4, 1, 1) + α3 (1, −1, 2)
= (α1 + 4α2 + α3 , α1 + α2 − α3 , −α1 + α2 + 2α3 )
Hence,
α1 + 4α2 + α3 = a
α1 + α2 − α3 = b
−α1 + α2 + 2α3 = c
Solving this system, we get
α1 = 3a − 7b − 5c
α2 = −a + 3b + 2c
α3 = 2a − 5b − 3c
T (a, b, c) = T (α1 u1 + α2 u2 + α3 u3 )
= α1 T (u1 ) + α2 T (u2 ) + α3 T (u3 )
= (3a − 7b − 5c)(1, 0) + (−a + 3b + 2c)(0, 1) + (2a − 5b − 3c)(1, 1)
= (5a − 12b − 8c, a − 2b − c).
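The same answer can be obtained in matrix form: if the basis vectors and their images are placed as columns, the standard matrix of T is (images) · (basis)⁻¹. A Python/NumPy sketch, equivalent to solving the linear system above:

```python
import numpy as np

# Basis vectors of R^3 as columns, and their images under T as columns.
U = np.array([[1, 4, 1],
              [1, 1, -1],
              [-1, 1, 2]], dtype=float)   # columns: u1, u2, u3
W = np.array([[1, 0, 1],
              [0, 1, 1]], dtype=float)    # columns: T(u1), T(u2), T(u3)

# Standard matrix M of T, so that T(a, b, c) = M @ (a, b, c).
M = W @ np.linalg.inv(U)
print(np.round(M))   # rows give (5a - 12b - 8c, a - 2b - c)
```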
Example. Let T : R2 −→ R2 be a linear transformation with T (1, 1) = (0, 2) and T (1, −1) = (2, 0). Find T (1, 4) and T (−2, 1).
Solution: We have
(1, 4) = a(1, 1) + b(1, −1).
Solving, we get
a = 5/2 , b = −3/2
∴ (1, 4) = (5/2)(1, 1) − (3/2)(1, −1)
T (1, 4) = T [(5/2)(1, 1) − (3/2)(1, −1)]
= (5/2)T (1, 1) − (3/2)T (1, −1)
= (5/2)(0, 2) − (3/2)(2, 0)
= (−3, 5)
Similarly,
(−2, 1) = a(1, 1) + b(1, −1)
Solving, we get
a = −1/2 , b = −3/2
∴ (−2, 1) = −(1/2)(1, 1) − (3/2)(1, −1)
T (−2, 1) = −(1/2)T (1, 1) − (3/2)T (1, −1)
= −(1/2)(0, 2) − (3/2)(2, 0)
= (−3, −1).
Exercise: 3.2
(1) Consider the basis S = {v1 , v2 , v3 } for R3 , where v1 = (1, 1, 1), v2 = (1, 1, 0),
v3 = (1, 0, 0) and let T : R3 −→ R2 be the linear transformation such that
T (v1 ) = (1, 0), T (v2 ) = (2, −1), T (v3 ) = (4, 3). Find a formula T (x1 , x2 , x3 ) and
use it to compute T (2, −3, 5).
(2) Consider the basis S = {v1 , v2 } for R2 , where v1 = (1, 1), v2 = (1, 0) and let
T : R2 −→ R2 be the linear operator such that T (v1 ) = (1, −2), T (v2 ) =
(−4, 1). Find a formula T (x1 , x2 ) and use it to compute T (5, −3).
(3) Consider the basis S = {v1 , v2 } for R2 , where v1 = (−2, 1), v2 = (1, 3) and
let T : R2 −→ R3 be the linear transformation such that T (v1 ) = (−1, 2, 0),
T (v2 ) = (0, −3, 5). Find a formula T (x1 , x2 ) and use it to compute T (2, −3).
(4) Consider the basis S = {v1 , v2 , v3 } for R3 , where v1 = (1, 1, 1), v2 = (1, 1, 0),
v3 = (1, 0, 0) and let T : R3 −→ R3 be linear operator such that T (v1 ) =
(2, −1, 4), T (v2 ) = (3, 0, 1), T (v3 ) = (−1, 5, 1). Find a formula T (x1 , x2 , x3 ) and
use it to compute T (2, 4, −1).
(5) Consider the basis S = {v1 , v2 , v3 } for R3 , where v1 = (1, 2, 1), v2 = (2, 9, 0),
v3 = (3, 3, 4) and let T : R3 −→ R2 be linear transformation such that T (v1 ) =
(1, 0), T (v2 ) = (−1, 1), T (v3 ) = (0, 1). Find a formula T (x1 , x2 , x3 ) and use it
to compute T (7, 13, 7).
(6) Let v1 , v2 , v3 be vectors in a vector space V and let T : V −→ R3 be a linear transformation for which T (v1 ) = (1, −1, 2), T (v2 ) = (0, 3, 2), T (v3 ) = (−3, 1, 2). Find T (2v1 − 3v2 + 4v3 ).
For u, v in ker(T ),
T (u) = 0̄, T (v) = 0̄
then
T (u + v) = T (u) + T (v)
= 0̄ + 0̄
= 0̄
∴ u + v ∈ ker(T ).
T (ku) = kT (u)
= k.0̄
= 0̄
∴ ku ∈ ker(T ).
Since u1 , u2 are in V , u1 + u2 is in V .
∴ T (u1 + u2 ) ∈ R(T ).
Consider
T (u1 + u2 ) = T (u1 ) + T (u2 ) = w1 + w2
∴ w1 + w2 ∈ R(T ).
Similarly, ku1 is in V , so T (ku1 ) ∈ R(T ).
Consider
T (ku1 ) = kT (u1 ) = kw1
∴ kw1 ∈ R(T ).
∴ The range of T is a subspace of W .
Let u ∈ ker(T ). Then
T (u) = 0̄ = T (0̄)
∴ u = 0̄        (T is injective)
Since 0̄ always belongs to ker(T ), it follows that ker(T ) = {0̄}.
For u, v ∈ V , consider
T (u) = T (v)
T (u) − T (v) = 0̄
T (u − v) = 0̄ (T is linear transformation)
u − v ∈ ker(T )
u − v = 0̄ (ker(T ) = {0̄})
u = v.
∴ T : V −→ W is injective.
∴ S spans R(T )
u = α1 u1 + α2 u2 + · · · + αr ur (3.8)
α1 = α2 = · · · = αr = β1 = β2 = · · · = βk = 0
β1 = β2 = · · · = βk = 0
∴ S is Linearly independent.
∴ S is a basis for R(T ).
∴ dimR(T ) = rank(T ) = k.
∴ rank(T ) + nullity(T ) = n.
Hence, proof is completed.
Proof. : Step(I)
Given: T : V −→ W is nonsingular linear transformation and the set
{u1 , u2 , · · · , un } is a basis of V.
∴ T is bijective.
∴ T is injective.
∴ T is injective and {u1 , u2 , · · · , un } is linearly independent in V.
∴ {T (u1 ), T (u2 ), · · · , T (un )} is linearly independent in W.
Also, T is surjective.
∴ For w ∈ W , there exist u ∈ V such that
w = T (u).
u = α1 u1 + α2 u2 + · · · + αn un
v = β1 u1 + β2 u2 + · · · + βn un .
Consider,
T (u) = T (v)
T (u) − T (v) = 0̄
T (u − v) = 0̄
T [(α1 u1 + α2 u2 + · · · + αn un ) − (β1 u1 + β2 u2 + · · · + βn un )] = 0̄
T [(α1 − β1 )u1 + (α2 − β2 )u2 + · · · + (αn − βn )un ] = 0̄
(α1 − β1 )T (u1 ) + (α2 − β2 )T (u2 ) + · · · + (αn − βn )T (un ) = 0̄
α1 − β1 = 0, α2 − β2 = 0, · · · , αn − βn = 0
α1 = β1 , α2 = β2 , · · · , αn = βn .
∴ u=v
T : V −→ W is injective.
Since, the set {T (u1 ), T (u2 ), · · · , T (un )} is a basis of W then for any w ∈ W , we
have
w = α1 T (u1 ) + · · · + αn T (un )
w = T (α1 u1 + · · · + αn un )
w = T (u).
Therefore, for any w ∈ W there is u ∈ V such that w = T (u).
∴ T : V −→ W is surjective.
∴ T : V −→ W is bijective.
∴ T : V −→ W is nonsingular.
Hence, proof is completed.
This system is consistent (verify). Therefore, for each element (a, b, c) ∈ R3 there is
u = (x1 , x2 , x3 , x4 ) ∈ R4 such that T (u) = (a, b, c).
Therefore, given vectors (0, 0, 6), (1, 3, 0) and (2, 4, 1) are in R(T ) = R3 .
Exercise: 3.3
(8) In each part find nullity of T
(a) T : R5 −→ R7 has rank 3.
(b) T : P4 −→ P3 has rank 1.
(c) The range of T : R6 −→ R3 is R3
(d) T : M22 −→ M22 has rank 3.
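Each part of exercise (8) is a direct application of the dimension theorem, nullity(T) = dim(domain) − rank(T). A small Python sketch of the arithmetic (dictionary labels are ours):

```python
# nullity(T) = dim(domain) - rank(T), by the dimension theorem
cases = {
    "(a) T: R^5 -> R^7, rank 3": (5, 3),
    "(b) T: P4 -> P3, rank 1": (5, 1),    # dim P4 = 5
    "(c) T: R^6 -> R^3, onto": (6, 3),    # range is R^3, so rank 3
    "(d) T: M22 -> M22, rank 3": (4, 3),  # dim M22 = 4
}
for name, (dim_domain, rank) in cases.items():
    print(name, "nullity =", dim_domain - rank)
```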
Exercise:3.4
(1) Find the domain and codomain of T2 ◦ T1 , and find a formula for T2 ◦ T1 , if
(a) T1 (x, y) = (2x, 3y), T2 (x, y) = (x − y, x + y).
(b) T1 (x, y) = (x − 3y, 0), T2 (x, y) = (4x − 5y, 3x − 6y).
(c) T1 (x, y) = (2x, −3y, x + y), T2 (x, y, z) = (x − y, y + z).
(d) T1 (x, y, z) = (x − y, y + z, x − z), T2 (x, y, z) = (0, x + y + z).
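Composition of transformations is just function composition, (T2 ◦ T1)(u) = T2(T1(u)). A Python sketch for part (a) of the exercise above (the helper `compose` is our own):

```python
def T1(x, y):
    return (2*x, 3*y)

def T2(x, y):
    return (x - y, x + y)

def compose(f, g):
    # (f o g)(p) = f(g(p))
    return lambda *p: f(*g(*p))

T = compose(T2, T1)   # T2 o T1 : R^2 -> R^2
print(T(1, 1))        # T1(1,1) = (2, 3), then T2(2, 3) = (-1, 5)
```

By the same expansion, (T2 ◦ T1)(x, y) = (2x − 3y, 2x + 3y).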
∴ T : V −→ W is injective.
Now, for w ∈ W , T −1 (w) = u is in V . We have
(T ◦ T −1 )(w) = w
T [T −1 (w)] = w
T (u) = w.
∴ T : V −→ W is surjective.
∴ T : V −→ W is bijective.
Step(II) Given: T : V −→ W is bijective.
Claim: To prove a linear transformation T : V −→ W has an inverse.
Since, T : V −→ W is surjective, then for every element w ∈ W , there is an element
u ∈ V such that T (u) = w.
Also, T : V −→ W is injective.
∴ u ∈ V is uniquely determined by w ∈ W .
This correspondence of w in W to u in V can be defined by the transformation
T1 (w) = u.
∴ (T1 ◦ T )(u) = T1 (T (u)) = u.
(T ◦ T1 )(w) = T (T1 (w)) = w.
∴ T1 is an inverse of T.
Therefore, proof of the theorem is completed.
∴ T (X) = AX, that is,
T ([x, y]ᵀ) = [1 1; 1 0] [x, y]ᵀ = [x + y, x]ᵀ.
∴ T (x, y) = (x + y, x).
To show T is injective, i.e. to show ker(T ) = {0̄}, consider
T (x, y) = T (0, 0)
T (x, y) = (0, 0)
(x + y, x) = (0, 0)
∴ x + y = 0, x = 0
x = 0, y = 0
∴ (x, y) = (0, 0) ⇒ ker.(T ) = {0̄}.
∴ T is injective.
Let (x1 , x2 ) be any vector in R2 , we must find (x, y) ∈ R2 such that
T (x, y) = (x1 , x2 ).
∴ (x + y, x) = (x1 , x2 ).
x + y = x1 and x = x2 .
∴ x = x2 and y = x1 − x2
∴ T is surjective and hence bijective.
∴ T has an inverse.
∴ T −1 (x1 , x2 ) = (x2 , x1 − x2 ).
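When T is multiplication by an invertible matrix A, T⁻¹ is multiplication by A⁻¹. The example above can be checked in Python/NumPy:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 0.0]])    # T(x, y) = (x + y, x)

Ainv = np.linalg.inv(A)
print(Ainv)                   # [[0, 1], [1, -1]], i.e. T^{-1}(x1, x2) = (x2, x1 - x2)

x = np.array([3.0, 5.0])
print(Ainv @ (A @ x))         # applying T then T^{-1} recovers (3, 5)
```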
Exercise: 3.5
(1) Let T : R2 −→ R2 be multiplication by A. Determine whether T has an inverse; if so, find T −1 (x1 , x2 ), where
(a) A = [5 2; 2 1]   (b) A = [6 −3; 4 −2]   (c) A = [4 7; −1 3].
(2) Let T : R3 −→ R3 be multiplication by A. Determine whether T has an inverse; if so, find T −1 (x1 , x2 , x3 ), where
(a) A = [1 5 2; 1 2 1; −1 1 0]   (b) A = [1 4 −1; 1 2 1; −1 1 0]
(c) A = [1 0 1; 0 1 1; 1 1 0]   (d) A = [1 −1 1; 0 2 −1; 2 3 0].
Example. Find the matrix of a linear transformation T : R2 −→ R3 with respect to the bases {u1 , u2 } and {v1 , v2 , v3 } of R2 and R3 respectively, where u1 = (3, 1), u2 = (5, 2), v1 = (1, 0, −1), v2 = (−1, 2, 2) and
v3 = (0, 1, 2).
Solution: From the formula of T , we have T (u1 ) = (1, −2, −5) and
T (u2 ) = (2, 1, −3).
On solving, we get
α = 1, β = 0 and γ = −2.
∴ T (u1 ) = (1, −2, −5) = 1v1 + 0v2 + (−2)v3
On solving, we get
α′ = 3, β ′ = 1 and γ ′ = −1.
∴ T (u2 ) = (2, 1, −3) = 3v1 + 1v2 + (−1)v3 .
Find the matrix A of T with respect to bases of B1 = {(1, 1, 1), (1, 1, 0), (1, 0, 0)}
and B2 = {(1, 3), (1, 4)} of R3 and R2 respectively.
Solution: Let (a, b) ∈ R2 be such that
c = 4a − b and d = b − 3a
∴ (a, b) = (4a − b)(1, 3) + (b − 3a)(1, 4)
Now,
Similarly,
From this, the matrix of T with respect to the bases B1 and B2 is given by
[T ]B2_B1 = A = [3 11 5; −1 −8 −3].
Example 3.24. Let V = M2×2 (R) be the vector space of all 2 × 2 matrices with real entries. Let T be the operator on V defined by T (X) = AX, where A = [1 1; 1 1]. Find the matrix of T with respect to the basis
B = { A1 = [1 0; 0 1], A2 = [0 1; 0 0], A3 = [0 0; 1 0], A4 = [0 0; 0 1] } for V .
[T ]B_B = [0 2 5/2; 2 1 3; 0 −1 −3].
Theorem 3.11. Let T : V → W be a linear transformation between any two arbi-
trary vector spaces V and W with bases B1 = {v1 , v2 , · · · , vn } and B2 = {w1 , w2 , · · · , wm }
respectively. Let (v)B1 = (α1 , α2 , · · · , αn ) be coordinate vector of a vector v ∈ V and
(w)B2 = (β1 , β2 , · · · , βm ) be coordinate vector of a vector w ∈ W and A is a matrix
representation of T , all with respect to the given bases. Then w = T (v) if and only if A(v)B1 = (w)B2 .
v = α1 v1 + α2 v2 + · · · + αn vn
w = β1 w1 + β2 w2 + · · · + βm wm
Suppose
T (vj ) = a1j w1 + a2j w2 + · · · + amj wm , j = 1, 2, · · · , n, so that A = [aij ]m×n .
Now,
w = T (v)
⇔ β1 w1 + β2 w2 · · · + βm wm = T (α1 v1 + α2 v2 + · · · + αn vn )
= α1 T (v1 ) + α2 T (v2 ) + · · · + αn T (vn )
= α1 (a11 w1 + a21 w2 + ... + am1 wm )
+ α2 (a12 w1 + a22 w2 + ... + am2 wm )
+ · · · + αn (a1n w1 + a2n w2 + ... + amn wm )
= (a11 α1 + a12 α2 + · · · + a1n αn )w1
+ (a21 α1 + a22 α2 + · · · + a2n αn )w2
+ · · · + (am1 α1 + am2 α2 + · · · + amn αn )wm
⇔ A(v)B1 = (w)B2 .
Hence, proof is completed.
Now,
T is invertible ⇔ T is one-one and onto.
T is one-one ⇔ ker.T = 0̄.
⇔ T (v) = 0̄ = T (0̄) ⇔ v = 0̄
⇔ A(v)B1 = 0̄ ⇒ (v)B1 = 0̄ (by (3.11))
⇔ N ull(A) = 0̄
⇔ A is one-one.
Again, T is onto
⇔ For any w ∈ W , there exists v ∈ V such that T (v) = w.
⇔ by (3.11), for any (w)B2 ∈ Rn there exists (v)B1 ∈ Rn such that
A(v)B1 = (w)B2 .
∴ A is onto.
From this, we proved that T is one-one and onto ⇔ A is one-one and onto.
This proves, T is invertible ⇔ A is invertible.
3.8.1 Matrix of the sum of two linear transformations and a scalar mul-
tiple of a linear transformation
Now, if
α1 P1 + α2 P2 + · · · + αn Pn = 0̄.
Then from equation (3.13), we get
α1 w1 + α2 w2 + · · · + αn wn = 0̄.
⇒ α1 = α2 = · · · = αn = 0
as B2 = {w1 , w2 , · · · , wn } is basis and hence w1 , w2 , · · · , wn are linearly independent.
Thus,
α1 P1 + α2 P2 + · · · + αn Pn = 0̄
that is the matrix equation
P (α1 , α2 , · · · , αn )ᵀ = 0̄
has only trivial solution
α1 = α2 = · · · = αn = 0.
∴ P is invertible matrix.
∴ P is a nonsingular matrix.
Hence, proof is completed.
Theorem 3.13. Let B1 = {v1 , v2 , · · · , vn } and B2 = {w1 , w2 , · · · , wn } be two bases of an n-dimensional vector space V such that
and
u = β1 w1 + β2 w2 + · · · + βn wn = ∑_{i=1}^{n} βi wi .        (3.18)
∴ (u)B1 = P (u)B2
∴ (u)B2 = P −1 (u)B1 (Since P is invertible).
Therefore, proof is completed.
Theorem 3.14. Let V and W be vector spaces such that dim.V = n, dim.W = m
and B1 = {v1 , v2 , · · · , vn } and B2 = {w1 , w2 , · · · , wm } be the bases of V and W
respectively. Let A = [T ]B2_B1 be the (m × n) matrix of the linear transformation T : V → W .
ŵj = ∑_{k=1}^{m} ckj wk        (1 ≤ j ≤ m)        (3.22)
Hence, from above equations, we get
T (v̂j ) = ∑_{i=1}^{m} âij ŵi        (by (3.20))
= ∑_{i=1}^{m} ∑_{k=1}^{m} âij cki wk        (by (3.22))
= ∑_{k=1}^{m} ( ∑_{i=1}^{m} cki âij ) wk
= ∑_{k=1}^{m} (C Â)kj wk .        (3.23)
Alternatively, as T is linear, then we have
T (v̂j ) = T ( ∑_{i=1}^{n} bij vi )        (by (3.21))
= ∑_{i=1}^{n} bij T (vi )
= ∑_{i=1}^{n} bij ( ∑_{k=1}^{m} aki wk )        (by (3.19))
= ∑_{k=1}^{m} ( ∑_{i=1}^{n} aki bij ) wk
= ∑_{k=1}^{m} (AB)kj wk .        (3.24)
From equations (3.23) and (3.24), we get
C Â = BA
∴ Â = C −1 BA.
Hence, proof is completed.
Example 3.26. If two matrices A and B are similar, then show that A2 and B 2 are
similar. Moreover, if A and B are invertible, then A−1 and B −1 are also similar.
Solution: Since the matrices A and B are similar, by the definition of similarity there exists an invertible matrix C such that
A = C −1 BC        (1)
Obviously,
A2 = A.A
= (C −1 BC)(C −1 BC) (from (1))
= C −1 B(CC −1 )BC
= C −1 BIBC
= C −1 B 2 C
∴ A2 is similar to B 2 .
If A and B are invertible, then from (1), we have
A−1 = (C −1 BC)−1
= C −1 B −1 (C −1 )−1
(from reversal law of inverse of product of matrices)
= C −1 B −1 C
∴ A−1 is similar to B −1 .
Example 3.27. If A and B are two matrices such that at least one of them is invertible, then show that AB and BA are similar.
Solution: Suppose the matrix A is invertible. Then we have
A−1 (AB)A = A−1 ABA
= IBA ∵ A−1 A = I
= BA
i.e. BA = A−1 (AB)A
∴ BA is similar to AB.
Similarly, we can show that AB is similar to BA when B is invertible.
Thus, AB and BA are similar if at least one of them is invertible.
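Example 3.26 can be demonstrated numerically: build a similar pair A = C⁻¹BC from randomly chosen invertible matrices and check that A² and A⁻¹ are similar to B² and B⁻¹ via the same C. This is only a Python/NumPy illustration of the identities, not a proof (the random matrices are our own choice):

```python
import numpy as np

rng = np.random.default_rng(0)
# Adding 3*I makes the random matrices comfortably invertible (checked below).
C = rng.random((3, 3)) + 3 * np.eye(3)
assert abs(np.linalg.det(C)) > 1e-9
B = rng.random((3, 3)) + 3 * np.eye(3)

A = np.linalg.inv(C) @ B @ C   # A is similar to B by construction

# A^2 = C^{-1} B^2 C, with the SAME similarity matrix C
print(np.allclose(A @ A, np.linalg.inv(C) @ (B @ B) @ C))
# A^{-1} = C^{-1} B^{-1} C
print(np.allclose(np.linalg.inv(A), np.linalg.inv(C) @ np.linalg.inv(B) @ C))
```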
Exercise :3.6
1. Let T : R3 → R2 be a linear transformation defined by
T (x, y, z) = (x + y + z, y + z).
Find the matrix of T with respect to the bases B = {(−1, 0, 2), (0, 1, 1), (3, −1, 0)}
and B ′ = {(−1, 1), (1, 0)} of R3 and R2 respectively.
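Exercise 1 can be solved mechanically: the j-th column of the matrix of T is the coordinate vector of T(vj) with respect to B′. A Python/NumPy sketch of this procedure (variable names are ours):

```python
import numpy as np

def T(x, y, z):
    return np.array([x + y + z, y + z], dtype=float)

B  = [np.array(v, dtype=float) for v in [(-1, 0, 2), (0, 1, 1), (3, -1, 0)]]
Bp = np.column_stack([(-1.0, 1.0), (1.0, 0.0)])   # basis B' of R^2 as columns

# j-th column of the matrix of T = coordinates of T(v_j) with respect to B'
cols = [np.linalg.solve(Bp, T(*v)) for v in B]
A = np.column_stack(cols)
print(A)
```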
Miscellaneous
(1) Find the standard matrix for the linear transformation T : R3 −→ R4 defined
by T (x, y, z) = (3x − 4y + z, x + y − z, y + z, x + 2y + 3z).
(2) Find the standard matrix for the linear transformation T : R3 −→ R3 defined
by T (x, y, z) = (5x − 3y + z, 2z + 4y, 5x + 3y).
(3) Find the standard matrix for the linear transformation T : R3 −→ R2 defined
by T (x, y, z) = (2x + y, 3y − z) and use it to find T (0, 1, −1).
(5) Find the standard matrix for the linear transformation T : R3 −→ R3 defined
by T (x, y, z) = (3x − 2y + z, 2x − 3y, y − 4z) and use it to find T (2, −1, −1).
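For problems of this type, the standard matrix has T(e1), T(e2), T(e3) as its columns. A Python/NumPy sketch for problem (3) above:

```python
import numpy as np

# T(x, y, z) = (2x + y, 3y - z); columns are the images of the standard basis vectors.
A = np.column_stack([(2, 0), (1, 3), (0, -1)]).astype(float)
print(A)                                 # [[2, 1, 0], [0, 3, -1]]
print(A @ np.array([0.0, 1.0, -1.0]))    # T(0, 1, -1) = (1, 4)
```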