These rough notes are for the Vector Spaces part of the CIE Further Mathematics A-level, since
these topics are not covered in many books. I hope they are helpful. I have included proofs and
explanations for most things which can be understood by students, even if they are not on the
syllabus.
Dr. Zen Harper, CIE centre at Shanghai Experimental School
Email:
zen.harper@cantab.net
(u + v) + w = u + (v + w),
u + 0 = u,
0u = 0,
λ(u + v) = λu + λv,
(λ + μ)u = λu + μu,
λ(μu) = (λμ)u.
With these properties, we can prove that -u is UNIQUE, and (-1)u = -u. We write u - v
instead of u + (-v).
THE MOST OBVIOUS EXAMPLE: R^n, the space of all column vectors of height n.
OTHER EXAMPLES: see Example 17 below. The set of all (real-valued) functions y satisfying

    d^2y/dx^2 - dy/dx - 6y = 0

is a vector space. THIS IS ONE OF THE REASONS WHY VECTOR SPACES ARE SO IMPORTANT!
This is a subspace of the vector space of all continuous functions f : R → R.
(In this case, the space has dimension 2, spanned by the functions e^{3x}, e^{-2x}; see later for the
definition of dimension.)
Definition 2 Let V be a vector space and W ⊆ V. We say that W is a vector subspace of V if
W is itself a vector space. We only need to check that a, b ∈ W ⇒ a + b ∈ W, and λa ∈ W
for every λ ∈ R.
Example 3 In R^2, consider the sets

    A = { (x, y)^T : x = 13y },    B = { (x, y)^T : 14x + 3y = 0 },
    C = { (x, y)^T : x + y = 1 },    D = { (x, y)^T : x^2 + y^2 = 1 },    E = { (x, y)^T : x, y ∈ Z }.

Then A, B are vector subspaces of R^2: e.g. let u = (u1, u2)^T, v = (v1, v2)^T ∈ A. Then u1 = 13u2,
v1 = 13v2, so u1 + v1 = 13(u2 + v2); thus u + v = (u1 + v1, u2 + v2)^T ∈ A. Also λu1 = 13(λu2), so
λu ∈ A.
(But C, D, E are not subspaces: C and D do not contain 0, and E is not closed under
multiplication by λ = 1/2.)
The set

    { (x, y, z)^T ∈ R^3 : 2x - y + 3z = 0,  x + y + 4z = 0,  x - 4y + 2z = 0 }

is a vector subspace of R^3.
Every vector subspace of R3 is either: {0} (a point), a line through 0, a plane through 0, or R3
itself.
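Which type of subspace the solution set is can be checked numerically. A small sketch in plain Python (stdlib only, exact fractions; the coefficients are assumed to be those of the system above): the nullity 3 - rank of the coefficient matrix is the dimension of the solution subspace.

```python
from fractions import Fraction

# Coefficient matrix of 2x - y + 3z = 0, x + y + 4z = 0, x - 4y + 2z = 0
A = [[2, -1, 3], [1, 1, 4], [1, -4, 2]]

def rank(rows):
    """Rank via Gaussian elimination with exact fractions."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0  # next pivot row
    for col in range(len(m[0])):
        pivot = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                f = m[i][col] / m[r][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

print(rank(A))      # → 3  (rank of the system)
print(3 - rank(A))  # → 0  (dimension of the solution subspace)
```

Here the rank turns out to be 3, so the solution set is the trivial subspace {0}; a system in which one equation is a combination of the others would give a line or a plane instead.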
Definition 4 Let u1, u2, u3, ... ∈ V. A linear combination of these vectors is Σ_{j=1}^{N} λj uj, for
some N and numbers λ1, λ2, λ3, ..., λN ∈ R. Their span (or linear span) is the set of all their
linear combinations.
Example 5 Let a = (1, -1)^T, b = (17, 0)^T in R^2. Then span{a, b} = R^2, because

    (x, y)^T = (-y) a + (1/17)(x + y) b    for every x, y ∈ R.

Let c = (1, 2)^T, d = (3, 6)^T in R^2. Then

    span{c, d} = { (x, y)^T ∈ R^2 : y = 2x } = span{c} = span{d}.
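As a sanity check of the formula in Example 5, a sketch in plain Python (the vectors a = (1, -1)^T and b = (17, 0)^T are taken from the example; the sample points are my own):

```python
from fractions import Fraction

a = (Fraction(1), Fraction(-1))
b = (Fraction(17), Fraction(0))

def combo(alpha, beta):
    """The linear combination alpha*a + beta*b."""
    return tuple(alpha * ai + beta * bi for ai, bi in zip(a, b))

# Check (x, y) = (-y) a + (x + y)/17 * b on a few sample points:
for x, y in [(1, 0), (0, 1), (5, -3), (2, 7), (3, -7)]:
    alpha = Fraction(-y)
    beta = Fraction(x + y, 17)
    assert combo(alpha, beta) == (x, y)
print("span{a, b} formula verified on sample points")
```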
Definition 6 u1, u2, u3, ..., uN ∈ V are called linearly independent if:

    Σ_{j=1}^{N} λj uj = 0   ⇒   λj = 0 for all j.
For example: every plane in R3 passing through (0, 0, 0) is the span of two linearly independent vectors.
For example, a = (1, 0)^T, b = (0, 1)^T, c = (2, 3)^T are NOT linearly independent.
Proof: 2a + 3b = c, so 2a + 3b + (-1)c = 0.
Also, any u1, u2, u3, ..., uN with uj = 0 for some j is not linearly independent.
PROOF: 0u1 + 0u2 + ··· + 1uj + 0uj+1 + ··· + 0uN = 0, but 0, 0, ..., 1, 0, ..., 0 are not
all zero.
Exercise 9 If a, b, c ∈ R^3 are orthogonal and nonzero, so that a·b = a·c = b·c = 0 and
a, b, c ≠ 0, then a, b, c are linearly independent.
The column vectors

    ( x11 )   ( x12 )        ( x1n )
    ( x21 )   ( x22 )        ( x2n )
    (  :  ) , (  :  ) , ..., (  :  )
    ( xn1 )   ( xn2 )        ( xnn )

in R^n are linearly independent if and only if

    | x11 x12 ... x1n |
    | x21 x22 ... x2n |
    |  :           :  |  ≠ 0.
    | xn1 xn2 ... xnn |
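For instance, the determinant test detects the dependent triple from the earlier example (the vectors are my own choice, embedded in R^3 so that c = 2a + 3b):

```python
# Three vectors in R^3 with c = 2a + 3b, so they are NOT linearly independent:
a, b, c = (1, 0, 0), (0, 1, 0), (2, 3, 0)

def det3(u, v, w):
    """Determinant of the 3x3 matrix with rows u, v, w (cofactor expansion)."""
    return (u[0] * (v[1] * w[2] - v[2] * w[1])
            - u[1] * (v[0] * w[2] - v[2] * w[0])
            + u[2] * (v[0] * w[1] - v[1] * w[0]))

print(det3(a, b, c))  # → 0, confirming linear dependence
```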
Example 17 Let Y be the vector space of all (real-valued) functions y satisfying

    d^2y/dx^2 - dy/dx - 6y = 0.

Then dim(Y) = 2.
Proof: From Differential Equations: let y1 = e^{3x}, y2 = e^{-2x}. Then {y1, y2} is a basis of Y.
Fact 18 Let U, V be vector spaces with U ⊆ V, and V finite-dimensional. THEN U is finite-dimensional
with dim(U) ≤ dim(V). Also

    dim(U) = dim(V) ⇔ U = V.
Theorem 19 Let u1 , u2 , . . . , um V be linearly independent. Then:
(i) u1 , u2 , . . . , um is a basis for span{u1 , u2 , . . . , um }.
(ii) If m = dim(V ) then u1 , u2 , . . . , um is a basis for V .
(iii) Every collection of vectors a, b, c, ... ∈ V containing more than dim(V) vectors is not
linearly independent.
Proof: (i) by definition. (ii) by Fact 18 (since the span is a subspace of V with the same
dimension). In (iii), the vectors a, b, c, . . . cannot be linearly independent (because if they
were, then their span would be a subspace of V with higher dimension, contradicting Fact 18).
Definition 20 For any matrix A, the column space is the linear span of the columns of A. Thus,
it is a vector space of column vectors. The row space
is the linear span of the rows of A, and
thus a vector space of row vectors (x1 x2 ··· xn).
Example Let A be a 3 × 4 matrix whose third column is 0 and whose other three columns are
linearly independent. Then (CHECK THESE CALCULATIONS!) the column space of A is R^3, and
the row space is

    R = { (x  y  0  z) : x, y, z ∈ R }.

The same set of vectors can have many different descriptions: notice that this description of the
row space using x, y, z is simpler than the definition as the set of all combinations
αR1 + βR2 + γR3 of the rows R1, R2, R3.
RANK, NULLITY, LINEAR TRANSFORMATIONS

A linear transformation between vector spaces, T : V → W, is a function satisfying

    T(αa + βb) = αT(a) + βT(b)    for all a, b ∈ V and all α, β ∈ R.

The null space of T is {v ∈ V : T(v) = 0}, and the range space is T(V) = {T(v) : v ∈ V}.
Obviously the null space and range space are vector subspaces of V and W , respectively.
Theorem 25 Let T : R^n → R^m be a linear transformation represented by the matrix A. Then:

    Range space of T = Column space of A.

Proof: Obvious, since Aej is the j-th column of A, with e1, e2, ..., en the standard basis of
R^n. (I already said this in the proof of Theorem 23.)
Fact (Rank-Nullity Formula): dim(V) = dim(Null space of T) + dim(Range space of T).
Proof: (***) From undergraduate Linear Algebra, V ≅ (V / null(T)) ⊕ null(T), so

    dim(V) = dim(V / null(T)) + dim(null(T)).

But T : V → W induces an isomorphism T̃ : V / null(T) → T(V), given by T̃(v + null(T)) =
T(v). Hence dim(V / null(T)) = dim(T(V)).
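A spot check of the Rank-Nullity Formula on a small concrete example (the 2 × 3 matrix is my own, not from the notes):

```python
# T(v) = A v with A a 2x3 matrix of rank 1 (row 2 = 2 * row 1), so V = R^3 and
# dim(V) = 3 should split as rank + nullity = 1 + 2.
A = [[1, 2, 3], [2, 4, 6]]

def apply(A, v):
    """Matrix-vector product A v."""
    return tuple(sum(r * x for r, x in zip(row, v)) for row in A)

# Range space: every A v is a multiple of (1, 2), so the rank is 1.
assert apply(A, (1, 0, 0)) == (1, 2)

# Two linearly independent null-space vectors, so the nullity is 2:
for v in [(-2, 1, 0), (-3, 0, 1)]:
    assert apply(A, v) == (0, 0)

print("dim(R^3) = 3 = 1 + 2 = rank + nullity")
```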
Example 26 (Invertibility can never work for non-square matrices) Let A, B, C be matrices
with AB = I (m × m) and CA = I (n × n). Then WE MUST HAVE n = m.
Proof: A, B, C must represent linear transformations A : R^n → R^m and B, C : R^m → R^n.
The range space of A is R^m, because u = Iu = (AB)u = A(Bu) for every u ∈ R^m.
The null space of A is {0}, because if Av = 0 then v = Iv = (CA)v = C(Av) = C0 = 0.
Thus

    Au = Av ⇒ Au - Av = 0 ⇒ A(u - v) = 0 ⇒ u - v is in the null space of A ⇒ u - v = 0 ⇒ u = v.

Hence, by the Rank-Nullity Formula, n = dim(null space of A) + dim(range space of A) = 0 + m.
E.g. if

    A = ( 1 0 0 )      B = ( 1 0 )
        ( 0 0 1 ),         ( 0 0 )
                           ( 0 1 ),

then AB = ( 1 0 ; 0 1 ) = I, but A, B are not invertible (they couldn't possibly be, since
they are not square!) Notice that

    BA = ( 1 0 0 )
         ( 0 0 0 )
         ( 0 0 1 )

is not the identity matrix.
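A quick multiplication check of this example in plain Python (the entries of A and B assumed as above):

```python
# AB = I (2x2) but BA != I (3x3): one-sided inverses for non-square matrices.
A = [[1, 0, 0], [0, 0, 1]]    # 2x3
B = [[1, 0], [0, 0], [0, 1]]  # 3x2

def matmul(X, Y):
    """Ordinary matrix product X Y."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

print(matmul(A, B))  # → [[1, 0], [0, 1]]  (the 2x2 identity)
print(matmul(B, A))  # → [[1, 0, 0], [0, 0, 0], [0, 0, 1]]  (NOT the identity)
```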
ROW OPERATIONS
Given any matrix, there are 3 kinds of row operations on the rows R1, R2, R3, ...:
1. Swap Ri, Rj, for some i ≠ j;
2. Replace Ri by λRi, for some λ ≠ 0;
3. Replace Ri by Ri + λRj, where i ≠ j.
e.g.

    A = ( 1 2 1 )   B = ( 2 3 4 )   C = ( 1  2  1 )   D = ( 1  2  1 )
        ( 2 3 4 ),      ( 1 2 1 ),      ( 6  9 12 ),      ( 0 -1  2 ).
        ( 1 0 1 )       ( 1 0 1 )       ( 1  0  1 )       ( 1  0  1 )

A → B: swap R1, R2.    A → C: replace R2 by 3R2.    A → D: replace R2 by R2 - 2R1.
    ( 0 1 0 ) ( a1 a2 a3 a4 )   ( b1 b2 b3 b4 )
    ( 1 0 0 ) ( b1 b2 b3 b4 ) = ( a1 a2 a3 a4 ),
    ( 0 0 1 ) ( c1 c2 c3 c4 )   ( c1 c2 c3 c4 )

    ( 1 λ 0 ) ( a1 a2 a3 a4 )   ( a1+λb1 a2+λb2 a3+λb3 a4+λb4 )
    ( 0 1 0 ) ( b1 b2 b3 b4 ) = (   b1     b2     b3     b4   ), ...
    ( 0 0 1 ) ( c1 c2 c3 c4 )   (   c1     c2     c3     c4   )
Thus if M1, M2, M3, ..., Mn are matrices representing our moves, then our game proceeds:

    (A‖I) → (M1 A‖M1) → (M2 M1 A‖M2 M1) → ··· → (Mn ··· M2 M1 A‖Mn ··· M2 M1) = (M A‖M),

where M = Mn ··· M2 M1; so if we reach (I‖M), then MA = I and M = A^{-1}.
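The "moves as matrices" idea can be sketched in plain Python, using the matrices A, B, D from the row-operations example above:

```python
# Each row operation is left-multiplication by an elementary matrix M.
A = [[1, 2, 1], [2, 3, 4], [1, 0, 1]]

def matmul(X, Y):
    """Ordinary matrix product X Y."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

# Swap R1, R2: M is the identity with rows 1 and 2 swapped.
M_swap = [[0, 1, 0], [1, 0, 0], [0, 0, 1]]
assert matmul(M_swap, A) == [[2, 3, 4], [1, 2, 1], [1, 0, 1]]  # = B above

# Replace R2 by R2 - 2*R1: M is the identity with -2 in position (2, 1).
M_add = [[1, 0, 0], [-2, 1, 0], [0, 0, 1]]
print(matmul(M_add, A))  # → [[1, 2, 1], [0, -1, 2], [1, 0, 1]]  (= D above)
```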
Example 29 Find A^{-1}, where

    A = ( 1  1 0 )
        ( 1 -3 0 ).
        ( 3  1 1 )

We apply row operations to (A‖I):

    ( 1  1 0 ‖ 1 0 0 )    ( 1  1 0 ‖  1 0 0 )    ( 1  1 0 ‖  1    0   0 )
    ( 1 -3 0 ‖ 0 1 0 ) →  ( 0 -4 0 ‖ -1 1 0 ) →  ( 0  1 0 ‖ 1/4 -1/4  0 )
    ( 3  1 1 ‖ 0 0 1 )    ( 0 -2 1 ‖ -3 0 1 )    ( 0 -2 1 ‖ -3    0   1 )

(first R2 - R1 and R3 - 3R1, then multiply R2 by -1/4), and then

    ( 1 0 0 ‖  3/4   1/4  0 )
  → ( 0 1 0 ‖  1/4  -1/4  0 )
    ( 0 0 1 ‖ -5/2  -1/2  1 )

(by R1 - R2 and R3 + 2R2), so finally

    A^{-1} = (  3/4   1/4  0 )       (  3  1 0 )
             (  1/4  -1/4  0 ) = 1/4 (  1 -1 0 ).
             ( -5/2  -1/2  1 )       (-10 -2 4 )
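We can verify the answer by multiplying back, using exact fractions in plain Python (A and A^{-1} as in Example 29):

```python
from fractions import Fraction as F

A = [[1, 1, 0], [1, -3, 0], [3, 1, 1]]
Ainv = [[F(3, 4), F(1, 4), 0], [F(1, 4), F(-1, 4), 0], [F(-5, 2), F(-1, 2), 1]]

def matmul(X, Y):
    """3x3 matrix product with exact rational arithmetic."""
    return [[sum(F(X[i][k]) * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
assert matmul(A, Ainv) == I and matmul(Ainv, A) == I
print("A * A^{-1} = A^{-1} * A = I")
```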
The following matrices are in row echelon form:

    B1 = ( 1 0 2 1 )   B2 = ( 0 11 3 1 )   B3 = ( 0 2 17 )   B4 = ( 0 1 )
         ( 0 3 0 0 )        ( 0  0 0 7 )        ( 0 0  0 )        ( 0 0 )
         ( 0 0 0 4 )        ( 0  0 0 0 )                          ( 0 0 )

but these are not:

    ( 0 0 1 )   ( 4 0 0 )   ( 1 1 1 0 )
    ( 0 2 0 )   ( 0 0 1 )   ( 0 0 1 1 )
    ( 0 0 0 )   ( 0 0 2 )   ( 0 1 0 0 ).
Fact 31 If A is any matrix, we can use row operations to convert it to another matrix B in row
echelon form. B is called a row echelon form reduction of A. We always have
Rank of A = The number of rows of B which are not all zero.
Explanation (***) It is not hard to show that row operations do not change the rank (because
the matrix representing any row operation is invertible). Note, though, that the range space
DOES change; it is only its dimension which is fixed. (Exercise: if V is a vector subspace of W
and T : W → W is an invertible linear transformation, then dim(V) = dim(T(V)).)
Thus Rank(A) = Rank(B). Now, the formula for Rank(B) becomes obvious if you think
long enough about how B acts on vectors, and use the Row Echelon Form property: it's similar
to the proof that upper triangular matrices with nonzero diagonal entries are invertible by
back substitution starting from the lowest variables.
(Sorry, this is just a messy bit of computation; I can't think of a really nice proof right now.
I think it's possible to do something with induction and removing unnecessary columns, but it
still seems fairly messy.)
Thus, the row echelon form is useful for calculating the rank of a matrix (and hence also the
nullity). In the example just above, B1 , B2 , B3 , B4 have ranks 3, 2, 1, 1, and nullities 1, 2, 2, 1
respectively (remember the Rank-Nullity Formula above).
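Counting nonzero rows can be automated; a short sketch using the matrices B1-B4 above (entries assumed as printed):

```python
# Ranks and nullities of the row echelon matrices B1..B4:
B1 = [[1, 0, 2, 1], [0, 3, 0, 0], [0, 0, 0, 4]]
B2 = [[0, 11, 3, 1], [0, 0, 0, 7], [0, 0, 0, 0]]
B3 = [[0, 2, 17], [0, 0, 0]]
B4 = [[0, 1], [0, 0], [0, 0]]

def rank_of_echelon(B):
    """For a matrix ALREADY in row echelon form, rank = number of nonzero rows."""
    return sum(1 for row in B if any(x != 0 for x in row))

for B in (B1, B2, B3, B4):
    cols = len(B[0])
    print(rank_of_echelon(B), cols - rank_of_echelon(B))  # rank, nullity
```

This prints the ranks 3, 2, 1, 1 and nullities 1, 2, 2, 1 stated above.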
Theorem 32 Let B = MA be a row echelon form reduction of A, let k = Rank(A), and choose
linearly independent columns n1, n2, ..., nk of B. Then the corresponding columns of A are a
basis for the column space (= range space) of A.
Proof: (***) Let bn1, bn2, ..., bnk be the column vectors of columns n1, n2, ..., nk of B,
and an1, an2, ..., ank the corresponding column vectors of A. We already know that k is the
dimension of the range space (by Fact 31); so we only need to prove linear independence.
anj = A enj and bnj = B enj for each j, where e1, e2, ... are the standard basis vectors. Now
B = MA for some invertible M, so bnj = B enj = M A enj = M (A enj) = M anj.
CLAIM: an1, an2, ..., ank are linearly independent.
PROOF: let Σj λj anj = 0. Then 0 = M (Σj λj anj) = Σj λj M anj = Σj λj bnj. Therefore
λ1 = λ2 = ··· = λk = 0, because bn1, bn2, ..., bnk are linearly independent (by our
choice). Thus an1, an2, ..., ank are linearly independent.
Example 33 Let us reduce a matrix to row echelon form:

        ( 1  1 2  3 4 )    ( 1  1 2 3 4 )    ( 1  1 2 3 4 )
    A = (-1 -2 0 -3 2 ) →  ( 0 -1 2 0 6 ) →  ( 0 -1 2 0 6 ) = B,
        ( 0  0 0  0 0 )    ( 0  0 0 0 0 )    ( 0  0 0 1 2 )
        ( 0  0 0  1 2 )    ( 0  0 0 1 2 )    ( 0  0 0 0 0 )

by the row operations New R2 = (Old R2) + R1, followed by Swap(R3, R4). Now B has 3
nonzero rows.
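The same two row operations can be carried out programmatically (plain Python; entries of A as in Example 33):

```python
# Row-reduce the 4x5 matrix A from Example 33.
A = [[1, 1, 2, 3, 4], [-1, -2, 0, -3, 2], [0, 0, 0, 0, 0], [0, 0, 0, 1, 2]]

B = [row[:] for row in A]                   # work on a copy
B[1] = [a + b for a, b in zip(B[1], B[0])]  # New R2 = (Old R2) + R1
B[2], B[3] = B[3], B[2]                     # Swap(R3, R4)

print(B[1])  # → [0, -1, 2, 0, 6]
nonzero = sum(1 for row in B if any(row))
print(nonzero)  # → 3, so Rank(A) = 3 and the nullity is 5 - 3 = 2
```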
Columns 1, 3, 4 of B are linearly independent; therefore, a basis for the range space
of A is

    (  1 )  ( 2 )  (  3 )
    ( -1 ), ( 0 ), ( -3 ).
    (  0 )  ( 0 )  (  0 )
    (  0 )  ( 0 )  (  1 )

Other choices are possible: columns 1, 2, 4, or columns
1, 2, 5, or columns 1, 3, 5 would also work (since they are obviously linearly independent).
NOTE: columns 2, 4, 5 or columns 3, 4, 5 are also linearly independent, so would also work;
but this is not obvious.
REMARK: Finding linearly independent columns of a reduced row echelon matrix is very
easy; just look at the lowest nonzero entries in each column. It's similar to showing that upper
triangular matrices with nonzero diagonal entries are always invertible by direct solution of linear
equations. Try lots of examples and it becomes obvious! It's really a kind of algorithmic
understanding.
Theorem 34 Every basis for the null space of B is also a basis for the null space of A.
Proof: B = MA, so Au = 0 ⇒ Bu = MAu = M(0) = 0. Therefore (null space of A)
⊆ (null space of B). But A = M^{-1}B, so similarly (null space of B) ⊆ (null space of A). Thus
A, B have the same null space. So any basis for one null space will also be a basis for the other.
This is useful because it is always easy to find a basis for the null space of a matrix in row
echelon form.
Example: find the null space of A from Example 33 above. We know the nullity is 5 - 3 = 2.
Let u = (1, 0, x, y, z)^T and v = (0, 1, p, q, r)^T (since it is obvious that these are linearly
independent); now choose the other numbers so that u, v lie in the null space of B (the reduced
form of A):

    0 = Bu = B (1, 0, x, y, z)^T = (1 + 2x + 3y + 4z, 2x + 6z, y + 2z, 0)^T,

so that 2x + 3y + 4z + 1 = 0, 2x + 6z = 0, y + 2z = 0. This is easy to solve: we get y = -2z,
x = -3z, (-6 - 6 + 4)z = -1, so that z = 1/8, x = -3/8, y = -1/4.
Similarly, if 0 = Bv, then p = -1/4, q = -1/2, r = 1/4. Hence, the pair

    u = (1, 0, -3/8, -1/4, 1/8)^T,    v = (0, 1, -1/4, -1/2, 1/4)^T

is a basis for the null space of A. We can clear the fractions to get

    8u = (8, 0, -3, -2, 1)^T,    4v = (0, 4, -1, -2, 1)^T.
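Finally, a check that the cleared-fraction vectors really lie in the null space of the original A (plain Python; entries as in Example 33):

```python
# The original (unreduced) matrix A from Example 33:
A = [[1, 1, 2, 3, 4], [-1, -2, 0, -3, 2], [0, 0, 0, 0, 0], [0, 0, 0, 1, 2]]

def apply(A, v):
    """Matrix-vector product A v."""
    return [sum(r * x for r, x in zip(row, v)) for row in A]

# Both basis vectors (with fractions cleared) must be sent to 0:
for v in [(8, 0, -3, -2, 1), (0, 4, -1, -2, 1)]:
    assert apply(A, v) == [0, 0, 0, 0]
print("both vectors lie in the null space of A")
```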
EIGENVECTORS, EIGENVALUES, DIAGONALISATION

Definition Let A be an n × n matrix. If Au = λu for some λ ∈ R and some vector u, then λ is
called an eigenvalue of A, and u a corresponding eigenvector; WE MUST HAVE u ≠ 0.