
Topic 4: General Vector Spaces

4.1 The vector space axioms


4.2 Examples of vector spaces
4.3 Complex vector spaces
4.4 Subspaces of general vector spaces
4.5 Spanning sets, linear independence and bases
4.6 Coordinate matrices

172

Vectors in Rn have some basic properties shared by many other
mathematical systems. For example,

u + v = v + u    for all u, v

is true for many other systems.


Key Idea:
Write down these basic properties and look for other systems which
share these properties. Any system that does share these properties will
be called a vector space.

173

4.1 The vector space axioms

[AR 4.1]

Let's start trying to write down the basic properties that we want
vectors to satisfy.
A vector space is a set V with two operations defined:
1. addition
2. scalar multiplication
We want these two operations to satisfy the kind of algebraic properties
that we are used to from vectors in Rn.
For example, we want our vector operations to satisfy

u + v = v + u

and

α(u + v) = αu + αv

This leads to a list of ten properties (or axioms) that we will then take
as our definition.
174

The scalars are members of a number system F called a field, in which
we have addition, subtraction, multiplication and division.
Usually the field will be F = R or C.
In this subject we will mainly concentrate on the case F = R.

175

Definition (Vector Space)

A vector space is a non-empty set V with two operations: addition and
scalar multiplication.
These operations are required to satisfy the following rules.
For any u, v, w ∈ V:

Addition behaves well:
A1  u + v ∈ V                         (closure of vector addition)
A2  (u + v) + w = u + (v + w)         (associativity)
A3  u + v = v + u                     (commutativity)

There must be a zero and inverses:
A4  There exists a vector 0 ∈ V such that
    v + 0 = v for all v ∈ V           (existence of zero vector)
A5  For all v ∈ V, there exists a vector −v
    such that v + (−v) = 0            (additive inverses)
176

Definition (Vector Space continued)

For all u, v ∈ V and α, β ∈ F:

Scalar multiplication behaves well:
M1  αv ∈ V                            (closure of scalar multiplication)
M2  α(βv) = (αβ)v                     (associativity of scalar multiplication)
M3  1v = v                            (multiplication by unit scalar)

Addition and scalar multiplication combine well:
D1  α(u + v) = αu + αv                (distributivity 1)
D2  (α + β)v = αv + βv                (distributivity 2)

177

Remark
It follows from the axioms that for all v ∈ V:
1. 0v = 0
2. (−1)v = −v
Can you prove these?
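One possible argument, using only the axioms (a sketch, not the official proof from the notes):

```latex
% 1. By D2, then add the inverse of 0v to both sides (A5, A2, A4):
0v = (0 + 0)v = 0v + 0v
\implies 0 = 0v + (-(0v)) = (0v + 0v) + (-(0v)) = 0v + \big(0v + (-(0v))\big) = 0v

% 2. (-1)v behaves as an additive inverse of v, so it equals -v:
v + (-1)v = 1v + (-1)v = \big(1 + (-1)\big)v = 0v = 0
```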
We are going to list some systems that obey these rules.
We are not going to show that the axioms hold for these systems.
If, however, you would like to get a feel for how this is done, read AR4.1.

178

4.2 Examples of vector spaces

1. Rn is a vector space with scalars R.

After all, this was what we based our definition on! Vector spaces with
R as the scalars are called real vector spaces.

179

2. Vector space of matrices

Denote by Mm,n (also written Mm,n(R)) the set of all m × n matrices
with real entries.
Mm,n is a real vector space with the following familiar operations:

\[
\begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}
+
\begin{pmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{pmatrix}
=
\begin{pmatrix} a_{11}+b_{11} & a_{12}+b_{12} \\ a_{21}+b_{21} & a_{22}+b_{22} \end{pmatrix}
\]

\[
\alpha \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}
=
\begin{pmatrix} \alpha a_{11} & \alpha a_{12} \\ \alpha a_{21} & \alpha a_{22} \end{pmatrix}
\]

What is the zero vector in this vector space?

Note: Matrix multiplication is not a part of this vector space structure.
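As an illustration (not part of the notes), these operations on M2,2 can be tried in a few lines of NumPy; the matrices A and B and the scalar alpha below are arbitrary choices:

```python
import numpy as np

# Illustrative elements of M_{2,2}(R); the values are our own choices.
A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[0.0, 1.0], [1.0, 0.0]])
alpha = 2.0

# Vector addition and scalar multiplication are entrywise,
# exactly as in the definitions above.
print(A + B)
print(alpha * A)

# The zero "vector" of this space is the 2x2 zero matrix:
Z = np.zeros((2, 2))
assert np.array_equal(A + Z, A)

# Commutativity (axiom A3) holds entrywise:
assert np.array_equal(A + B, B + A)
```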


180

3. Vector space of polynomials

For a fixed n, denote by Pn (or Pn(R)) the set of all polynomials
with degree at most n:

Pn = {a0 + a1 x + a2 x^2 + ... + an x^n | a0, a1, ..., an ∈ R}

If we define vector addition and scalar multiplication by:

(a0 + ... + an x^n) + (b0 + ... + bn x^n) = (a0 + b0) + ... + (an + bn) x^n
α(a0 + a1 x + ... + an x^n) = (αa0) + (αa1)x + ... + (αan)x^n,   α ∈ R

then Pn is a vector space.
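A minimal sketch of these operations: represent an element of Pn by its coefficient list [a0, a1, ..., an]. The helper names below are our own, not from the notes:

```python
def poly_add(p, q):
    # (a0 + ... + an x^n) + (b0 + ... + bn x^n): add coefficients termwise
    return [a + b for a, b in zip(p, q)]

def poly_scale(alpha, p):
    # alpha(a0 + a1 x + ... + an x^n): scale each coefficient
    return [alpha * a for a in p]

p = [1, 0, 2]   # 1 + 2x^2
q = [0, 3, 1]   # 3x + x^2
print(poly_add(p, q))      # [1, 3, 3], i.e. 1 + 3x + 3x^2
print(poly_scale(2, p))    # [2, 0, 4], i.e. 2 + 4x^2
```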
181

4. Vector space of functions

Let S be a set.
Denote by F(S, R) the set of all functions from S to R.
Given f, g ∈ F(S, R) and α ∈ R,
let f + g and αf be the functions defined by the following:

(f + g)(x) = f(x) + g(x)
(αf)(x) = α f(x)

Equipped with these operations, F(S, R) is a vector space.
Remark
The + on the left of the first equation is not the same as the + on
the right!
Why not?
182

Example
Let f, g ∈ F(R, R) be defined by

f : R → R,   f(x) = sin x      and      g : R → R,   g(x) = x^2

What do f + g and 3f mean?
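One way to make the pointwise definitions concrete (a sketch; the helper names are our own):

```python
import math

# f and g from the example above.
f = math.sin
g = lambda x: x ** 2

def add_funcs(f, g):
    # (f + g)(x) = f(x) + g(x): the "+" on the left builds a new function
    return lambda x: f(x) + g(x)

def scale_func(alpha, f):
    # (alpha f)(x) = alpha * f(x)
    return lambda x: alpha * f(x)

h = add_funcs(f, g)        # h(x) = sin(x) + x^2
k = scale_func(3, f)       # k(x) = 3 sin(x)
print(h(0.0))              # sin(0) + 0^2 = 0.0
print(k(math.pi / 2))      # 3 * sin(pi/2) = 3.0
```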

183

What is the zero vector in F(R, R)?

0 : R → R is the function defined by
0(x) = 0
That is, 0 is the function that maps all numbers to zero.

184

4.3 Complex vector spaces

A vector space that has C as the scalars is called a complex vector
space.
Example

C2 = {(a1, a2) | a1, a2 ∈ C}

with the operations

(a1, a2) + (b1, b2) = (a1 + b1, a2 + b2)
α(a1, a2) = (αa1, αa2)

(where a1, a2, b1, b2, α ∈ C)

is a complex vector space.

Remark
All of the above examples of real vector spaces: Rn, Pn(R), F(S, R)
have complex analogues: Cn, Pn(C), F(S, C).

185

Important observation
All the concepts we looked at for Rn
(such as subspaces, linear independence, spanning sets, bases)
carry over directly to general vector spaces.
Why consider general vector spaces?

186

4.4 Subspaces of general vector spaces

Definition (Subspace)
A subspace of a vector space V is a subset S ⊆ V that is itself a vector
space (using the operations from V).
This looks slightly different to the definition we had for subspaces of Rn.
The following theorem shows that, in fact, we get the same thing.

Theorem (Subspace Theorem, AR Thm 4.2.1)

Let V be a vector space.
A subset W ⊆ V is a subspace of V if and only if
0. W is non-empty
1. W is closed under vector addition
2. W is closed under scalar multiplication
187

Note
It follows that a subspace W of V must necessarily contain the zero
vector 0 ∈ V.
Example
Let V = M2,2, the vector space of real 2 × 2 matrices, and let H ⊆ V be
the set of matrices with trace equal to 0, where the trace is the sum of
the diagonal entries.
In other words

\[
H = \left\{ \begin{pmatrix} a & b \\ c & d \end{pmatrix} \;\middle|\; a + d = 0 \right\}
\]

Show that H is a subspace of V.
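The proof rests on the identities tr(A + B) = tr(A) + tr(B) and tr(αA) = α tr(A). A numerical illustration of the Subspace Theorem checklist (not a proof; the matrices below are our own choices):

```python
import numpy as np

def in_H(M):
    # membership test for H: the diagonal entries sum to zero
    return np.isclose(np.trace(M), 0.0)

# Two illustrative elements of H (diagonal entries sum to 0):
A = np.array([[1.0, 2.0], [3.0, -1.0]])
B = np.array([[5.0, 0.0], [7.0, -5.0]])

# Subspace Theorem checklist:
assert in_H(np.zeros((2, 2)))   # 0. H is non-empty (contains the zero matrix)
assert in_H(A + B)              # 1. closed under vector addition
assert in_H(-4.0 * A)           # 2. closed under scalar multiplication
```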

188

Another example
Let
V = P2 = {a0 + a1 x + a2 x^2 | a0, a1, a2 ∈ R}
and
W = {a0 + a1 x + a2 x^2 | a1 a2 > 0} ⊆ V
Is W a subspace of V ?

189

More examples
1. {0} is always a subspace of V
2. V is always a subspace of V
3. The set of diagonal matrices

\[
\left\{ \begin{pmatrix} a & 0 & 0 \\ 0 & b & 0 \\ 0 & 0 & c \end{pmatrix} \;\middle|\; a, b, c \in \mathbb{R} \right\}
\]

is a subspace of M3,3

190

4. The subset of continuous functions {f : R → R | f is continuous}
is a subspace of F(R, R)

5. S = {2 × 2 matrices with determinant equal to 0}

\[
= \left\{ \begin{pmatrix} a & b \\ c & d \end{pmatrix} \;\middle|\; ad - bc = 0 \right\}
\]

is not a subspace of M2,2

6. {f : [0, 1] → R | f(0) = 2} is not a subspace of F([0, 1], R)
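Example 5 can be checked concretely: the two matrices below both have determinant 0, but their sum is the identity matrix, whose determinant is 1, so the set is not closed under addition:

```python
import numpy as np

# Both matrices lie in S (determinant 0)...
A = np.array([[1.0, 0.0], [0.0, 0.0]])
B = np.array([[0.0, 0.0], [0.0, 1.0]])
assert np.isclose(np.linalg.det(A), 0.0)
assert np.isclose(np.linalg.det(B), 0.0)

# ...but A + B is the identity, with determinant 1,
# so S is not closed under addition: not a subspace.
print(np.linalg.det(A + B))
```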

191

4.5 Spanning sets, linear independence and bases

These concepts, which we have seen for subspaces of Rn,
apply equally well in a general vector space.
Let V be a vector space with scalars F and S ⊆ V a subset.

Definition
A linear combination of vectors v1, v2, ..., vk ∈ S is a sum
α1 v1 + ... + αk vk
where each αi is a scalar.

Definition
The set S is linearly dependent if there are vectors v1, ..., vk ∈ S and
scalars α1, ..., αk, at least one of which is non-zero, such that
α1 v1 + ... + αk vk = 0
A set which is not linearly dependent is called linearly independent.
192

Definition
The span of the set S is the set of all linear combinations of vectors
from S:
Span(S) = {α1 v1 + ... + αk vk | v1, ..., vk ∈ S and α1, ..., αk ∈ F}
S is called a spanning set for V if Span(S) = V.
This is the same as saying that S ⊆ V and every vector in V can be
written as a linear combination of vectors from S.
A basis for V is a set which is both linearly independent and a spanning
set for V.

193

Example
Are the following elements of M2,2 linearly independent?

\[
\begin{pmatrix} 1 & 3 \\ 0 & 1 \end{pmatrix},\quad
\begin{pmatrix} 2 & 1 \\ 0 & 1 \end{pmatrix},\quad
\begin{pmatrix} 1 & 3 \\ 0 & 4 \end{pmatrix}
\]
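One way to answer such a question: flatten each 2 × 2 matrix to a vector in R^4 and compute the rank of the matrix whose columns are these vectors (a sketch, using the entries as printed above):

```python
import numpy as np

# The three matrices from the example, flattened to vectors in R^4.
M1 = np.array([[1, 3], [0, 1]])
M2 = np.array([[2, 1], [0, 1]])
M3 = np.array([[1, 3], [0, 4]])

cols = np.column_stack([M.flatten() for M in (M1, M2, M3)])
rank = np.linalg.matrix_rank(cols)
print(rank)   # rank 3 means the three matrices are linearly independent
```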

194

Another example
In P2 the element p(x) = 2 + 2x + 5x^2 is a linear combination of
p1(x) = 1 + x + x^2 and p2(x) = x^2, but q(x) = 1 + 2x + 3x^2 is not.

So {p1, p2} is not a spanning set for P2, though it is linearly
independent.
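Both claims can be checked by working with coefficient vectors relative to {1, x, x^2}: a polynomial lies in the span exactly when appending its coordinate vector does not raise the rank. A sketch (the helper `in_span` is our own name):

```python
import numpy as np

# Coordinate vectors relative to {1, x, x^2}:
p1 = np.array([1.0, 1.0, 1.0])   # 1 + x + x^2
p2 = np.array([0.0, 0.0, 1.0])   # x^2
p  = np.array([2.0, 2.0, 5.0])   # 2 + 2x + 5x^2
q  = np.array([1.0, 2.0, 3.0])   # 1 + 2x + 3x^2

A = np.column_stack([p1, p2])

def in_span(A, v):
    # v is in the column space of A iff appending v does not raise the rank
    return np.linalg.matrix_rank(np.column_stack([A, v])) == np.linalg.matrix_rank(A)

print(in_span(A, p))   # True:  p = 2*p1 + 3*p2
print(in_span(A, q))   # False: no combination matches both the 1 and x coefficients
```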
195

As with Rn we have the following important results:

Theorem
Let V be a vector space.
1. Every spanning set for V contains a basis for V
2. Every linearly independent set in V can be extended to a basis of V
3. Any two bases of V have the same cardinality
(i.e., same number of elements)
The basic idea behind the proof is exactly as we saw with Rn . But there
are some very interesting technical differences! These mostly concern
the possibility that a basis might have infinitely many elements.

196

Definition
The dimension of V , denoted dim(V ), is the number of elements in a
basis of V . We call V finite dimensional if it admits a finite basis, and
infinite dimensional otherwise.
Examples
1. {1, x, x^2, ..., x^n} is a basis for Pn. So dim(Pn(R)) = n + 1

2. \[
\left\{
\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix},
\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix},
\begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix},
\begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}
\right\}
\]
is a basis for M2,2, so dim(M2,2) = 4.

3. \[
\left\{
\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix},
\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix},
\begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}
\right\}
\]
is a basis for the vector space S of 2 × 2 matrices with trace equal
to zero. (See Slide 188.)
This is therefore a 3-dimensional subspace of M2,2.
197

An infinite dimensional example

Let P be the set of all polynomials:
P = {a0 + a1 x + ... + an x^n | n ∈ N, a0, a1, ..., an ∈ R}
Then P is a vector space and the set B = {1, x, x^2, ...} is a basis for P.
So P is an infinite dimensional vector space.
Can you see why B is a basis?

198

In the case of a finite dimensional vector space, we have the following:

Theorem
Suppose V has dimension n, and S is a subset of V . Let |S| denote the
number of elements in S.
1. If |S| < n, then S does not span V .

2. If |S| > n, then S is linearly dependent.


The proof is exactly the same as in the case of Rn .

199

Examples
1. The polynomials
{2 + x + x^2,  1 + x,  1 - 7x^2,  x - x^2}
are linearly dependent, since

2. The matrices
\[
\left\{
\begin{pmatrix} 2 & 1 \\ 3 & 4 \end{pmatrix},
\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix},
\begin{pmatrix} 6 & 7 \\ 4 & 5 \end{pmatrix}
\right\}
\]
do not span M2,2 since
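For the first example, dependence can also be verified numerically with coordinate vectors relative to {1, x, x^2} (a sketch, taking the coefficients as printed; in any case, four vectors in the 3-dimensional space P2 must be dependent by the theorem above):

```python
import numpy as np

# Rows are coordinate vectors relative to {1, x, x^2}.
polys = np.array([
    [2.0, 1.0, 1.0],    # 2 + x + x^2
    [1.0, 1.0, 0.0],    # 1 + x
    [1.0, 0.0, -7.0],   # 1 - 7x^2
    [0.0, 1.0, -1.0],   # x - x^2
])

# Four columns in R^3 can have rank at most 3, so they are dependent.
rank = np.linalg.matrix_rank(polys.T)
print(rank)   # 3 < 4, so the four polynomials are linearly dependent
```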

200

Standard Bases
It is useful to fix names for certain bases.

The standard basis for Rn is

{(1, 0, 0, ..., 0), (0, 1, 0, ..., 0), ..., (0, 0, 0, ..., 0, 1)}

The dimension of Rn is n.

The standard basis for Mm,n consists of the matrices having a single 1
in one position and 0 in every other position:

\[
\begin{pmatrix} 1 & 0 & \dots & 0 \\ 0 & 0 & \dots & 0 \\ \vdots & \vdots & & \vdots \\ 0 & 0 & \dots & 0 \end{pmatrix},\
\begin{pmatrix} 0 & 1 & \dots & 0 \\ 0 & 0 & \dots & 0 \\ \vdots & \vdots & & \vdots \\ 0 & 0 & \dots & 0 \end{pmatrix},\
\dots,\
\begin{pmatrix} 0 & 0 & \dots & 0 \\ 0 & 0 & \dots & 0 \\ \vdots & \vdots & & \vdots \\ 0 & 0 & \dots & 1 \end{pmatrix}
\]

The dimension of Mm,n is mn.
201

The standard basis for Pn is

{1, x, x^2, ..., x^n}

The dimension of Pn is n + 1.

202

4.6 Coordinate matrices

The notion of coordinates relative to a basis carries over from the case
of vectors in Rn.

Definition
Suppose that B = {v1, ..., vn} is an ordered basis for a vector space V.
For any v ∈ V we have
v = α1 v1 + ... + αn vn
for some scalars α1, ..., αn.
The scalars α1, ..., αn are uniquely determined and are called the
coordinates of v relative to B.
The coordinate matrix of v with respect to B is again defined as

\[
[v]_B = \begin{pmatrix} \alpha_1 \\ \vdots \\ \alpha_n \end{pmatrix}
\]
203

The following Lemma shows that the coordinates of a vector with
respect to a basis are uniquely determined.

Lemma
If {v1, ..., vn} is a basis for a vector space V then every vector v ∈ V
can be written uniquely in the form
v = α1 v1 + ... + αn vn
where α1, ..., αn are scalars.
Proof:

204

Examples
1. In P2 with basis B = {1, x, x^2} the polynomial
p = 2 + 7x - 9x^2 has coordinates [p]_B =

2. In M2,2 with basis
\[
B = \left\{
\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix},
\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix},
\begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix},
\begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}
\right\}
\]
the matrix
\[
A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}
\]
has coordinates [A]_B =

205

Coordinates can often be used to reduce problems about general vector
spaces to problems in Rn.
For example, let v1, ..., vk be vectors in a vector space V with basis B.
Then
v1, ..., vk are linearly dependent
if and only if
their coordinate matrices [v1]_B, ..., [vk]_B are linearly dependent.
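Finding a coordinate matrix amounts to solving a linear system: with the basis vectors as columns of a matrix, [v]_B is the solution of B c = v. A sketch in R^3 with an illustrative (non-standard) basis of our own choosing:

```python
import numpy as np

# Columns of B are the basis vectors v1 = (1,1,0), v2 = (0,1,1), v3 = (0,0,1).
B = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]]).T
v = np.array([2.0, 3.0, 4.0])

# The coordinate matrix [v]_B solves  c1*v1 + c2*v2 + c3*v3 = v.
coords = np.linalg.solve(B, v)
print(coords)

# Sanity check: the combination reproduces v.
assert np.allclose(B @ coords, v)
```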

206
