
Lecture 2

Examples.

1. For all n1 ≤ n2 , Cn1 is a subspace of Cn2 (as C-vector spaces), identifying Cn1 with the vectors in Cn2 whose last n2 − n1 coordinates are zero.

2. Given a vector space V over F, {~0} is a subspace.

3. In R2 , any subspace is either (a) R2 , (b) {~0} or (c) a line through the origin. Why?
If W is a subspace and contains some w ≠ ~0, it must contain the entire line spanned
by w; that is, the set {cw : c ∈ R}. This is a line through the origin. If it contains
anything outside this line, we can use this new vector along with w to generate all of
R2 .

4. Generally in Rn , any subspace is a k-dimensional flat through the origin for some 0 ≤ k ≤ n: the origin itself, a line, a plane, and so on, up to all of Rn .
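To make the last step of example 3 explicit: if u lies outside the line {cw : c ∈ R}, then w = (w1 , w2 ) and u = (u1 , u2 ) are not parallel, so w1 u2 − w2 u1 ≠ 0, and any (x, y) ∈ R2 can be written as aw + bu. Cramer's rule for the resulting 2 × 2 system gives the coefficients:

```latex
% Solve a w + b u = (x, y) with w = (w_1, w_2), u = (u_1, u_2),
% assuming w_1 u_2 - w_2 u_1 \neq 0 (the vectors are not parallel):
a = \frac{x u_2 - y u_1}{w_1 u_2 - w_2 u_1}, \qquad
b = \frac{y w_1 - x w_2}{w_1 u_2 - w_2 u_1}.
```

Hence the subspace contains every vector of R2 .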

Last time we saw that if W1 and W2 are subspaces of a vector space V then

W1 + W2 = {w1 + w2 : w1 ∈ W1 , w2 ∈ W2 }

is also a subspace. This is actually the smallest subspace containing both W1 and W2 . You
might think this would be W1 ∪ W2 , but in general the union need not be a subspace.
Consider V = R2 over R and

W1 = {(x, 0) : x ∈ R}, W2 = {(0, y) : y ∈ R} .

Then both of these are subspaces but their union is not, since it is not closed under addition
((1, 0) + (0, 1) = (1, 1) ∉ W1 ∪ W2 ).
In the case that W1 ∩ W2 = {~0}, we say that W1 + W2 is a direct sum and we write it
W1 ⊕ W2 .
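The point of the condition W1 ∩ W2 = {~0} is that in a direct sum each vector decomposes uniquely. If w1 + w2 = w1′ + w2′ with wi , wi′ ∈ Wi , then

```latex
w_1 - w_1' = w_2' - w_2 \in W_1 \cap W_2 = \{\vec{0}\},
```

since the left side lies in W1 and the right side lies in W2 ; hence w1 = w1′ and w2 = w2′ .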

Spanning

Given a subset S (not necessarily a subspace) of a vector space V we want to generate the
smallest subspace containing S.

Definition 0.1. Let V be a vector space and S ⊆ V . The span of S is defined

Span(S) = ⋂_{W ∈ C_S} W ,

where C_S is the collection of subspaces of V containing S.

Note that the Span is the smallest subspace containing S in that if W is another subspace
containing S then Span(S) ⊆ W . The fact that Span(S) is a subspace follows from:

Proposition 0.2. Let C be a collection of subspaces of a vector space V . Then ⋂_{W ∈ C} W is
a subspace.

Proof. First each W ∈ C contains ~0, so ⋂_{W ∈ C} W is nonempty. If v, w ∈ ⋂_{W ∈ C} W and c ∈ F
then v, w ∈ W for all W ∈ C. Since each W is a subspace, cv + w ∈ W for all W ∈ C,
meaning that cv + w ∈ ⋂_{W ∈ C} W , completing the proof.

Examples.

1. Span(∅) = {~0}.

2. If W is a subspace of V then Span(W ) = W .

3. Span(Span(S)) = Span(S).

4. If S ⊆ T ⊆ V then Span(S) ⊆ Span(T ).

There is a different way to generate the span of a set. We can imagine that our initial
definition of span is from the outside in. That is, we are intersecting spaces outside of S.
The second will be from the inside out: it builds the span from within, using the elements
of S. To define it, we introduce some notation.

Definition 0.3. If S ⊆ V then v ∈ V is said to be a linear combination of elements of
S if there are finitely many elements v1 , . . . , vn ∈ S and scalars a1 , . . . , an ∈ F such that
v = a1 v1 + · · · + an vn .

Theorem 0.4. Let S ⊆ V be nonempty. Then Span(S) is the set of all linear combinations
of elements of S.

Proof. Let S ′ be the set of all linear combinations of elements of S. We first prove S ′ ⊆
Span(S), so let a1 v1 + · · · + an vn ∈ S ′ . Each of the vi 's is in S and therefore in Span(S).
By closure of Span(S) under addition and scalar multiplication, we find a1 v1 + · · · + an vn ∈
Span(S).
To show that Span(S) ⊆ S ′ , it suffices to show that S ′ is a subspace of V ; then it is one of
the spaces we are intersecting to get Span(S) and we will be done. Because S ≠ ∅ we can find
s ∈ S, and then 1s is a linear combination of elements of S, making S ′ nonempty. So
let v, w ∈ S ′ and c ∈ F. We can write v = a1 v1 + · · · + an vn and w = b1 w1 + · · · + bk wk
for vi , wi ∈ S. Then

cv + w = (ca1 )v1 + · · · + (can )vn + b1 w1 + · · · + bk wk ∈ S ′ .
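Theorem 0.4 also suggests a concrete test: v ∈ Span(S) exactly when the linear system whose columns are the elements of S and whose right-hand side is v is consistent. A small computational sketch (the name `in_span` is mine; it works over the rationals for exactness):

```python
from fractions import Fraction

def in_span(vectors, v):
    """Return True if v is a linear combination of `vectors`.

    Solves [v1 | ... | vn] a = v by Gaussian elimination over Q;
    v is in the span iff the system is consistent.
    """
    m, n = len(v), len(vectors)
    # Augmented matrix [A | v], with the v_i as columns.
    M = [[Fraction(vectors[j][i]) for j in range(n)] + [Fraction(v[i])]
         for i in range(m)]
    row = 0
    for col in range(n):
        # Find a pivot in this column at or below the current row.
        piv = next((r for r in range(row, m) if M[r][col] != 0), None)
        if piv is None:
            continue
        M[row], M[piv] = M[piv], M[row]
        for r in range(m):
            if r != row and M[r][col] != 0:
                f = M[r][col] / M[row][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[row])]
        row += 1
    # Inconsistent iff some row reads 0 = nonzero.
    return all(any(x != 0 for x in r[:n]) or r[n] == 0 for r in M)

# (1, 1) is in Span({(1, 0), (0, 1)}); e3 is not in Span({e1, e2}) in R^3.
print(in_span([[1, 0], [0, 1]], [1, 1]))           # True
print(in_span([[1, 0, 0], [0, 1, 0]], [0, 0, 1]))  # False
```

This is the "inside out" description of the span in computational form: membership reduces to solving for the coefficients a1 , . . . , an .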

Corollary 0.5. If W1 , W2 are subspaces of V then Span(W1 ∪ W2 ) = W1 + W2 .

Proof. Because ~0 ∈ W1 and ~0 ∈ W2 , we have W1 + W2 ⊇ W1 ∪ W2 (each w1 ∈ W1 equals
w1 + ~0 ∈ W1 + W2 , and similarly for W2 ). Therefore W1 + W2 is one
of the subspaces we intersect to get the span and Span(W1 ∪ W2 ) ⊆ W1 + W2 . Conversely,
any element in W1 + W2 is in Span(W1 ∪ W2 ) as it is already a linear combination of elements
of W1 ∪ W2 .

Definition 0.6. A vector space V is finitely generated if there is a finite set S ⊆ V such
that V = Span(S). Such an S is called a generating set.

The space Rn is finitely generated: we can choose

S = {(1, 0, . . . , 0), (0, 1, 0, . . . , 0), . . . , (0, . . . , 0, 1)} .

The space

R∞_c = {(x1 , x2 , . . .) : xi ∈ R, only finitely many xi nonzero}

with coordinate-wise addition and scalar multiplication is not finitely generated: the sequences in any finite subset are all zero beyond some fixed index N , and so then is every linear combination of them, so no finite set can span.

Generating sets are closely linked to linear independence.

Linear independence

Definition 0.7. A set S ⊆ V is called linearly dependent if there exists v ∈ S such that
v ∈ Span(S \ {v}). A set is linearly independent if it is not linearly dependent; in particular, we decree that ∅ is linearly independent.
The intuition is that a set is linearly dependent when it contains an element that is unnecessary for generating Span(S). Indeed, we can restate this condition for S ≠ ∅ as

S linearly dependent iff there exists v ∈ S such that Span(S) = Span(S \ {v}) .

Exercise: prove this!


Examples.
1. {~0} is linearly dependent in any vector space.

2. In C2 , {(1, 0), (0, 1), (1, 1)} is linearly dependent, since (1, 1) ∈ Span({(1, 0), (0, 1)}).

3. In Cn ,
{(1, 0, . . . , 0), . . . , (0, . . . , 0, 1)}
is linearly independent. Indeed, suppose we remove any element from this set. For
simplicity let us take the first. Then every element in the span of the others must have
zero first-coordinate, and cannot be (1, 0, . . . , 0).
There is a very simple condition we can check to see if a set is linearly independent.
Proposition 0.8. Let V be a vector space and S ⊆ V . Then S is linearly independent if
and only if whenever a1 , . . . , an ∈ F and distinct v1 , . . . , vn ∈ S satisfy

a1 v1 + · · · + an vn = ~0

we must have a1 = · · · = an = 0.

Proof. If S = ∅ then S is linearly independent. Furthermore, it satisfies the condition of
the proposition vacuously: it is true because we cannot ever find a linear combination of
elements of S equal to ~0.
Otherwise suppose that S is linearly dependent but S ≠ ∅. Then we can find v ∈ S such
that v ∈ Span(S \ {v}). Therefore v is a linear combination of elements of S \ {v}: we can
find w1 , . . . , wn ∈ S \ {v} and scalars a1 , . . . , an such that v = a1 w1 + · · · + an wn . Then

(−a1 )w1 + · · · + (−an )wn + v = ~0 .

This is a linear combination of elements of S equal to ~0 in which not all coefficients are 0
(the coefficient of v is 1). Taking the contrapositive: if the condition of the proposition holds,
then S must be linearly independent.
Conversely, suppose S is linearly independent and that

a1 v1 + · · · + an vn = ~0

for some distinct v1 , . . . , vn ∈ S and a1 , . . . , an ∈ F. If the coefficients were not all 0, we could find one,
say a1 , which is nonzero. Then we solve:

v1 = −a1⁻¹ [a2 v2 + · · · + an vn ] ,

giving v1 ∈ Span(S \ {v1 }) and contradicting linear independence. (Note here that a1⁻¹ is defined since a1 ≠ 0 and all nonzero field
elements are invertible.)
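Proposition 0.8 yields a mechanical independence test: stack the vectors as rows and check that Gaussian elimination produces a pivot for every vector, i.e. that the rank equals the number of vectors. A sketch (the name `is_independent` is my own; again over the rationals):

```python
from fractions import Fraction

def is_independent(vectors):
    """Check linear independence via Proposition 0.8: the vectors are
    independent iff a1*v1 + ... + an*vn = 0 forces a1 = ... = an = 0,
    i.e. iff the matrix with the v_i as rows has rank n."""
    M = [[Fraction(x) for x in v] for v in vectors]
    n, m = len(M), len(M[0])
    rank = 0
    for col in range(m):
        # Look for a pivot in this column below the rows already used.
        piv = next((r for r in range(rank, n) if M[r][col] != 0), None)
        if piv is None:
            continue
        M[rank], M[piv] = M[piv], M[rank]
        for r in range(rank + 1, n):
            if M[r][col] != 0:
                f = M[r][col] / M[rank][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[rank])]
        rank += 1
    return rank == n

print(is_independent([[1, 0], [0, 1], [1, 1]]))           # False
print(is_independent([[1, 0, 0], [0, 1, 0], [0, 0, 1]]))  # True
```

The first call reproduces example 2 above: (1, 1) = (1, 0) + (0, 1), so the set is dependent.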

Corollary 0.9. Let S1 ⊆ S2 ⊆ V , an F-vector space.

1. If S1 is linearly dependent, so is S2 .

2. If S2 is linearly independent, so is S1 .

Proof. The first item is the contrapositive of the second, so we prove the second. Suppose that S2 is
linearly independent and that v1 , . . . , vn ∈ S1 and a1 , . . . , an ∈ F are such that

a1 v1 + · · · + an vn = ~0 .

Since these vectors are also in S2 and S2 is linearly independent, a1 = · · · = an = 0. Thus
S1 is linearly independent.
