
Chapter 5: General Vector Spaces

Section 5.1: Real vector spaces

Definition: A nonempty set V, on which addition and scalar multiplication have been defined, that satisfies all ten axioms below is called a vector space, and the objects in V are called vectors. For all u, v, w ∈ V and scalars k, m, the ten axioms are
1. If u, v ∈ V, then u + v ∈ V
2. u + v = v + u
3. u + (v + w) = (u + v) + w
4. There exists a zero vector 0 in V such that 0 + u = u + 0 = u for all u ∈ V
5. For each u ∈ V, there exists a negative of u, denoted −u (also in V), such that u + (−u) = (−u) + u = 0
6. If k is a scalar and u ∈ V, then ku ∈ V
7. k(u + v) = ku + kv
8. (k + m)u = ku + mu
9. k(mu) = (km)u
10. 1u = u
Example:
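
For instance, R^3 with componentwise addition and scalar multiplication satisfies all ten axioms. A minimal numerical sanity check (not a proof, since it only tests specific vectors; NumPy and the sample values here are illustrative choices):

```python
import numpy as np

# Spot-check several vector space axioms for R^3 under the usual
# componentwise operations.
u, v, w = np.array([1., 2., 3.]), np.array([-1., 0., 4.]), np.array([2., 2., 2.])
k, m = 3.0, -2.0

assert np.allclose(u + v, v + u)                # axiom 2: commutativity
assert np.allclose(u + (v + w), (u + v) + w)    # axiom 3: associativity
assert np.allclose(u + np.zeros(3), u)          # axiom 4: zero vector
assert np.allclose(u + (-u), np.zeros(3))       # axiom 5: negatives
assert np.allclose(k * (u + v), k * u + k * v)  # axiom 7
assert np.allclose((k + m) * u, k * u + m * u)  # axiom 8
assert np.allclose(k * (m * u), (k * m) * u)    # axiom 9
assert np.allclose(1.0 * u, u)                  # axiom 10
```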

Theorem 5.1.1: Let V be a vector space, u ∈ V, and k a scalar. Then
1. 0u = 0
2. k0 = 0
3. (−1)u = −u
4. If ku = 0, then k = 0 or u = 0

Section 5.2: Subspaces

Definition: A subset W of a vector space V is called a subspace of V if W is itself a vector space under the addition and scalar multiplication defined on V.
Note: Many of the axioms hold automatically on W since they hold on V.
Theorem 5.2.1: If W is a set of one or more vectors from a vector space V, then W is a subspace of V if and only if the following hold
1. If u, v ∈ W, then u + v ∈ W (closed under addition)
2. If u ∈ W and k is a scalar, then ku ∈ W (closed under scalar multiplication)
Note: If W satisfies the conditions in Theorem 5.2.1, then W must also contain the zero vector of V (take k = 0) and the negative of each of its vectors (take k = −1).
Example:
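
For example, the set W = {(x, y, 0)} of vectors in R^3 whose third component is zero passes the subspace test. A sketch of the two closure checks (the sample vectors are illustrative; testing finitely many vectors suggests, rather than proves, closure):

```python
import numpy as np

# Subspace test (Theorem 5.2.1) for W = {(x, y, 0)} in R^3.
def in_W(v):
    return np.isclose(v[2], 0.0)

u = np.array([1., 5., 0.])
v = np.array([-2., 3., 0.])
k = 4.0

assert in_W(u + v)    # closed under addition
assert in_W(k * u)    # closed under scalar multiplication
assert in_W(0.0 * u)  # so W contains the zero vector (k = 0)
```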

Theorem 5.2.2: If Ax = 0 is a homogeneous linear system of m equations in n unknowns, then the set of solution vectors is a subspace of R^n.
Proof:
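
To see the theorem in action, one can compute a basis for the solution space directly; the sketch below uses SymPy's nullspace() on an illustrative rank-1 matrix:

```python
from sympy import Matrix

# The solutions of Ax = 0 form a subspace (Theorem 5.2.2); nullspace()
# returns basis vectors for that subspace.
A = Matrix([[1, 2, 3],
            [2, 4, 6]])   # rank 1, so the solution space has dimension 2
basis = A.nullspace()
print(basis)              # two basis vectors in R^3

# Any linear combination of solutions is again a solution:
v = 3 * basis[0] - 5 * basis[1]
print(A * v)              # the zero vector
```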

Definition: A vector w is a linear combination of the vectors v1, v2, ..., vr if there are scalars ki such that w = k1 v1 + k2 v2 + ... + kr vr.
Example:
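
Deciding whether a given w is a linear combination of v1, ..., vr amounts to solving a linear system for the scalars ki. A numerical sketch with illustrative vectors:

```python
import numpy as np

# Is w in span{v1, v2}?  Solve [v1 v2] k = w and check the result.
v1 = np.array([1., 2., 0.])
v2 = np.array([0., 1., 1.])
w  = np.array([2., 7., 3.])

V = np.column_stack([v1, v2])
k, *_ = np.linalg.lstsq(V, w, rcond=None)
print(k)                      # candidate scalars k1, k2
print(np.allclose(V @ k, w))  # True iff w is a linear combination
```

Here w = 2 v1 + 3 v2, so the check prints True.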

Theorem 5.2.3: If v1, v2, ..., vr are vectors in V, then
1. The set W of all linear combinations of v1, v2, ..., vr is a subspace of V
2. W is the smallest subspace of V that contains v1, v2, ..., vr
Definition: If S = {v1, v2, ..., vr} with each vi ∈ V, then the subspace W of V consisting of all linear combinations of the vectors in S is called the space spanned by v1, v2, ..., vr, and the vectors v1, v2, ..., vr are said to span W. We write W = span(S) or W = span{v1, v2, ..., vr}.
Example:

Theorem 5.2.4: If S = {v1, v2, ..., vr} and S' = {w1, w2, ..., wk} are two sets of vectors in the vector space V, then span{v1, v2, ..., vr} = span{w1, w2, ..., wk} if and only if each vector in S' is a linear combination of the vectors in S, and each vector in S is a linear combination of the vectors in S'.

Section 5.3: Linear independence

Definition: If S = {v1, v2, ..., vr} is a nonempty set of vectors, then S is a linearly independent set if the vector equation k1 v1 + k2 v2 + ... + kr vr = 0 has only the trivial solution k1 = 0, k2 = 0, ..., kr = 0. If there are some nonzero ki such that k1 v1 + k2 v2 + ... + kr vr = 0, then S is a linearly dependent set.
Example:
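
In R^n, the trivial-solution condition can be tested by comparing the rank of the matrix whose columns are the vectors against the number of vectors. An illustrative sketch:

```python
import numpy as np

# Vectors are linearly independent exactly when the matrix having them
# as columns has rank equal to the number of vectors.
v1 = np.array([1., 0., 2.])
v2 = np.array([0., 1., 1.])
v3 = np.array([1., 1., 3.])   # v3 = v1 + v2, so {v1, v2, v3} is dependent

V = np.column_stack([v1, v2, v3])
print(np.linalg.matrix_rank(V))         # 2 < 3  =>  linearly dependent
print(np.linalg.matrix_rank(V[:, :2]))  # 2 = 2  =>  {v1, v2} independent
```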

Theorem 5.3.1: A set S with two or more vectors is
1. Linearly dependent if and only if at least one vector in S is expressible as a linear combination of the other vectors in S
2. Linearly independent if and only if no vector in S is expressible as a linear combination of the other vectors in S
Example:

Theorem 5.3.2:
1. A finite set of vectors that contains the zero vector is linearly dependent
2. A set with exactly two vectors is linearly independent if and only if neither vector is a scalar multiple of the other
Example:

Theorem 5.3.3: Let S = {v1, v2, ..., vr} be a nonempty set of vectors in R^n. If r > n, then S is linearly dependent.
Note: The case r > n is equivalent to having a system of equations with more unknowns than equations.
Example:
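
For instance, any three vectors in R^2 are linearly dependent, whatever their entries. A quick randomized illustration (the seed is arbitrary):

```python
import numpy as np

# Three vectors in R^2 (r = 3 > n = 2) can never be independent.
rng = np.random.default_rng(0)
V = rng.standard_normal((2, 3))   # three random vectors in R^2 as columns
print(np.linalg.matrix_rank(V))   # at most 2, never 3
```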

Section 5.4: Basis and dimension

Definition: If V is a vector space and S = {v1, v2, ..., vn} is a set of vectors in V, then S is a basis for V if the following hold
1. S is linearly independent
2. S spans V
Example:
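
For n vectors in R^n, independence and spanning can be certified together by checking that the matrix with those vectors as columns is invertible. A sketch with an illustrative candidate basis:

```python
import numpy as np

# {v1, v2, v3} is a basis for R^3 iff the matrix with these columns is
# invertible: its columns are then independent and span R^3.
v1, v2, v3 = [1., 0., 0.], [1., 1., 0.], [1., 1., 1.]
V = np.column_stack([v1, v2, v3])
print(np.linalg.det(V))   # 1.0, nonzero  =>  S is a basis for R^3
```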

Note:
1. We can think of a basis as a generalization of the familiar R^2 or R^3 coordinate system
2. We can often find many different (and equally valid) bases for a vector space V

Theorem 5.4.1: Uniqueness of basis representation If S = {v1, v2, ..., vn} is a basis for the vector space V, then every vector v in V can be expressed in the form v = c1 v1 + c2 v2 + ... + cn vn in exactly one way.
Definition: If v = c1 v1 + c2 v2 + ... + cn vn is the expression of v in terms of the basis S, then the scalars ci are the coordinates of v relative to the basis S, and the coordinate vector of v relative to the basis S is denoted (v)S = (c1, c2, ..., cn).
Note: The coordinate vector entries depend on the order of the vectors in the basis S.
Example:
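
Finding the coordinates of v relative to a basis of R^n means solving one linear system; uniqueness is guaranteed by Theorem 5.4.1. A sketch with illustrative vectors:

```python
import numpy as np

# Coordinates of v relative to the basis S = {v1, v2, v3} of R^3:
# solve [v1 v2 v3] c = v; the unique solution c is (v)_S.
v1, v2, v3 = [1., 0., 0.], [1., 1., 0.], [1., 1., 1.]
B = np.column_stack([v1, v2, v3])
v = np.array([4., 3., 1.])

c = np.linalg.solve(B, v)
print(c)   # (v)_S = (1., 2., 1.), i.e. v = 1*v1 + 2*v2 + 1*v3
```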

Definition: A nonzero vector space V is finite dimensional if it has a basis containing a finite number of vectors. Otherwise, V is infinite dimensional. We define the zero vector space to be finite dimensional.

Theorem 5.4.2: Let V be a finite dimensional vector space and let {v1, v2, ..., vn} be any basis of V
1. If a set has more than n vectors, then it is linearly dependent
2. If a set has fewer than n vectors, then it does not span V
Note: This means a basis is the largest possible linearly independent set that spans V (any larger and the set spans V but is not linearly independent; any smaller and the set is linearly independent but doesn't span V).
Theorem 5.4.3: All bases for a finite dimensional vector space have the same number of vectors.
Definition: The dimension of a finite-dimensional vector space V, denoted dim(V), is defined as the number of vectors in a basis for V. We define the zero vector space as having dimension zero.
Note: The two theorems and the definition above tell us that if we know the dimension of our vector space, we know how many basis vectors we must have. Conversely, if we know how many vectors are in a basis, we know the dimension of our vector space.
Example:


Theorem 5.4.4: Plus/Minus Theorem Let S be a nonempty set of vectors in a vector space V
1. If S is a linearly independent set and v is a vector outside span(S), then the set S ∪ {v} is still linearly independent
2. If v ∈ S can be expressed as a linear combination of the other vectors in S, then span(S) = span(S − {v})
Theorem 5.4.5: If V is an n-dimensional vector space, and if S is a set in V with exactly n vectors, then S is a basis for V if either S spans V or S is linearly independent.
Note: We know how many vectors are in a basis of V since we know its dimension. So given a set S containing the right number of vectors, we only need to show that S spans V or that the vectors in S are linearly independent.
Example:


Theorem 5.4.6: Let S be a set of vectors in a finite dimensional vector space V
1. If S spans V but is not a basis for V, then S can be reduced to a basis for V by removing appropriate vectors from S
2. If S is a linearly independent set that is not a basis for V, then S can be enlarged to a basis for V by adding appropriate vectors to S
Example:
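
Part 1 can be carried out mechanically in R^n: place the vectors as columns and keep the pivot columns of the reduced row echelon form, as in this SymPy sketch (the vectors are illustrative):

```python
from sympy import Matrix

# Reducing a spanning set to a basis: the pivot columns of the RREF
# identify an independent subset of the original vectors with the
# same span.
vectors = [Matrix([1, 2, 0]), Matrix([2, 4, 0]),
           Matrix([0, 0, 1]), Matrix([1, 2, 1])]
A = Matrix.hstack(*vectors)
_, pivots = A.rref()
print(pivots)                          # (0, 2): columns 0 and 2 suffice
basis = [vectors[i] for i in pivots]   # a basis for the span of all four
```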

Theorem 5.4.7: If W is a subspace of a finite dimensional space V, then dim(W) ≤ dim(V), and if dim(W) = dim(V) then W = V.
Example:


Section 5.5: Row space, column space and null space

Definition: For an m × n matrix

    A = [ a11  a12  ...  a1n
          a21  a22  ...  a2n
          ...
          am1  am2  ...  amn ]

the row vectors of A are

    r1 = (a11, a12, ..., a1n), r2 = (a21, a22, ..., a2n), ..., rm = (am1, am2, ..., amn)

and the column vectors of A are

    c1 = (a11, a21, ..., am1)^T, c2 = (a12, a22, ..., am2)^T, ..., cn = (a1n, a2n, ..., amn)^T

Note: Since A is an m × n matrix, there are m row vectors, with ri ∈ R^n, and n column vectors, with ci ∈ R^m.
Definition: If A is an m × n matrix
1. The row space of A is the subspace of R^n spanned by the row vectors of A
2. The column space of A is the subspace of R^m spanned by the column vectors of A
3. The null space of A is the subspace of R^n consisting of all solutions to the homogeneous system of equations Ax = 0
Theorem 5.5.1: A system of linear equations Ax = b is consistent if and only if b is in the column space of A.
Example:
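
Theorem 5.5.1 suggests a computational test: b lies in the column space of A exactly when appending b to A does not increase the rank. A sketch with an illustrative system:

```python
import numpy as np

# Ax = b is consistent iff b is in the column space of A, i.e. iff
# rank(A) = rank([A|b]).
A = np.array([[1., 2.],
              [2., 4.],
              [0., 1.]])
b = np.array([1., 2., 3.])    # here b = -5*c1 + 3*c2, so it is in col(A)

aug = np.column_stack([A, b])
print(np.linalg.matrix_rank(A) == np.linalg.matrix_rank(aug))  # True
```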

Definition: For a consistent system of linear equations Ax = b, if the vector x0 is a solution to the system, then x0 is called a particular solution of Ax = b. If S = {v1, ..., vk} is a basis for the null space of A, then the linear combination c1 v1 + ... + ck vk is called a general solution of Ax = 0, and the linear combination x0 + c1 v1 + ... + ck vk is called a general solution of Ax = b.
Theorem 5.5.2: If x0 denotes any solution of the consistent linear system Ax = b and if v1, ..., vk form a basis for the null space of A, then every solution of Ax = b can be expressed in the form x = x0 + c1 v1 + ... + ck vk with the ci scalars. Conversely, for all choices of the ci, such an x is a solution of Ax = b. That is, the general solution of Ax = b is the sum of any particular solution of Ax = b and the general solution of Ax = 0.
Note: In geometric terms, this means the solution set of Ax = b is parallel to the null space of A.
Example:
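
A small SymPy sketch of this structure, with an illustrative system and particular solution:

```python
from sympy import Matrix

# Every solution of Ax = b is a particular solution plus an element of
# the null space (Theorem 5.5.2).
A = Matrix([[1, 2, 3],
            [2, 4, 6]])
b = Matrix([6, 12])

x0 = Matrix([1, 1, 1])        # one particular solution: A*x0 = b
null_basis = A.nullspace()    # basis for the solutions of Ax = 0

x = x0 + 2 * null_basis[0] - 7 * null_basis[1]   # arbitrary c1, c2
print(A * x)                  # Matrix([[6], [12]]) -- still equals b
```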


Theorems 5.5.3-4: Elementary row operations do not change the null space of a matrix. Elementary row operations do not change the row space of a matrix. Elementary row operations CAN change the column space of a matrix.
Theorem 5.5.5: If A and B are row equivalent matrices, then
1. A given set of column vectors of A is linearly independent if and only if the corresponding column vectors of B are linearly independent
2. A given set of column vectors of A forms a basis for the column space of A if and only if the corresponding column vectors of B form a basis for the column space of B
Theorem 5.5.6: If a matrix R is in row echelon form, then the row vectors with the leading 1's form a basis for the row space of R, and the column vectors containing the leading 1's of the row vectors form a basis for the column space of R.
Note: For a linear system of equations Ax = b, these theorems give us a method for finding bases of the row, column and null spaces!
Example:
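
Putting Theorems 5.5.5 and 5.5.6 to work, one pass of row reduction yields bases for all three spaces. A SymPy sketch with an illustrative matrix:

```python
from sympy import Matrix

# Bases from row reduction: row space from the nonzero rows of the
# RREF, column space from the pivot columns of the ORIGINAL matrix,
# null space from the free variables.
A = Matrix([[1, 2, 0, 1],
            [2, 4, 1, 3],
            [1, 2, 1, 2]])

R, pivots = A.rref()
row_basis  = [R.row(i) for i in range(len(pivots))]  # nonzero RREF rows
col_basis  = [A.col(j) for j in pivots]              # pivot columns of A
null_basis = A.nullspace()

print(pivots)           # (0, 2): rank 2
print(len(null_basis))  # 4 - 2 = 2 free variables
```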


Section 5.6: Rank and nullity

Definition: For an m × n matrix A, the fundamental matrix spaces of A are
1. The row space of A (which is equal to the column space of A^T)
2. The column space of A (which is equal to the row space of A^T)
3. The null space of A
4. The null space of A^T
Theorem 5.6.1: If A is any matrix, then the row space and column space of A have the same dimension.
Definition: For an m × n matrix A, the rank of A, denoted rank(A), is the dimension of the row space and column space of A. The nullity of A, denoted nullity(A), is the dimension of the null space of A.
Theorem 5.6.2: If A is any matrix, then rank(A) = rank(A^T).
Theorem 5.6.3: Dimension theorem for matrices If A is a matrix with n columns, then rank(A) + nullity(A) = n.
Theorem 5.6.4: For an m × n matrix A
1. rank(A) = the number of leading variables in the solution of Ax = 0
2. nullity(A) = the number of parameters (free variables) in the general solution of Ax = 0
Note: We can say the following about the dimensions of the four fundamental matrix spaces of A
1. The row space of A has dimension r, where r ≤ min(m, n)
2. The column space of A has dimension r, where r ≤ min(m, n)
3. The null space of A has dimension n − r
4. The null space of A^T has dimension m − r
Example:
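
The dimension theorem is easy to confirm on any example; here is a SymPy sketch reusing an illustrative 3 × 4 matrix of rank 2:

```python
from sympy import Matrix

# rank(A) + nullity(A) = n for any matrix with n columns (Theorem 5.6.3).
A = Matrix([[1, 2, 0, 1],
            [2, 4, 1, 3],
            [1, 2, 1, 2]])

rank = A.rank()
nullity = len(A.nullspace())
print(rank, nullity, rank + nullity == A.cols)   # 2 2 True
```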

Theorem 5.6.5: The Consistency Theorem If Ax = b is a linear system of m equations in n unknowns, then the following are equivalent
1. Ax = b is consistent
2. b is in the column space of A
3. The coefficient matrix A and the augmented matrix [A|b] have the same rank
Theorem 5.6.6: If Ax = b is a linear system of m equations in n unknowns, then the following are equivalent
1. Ax = b is consistent for every m × 1 matrix b
2. The column vectors of A span R^m
3. rank(A) = m
Note: Theorem 5.6.5 is for a specific b, while Theorem 5.6.6 is for every possible b. If Ax = b is an overdetermined system (m > n), then it can't be consistent for every possible b.
Theorem 5.6.7: If Ax = b is a consistent linear system of m equations in n unknowns, and if A has rank r, then the general solution of the system contains n − r parameters.
Theorem 5.6.8: If A is an m × n matrix, then the following are equivalent
1. Ax = 0 has only the trivial solution
2. The column vectors of A are linearly independent
3. Ax = b has at most one solution for every m × 1 matrix b
Example:
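
Theorem 5.6.6 can be tested by comparing rank(A) with m; the overdetermined case in the note shows up as rank(A) < m. An illustrative sketch:

```python
import numpy as np

# Columns span R^m (so Ax = b is solvable for EVERY b) iff rank(A) = m.
A_wide = np.array([[1., 0., 2.],
                   [0., 1., 1.]])   # 2 x 3: rank 2 = m, columns span R^2
A_tall = np.array([[1., 0.],
                   [0., 1.],
                   [1., 1.]])       # 3 x 2: rank 2 < m = 3 (overdetermined)

print(np.linalg.matrix_rank(A_wide) == A_wide.shape[0])  # True
print(np.linalg.matrix_rank(A_tall) == A_tall.shape[0])  # False
```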


Theorem 5.6.9: If TA : R^n → R^n is multiplication by an n × n matrix A, then the following are equivalent
1. A is invertible
2. Ax = 0 has only the trivial solution
3. The reduced row echelon form of A is In
4. A can be expressed as a product of elementary matrices
5. Ax = b is consistent for every n × 1 matrix b
6. Ax = b has exactly one solution for every n × 1 matrix b
7. det(A) ≠ 0
8. The range of TA is R^n
9. TA is one-to-one
10. The column vectors of A are linearly independent
11. The row vectors of A are linearly independent
12. The column vectors of A span R^n
13. The row vectors of A span R^n
14. The column vectors of A form a basis for R^n
15. The row vectors of A form a basis for R^n
16. A has rank n
17. A has nullity 0
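
A few of the equivalent statements, spot-checked on one illustrative invertible matrix:

```python
import numpy as np

# For an invertible A, statements 7, 16 and 6 all hold at once.
A = np.array([[2., 1.],
              [1., 1.]])
n = A.shape[0]

print(np.linalg.det(A) != 0)          # statement 7: det(A) is nonzero
print(np.linalg.matrix_rank(A) == n)  # statement 16: rank n
b = np.array([3., 2.])
print(np.linalg.solve(A, b))          # statement 6: the unique solution [1. 1.]
```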
