
Math 1b    Row-equivalence; matrix inverses    January 7, 2011

Recall that matrices A and B are row-equivalent when one can be obtained from the other by a sequence of elementary row operations. An elementary row operation on a matrix M gives us a matrix M' whose rows are linear combinations of the rows of M. Since elementary row operations can be undone by other elementary row operations, the rows of M are also linear combinations of the rows of M'. It follows that if A and B are row-equivalent, then the rows of A are linear combinations of the rows of B, and the rows of B are linear combinations of the rows of A. (Later, we will say that A and B have the same row space.)

To review one item from the handout on matrix multiplication, recall that given matrices A and B, the equation A = CB holds for some matrix C if and only if the rows of A are linear combinations of the rows of B. For example,

    [ a11 a12 a13 ]   [ 2 -3  4  5 ] [ b11 b12 b13 ]
    [ a21 a22 a23 ] = [ 6  7 -8  9 ] [ b21 b22 b23 ]
                                     [ b31 b32 b33 ]
                                     [ b41 b42 b43 ]

means that

    (a11, a12, a13) = 2(b11, b12, b13) - 3(b21, b22, b23) + 4(b31, b32, b33) + 5(b41, b42, b43)

and

    (a21, a22, a23) = 6(b11, b12, b13) + 7(b21, b22, b23) - 8(b31, b32, b33) + 9(b41, b42, b43).

So if M and N are row-equivalent, there are square matrices S and T so that M = SN and N = TM.

Inverses

The way I use the terms, a square matrix A is nonsingular when its echelon form is the identity matrix I, and is invertible when there exists a matrix B of the same size so that AB = BA = I. Such a matrix B is called the inverse of A and is denoted by A^{-1}. Note that if A is invertible, then Ax = b is equivalent to x = A^{-1} b. For example, we may check that

    [ 3 5 ]^{-1}   [  2  -5/2 ]
    [ 2 4 ]      = [ -1   3/2 ]

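Both claims above are easy to check numerically. The following is a minimal sketch (assuming NumPy is available; the variable names are mine, not from the notes): it verifies that the rows of A = CB are the stated combinations of the rows of B, and that the 2x2 matrix above times its claimed inverse gives I.

```python
import numpy as np

# The coefficient matrix C from the A = CB example in the notes.
C = np.array([[2.0, -3.0,  4.0, 5.0],
              [6.0,  7.0, -8.0, 9.0]])
B = np.arange(12.0).reshape(4, 3)   # any 4x3 matrix of b_ij works here
A = C @ B

# Row 1 of A really is 2*row1 - 3*row2 + 4*row3 + 5*row4 of B.
print(A[0])
print(2*B[0] - 3*B[1] + 4*B[2] + 5*B[3])

# The 2x2 inverse example: M = [[3, 5], [2, 4]].
M = np.array([[3.0, 5.0],
              [2.0, 4.0]])
Minv = np.array([[ 2.0, -5/2],
                 [-1.0,  3/2]])
print(M @ Minv)   # the 2x2 identity
print(Minv @ M)   # the 2x2 identity
```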
Theorem. A square matrix is invertible if and only if it is nonsingular.

We prove only part of the theorem here, by explaining how to find A^{-1} when A is row-equivalent to I. Suppose A is row-equivalent to I (both n x n). Start with the n x 2n matrix

    M = [ A  I ]

and use row operations or pivots to get to

    M' = [ I  B ]

for some matrix B. We claim that for this B, AB = BA = I. There are various ways to explain why this works. Here is one. Since [A, I] and [I, B] are row-equivalent, there are square matrices S and T so that [A, I] = S[I, B] and [I, B] = T[A, I]. That is, [A, I] = [SI, SB] = [S, SB] and [I, B] = [TA, TI] = [TA, T]. The first equation means A = S and I = SB, so AB = I. The second equation means I = TA and B = T, so BA = I. For another explanation via elementary matrices, see Section 2.4 of LADW. (We will not emphasize elementary matrices in this course.)

Uniqueness of echelon form

Theorem. Two matrices A and B are row-equivalent if and only if they have the same reduced echelon form. (I really shouldn't say "the" reduced echelon form until I am SURE that only one matrix in reduced echelon form is row-equivalent to a given matrix.)

Proof: If A and B have the same reduced echelon form E, then A is row-equivalent to E and E is row-equivalent to B. It follows that A is row-equivalent to B. Now suppose A and B are row-equivalent. Let E1 be a reduced echelon form of A and E2 be a reduced echelon form of B. Then E1 and E2 are row-equivalent. We want to show E1 = E2. A complete discussion requires more details than I want to type. But suppose we know that the special columns of both E1 and E2 occur right away, at the left of the matrices, and that neither has rows of all zeros. (This is a BIG assumption.) That is, suppose E1 = [I, F1] and E2 = [I, F2].
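The [A, I] -> [I, B] procedure can be sketched directly in code. This is a minimal illustration, not production linear algebra (the function name invert_via_row_reduction is my own, and the pivoting and singularity tolerance are my assumptions): it builds the n x 2n matrix [A, I], pivots column by column, and reads B = A^{-1} off the right half.

```python
import numpy as np

def invert_via_row_reduction(A):
    """Form [A | I], row-reduce to [I | B]; then B = A^{-1}.

    Minimal sketch with partial pivoting; raises if A is singular.
    """
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    M = np.hstack([A, np.eye(n)])        # the n x 2n matrix [A, I]
    for col in range(n):
        # Pivot: swap up the row with the largest entry in this column.
        p = col + np.argmax(np.abs(M[col:, col]))
        if abs(M[p, col]) < 1e-12:
            raise ValueError("matrix is singular (not row-equivalent to I)")
        M[[col, p]] = M[[p, col]]
        M[col] /= M[col, col]            # scale so the pivot entry is 1
        for r in range(n):               # clear the rest of the column
            if r != col:
                M[r] -= M[r, col] * M[col]
    return M[:, n:]                      # right half is B = A^{-1}

A = np.array([[3.0, 5.0],
              [2.0, 4.0]])
print(invert_via_row_reduction(A))   # the inverse from the 2x2 example
```

On the example from the notes this recovers [[2, -5/2], [-1, 3/2]].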

Since E1 and E2 are row-equivalent, E2 = CE1 for some matrix C. This means I = CI and F2 = CF1. But then C = I, and so F2 = F1.
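The uniqueness claim can be seen in action with SymPy (assumed available; the two matrices here are my own small example, not from the notes): start from a matrix, apply elementary row operations to get a row-equivalent one, and check that both reduce to the same reduced echelon form.

```python
import sympy as sp

# B is row-equivalent to A: swap the rows, then add 3*(row 1) to row 2.
A = sp.Matrix([[1, 2, 1],
               [2, 4, 3]])
B = A.copy()
B.row_swap(0, 1)
B[1, :] = B[1, :] + 3 * B[0, :]

# Both reduce to the same reduced echelon form, [[1, 2, 0], [0, 0, 1]].
E1, _ = A.rref()
E2, _ = B.rref()
print(E1)
print(E2)
```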
