
CHAPTER 3  Linear Algebra

3.1  Matrices: Sums and Products



Do They Compute?
1.
2 0 6
2 4 2 4
2 0 2


=



A 2.
1 6 3
2 2 3 2
1 0 3


+ =



A B

3. 2 C D, Matrices are not compatible

4.
1 3 3
2 7 2
1 3 1


=



AB 5.
5 3 9
2 1 2
1 0 1


=



BA 6.
3 1 0
8 1 2
9 2 6


=



CD

7.
1 1
6 7

=


DC 8. ( )
T
1 6
1 7

=


DC

9.
T
C D, Matrices are not compatible 10.
T
D C, Matrices are not compatible

11.
2
2 0 0
2 1 10
0 0 2


=



A 12. AD, Matrices are not compatible

13.
3
2 0 3
2 0 2
1 0 0


=



A I 14.
3
1 12 0
4 3 0 1 0
0 0 1


=



B I

15.
3
C I , Matrices are not compatible 16.
2 9
6 7
0 3


=



AC

More Multiplication Practice
17. [ ]
1 0 2 2
1 0 2
1 0 2 2
a b
a c e a e
c d
b d f b f
e f

+

= =


+





18.
0
0
a b d b ad bc ab ba ad bc
c d c a cd dc db da ad bc
+
= =

+



19.
1 1
0 2 0 0
2 0 1 0
2 2
1 1 1 1 1 0 1
1 1
2 2 2

+


= =







20. [ ] [ ] 0 1 0
a b c
d e f d e f
g h k


=





21. [ ] [ ] 0 1 1 1 0
a b c
d e f



= [ ][ ] 1 1 0 d e f not possible

22. [ ] [ ] [ ]
1 1
1 1 0
1 1
a b
a c b d a c b d c d
e f



= + + = + + +







Rows and Columns in Products
23. (a) 5 columns (b) 4 rows (c) 6 4

Which Rules Work for Matrix Multiplication?
24. Counterexample:
A =
1 1
1 0



B =
2 1
0 1



(A +B)(A B) =
3 0 1 2
1 1 1 1


=
3 6
0 1




A
2
B
2
=
2 1 4 3 2 4
1 1 0 1 1 0

=




25. Counterexample:
Also due to the fact that AB BA for most matrices
A =
1 1
1 0



B =
2 1
0 1



(A +B)
2
=
9 0
4 1



AB =
2 0
2 1



A
2
+2AB +B
2
=
2 1 2 0 4 3
2
1 1 2 1 0 1

+ +


=
10 2
5 0





26. Proof:
$(I + A)^2 = (I + A)(I + A) = I(I + A) + A(I + A)$  (distributive property)
$= I^2 + IA + AI + A^2 = I + A + A + A^2$  (identity property)
$= I + 2A + A^2$

27. Proof:
$(A + B)^2 = (A + B)(A + B) = A(A + B) + B(A + B)$  (distributive property)
$= A^2 + AB + BA + B^2$  (distributive property)

Find the Matrix
28. Set
$\begin{bmatrix} a & b \\ c & d \end{bmatrix}\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} = \begin{bmatrix} a + 3b & 2a + 4b \\ c + 3d & 2c + 4d \end{bmatrix} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}.$
From $a + 3b = 0$ and $2a + 4b = 0$ (i.e., $a + 2b = 0$) we get $b = 0$ and then $a = 0$; likewise $c + 3d = 0$ and $2c + 4d = 0$ (i.e., $c + 2d = 0$) give $d = 0$ and $c = 0$.
Therefore no nonzero matrix A will work.

29. B must be 3 2 Set
1 2 3 1 0
0 1 0 1 0
a b
c d
e f



=






a +2c +3e =1
c =0
b +2d +3f =0
d =1

B is any matrix of the form
1 3 2 3
0 1
e f
e f





for any real numbers e and f.

30. Set
1 2 2 0
4 1 1 4
a b
c d

=



a +2c =2 b +2d =0
4a +c =1 4b +d =4
Thus c =1, a =0 b =
8
7
, d =
4
7
and
8
2
7
4
1
7
a b
c d



=



.

Commuters
31.
0 1 0
0 0 1
a
a
a

=


so the matrix commutes with every 2 2 matrix.

a =1 3e
b =2 3f



32.
1
1
k a b a kc b kd
k c d ka c kb d
+ +
=

+ +




1
1
a b k a bk ak b
c d k c kd ck d
+ +
=

+ +



Any matrix of the form

a b
b a



a, b R will commute with
1
1
k
k



.
a +kc =a +bk
k(c b) =0
c =b since k 0

b +kd =ak +b
k(d a) =0
d =a since k 0

Same results from
ka +c =c +kd
and kb +d =a +d


To check:

1 1
1 1
k a b a kb b ka a b k
k b a ka b kb a b a k
+ +
= =

+ +



33.
0 1
1 0
a b c d
c d a b

=




0 1
1 0
a b b a
c d d c

=



b c
a d
=
=


Any matrix of the form ,
a b
a b
b a




R will commute with
0 1
1 0



.

Products with Transposes
34. (a) [ ]
T
1
1 4 3
1

= =


A B (b) [ ]
T
1 1 1
1 1
4 4 4

= =


AB
(c) [ ]
T
1
1 1 3
4

= =


B A (d) [ ]
T
1 1 4
1 4
1 1 4

= =



BA

Reckoning
35. Let $a_{ij}$, $b_{ij}$ be the $ij$th elements of matrices A and B, respectively, for $1 \le i \le m$, $1 \le j \le n$.
$\mathbf{A} - \mathbf{B} = \big[a_{ij} - b_{ij}\big] = \big[a_{ij} + (-1)b_{ij}\big] = \mathbf{A} + (-1)\mathbf{B}$

36. Let $a_{ij}$, $b_{ij}$ be the $ij$th elements of matrices A and B respectively, for $1 \le i \le m$, $1 \le j \le n$.
$\mathbf{A} + \mathbf{B} = \big[a_{ij} + b_{ij}\big] = \big[b_{ij} + a_{ij}\big]$  (from the commutative property of real numbers)
$= \mathbf{B} + \mathbf{A}$

37. Let $a_{ij}$ be the $ij$th element of matrix A and $c$ and $d$ be any real numbers.
$(c + d)\mathbf{A} = \big[(c + d)a_{ij}\big] = \big[ca_{ij} + da_{ij}\big]$  (from the distributive property of real numbers)
$= c\big[a_{ij}\big] + d\big[a_{ij}\big] = c\mathbf{A} + d\mathbf{A}$

38. Let $a_{ij}$, $b_{ij}$ be the $ij$th elements of A and B respectively, for $1 \le i \le m$, $1 \le j \le n$, and let $c$ be any real number. The result again follows from the distributive property of real numbers.
$c(\mathbf{A} + \mathbf{B}) = \big[c(a_{ij} + b_{ij})\big] = \big[ca_{ij} + cb_{ij}\big] = c\big[a_{ij}\big] + c\big[b_{ij}\big] = c\mathbf{A} + c\mathbf{B}$

Properties of the Transpose
Rather than grinding out the proofs of Problems 3942, we make the following observations:
39. $\big(\mathbf{A}^T\big)^T = \mathbf{A}$. Interchanging rows and columns of a matrix two times reproduces the original matrix.

40. $(\mathbf{A} + \mathbf{B})^T = \mathbf{A}^T + \mathbf{B}^T$. Add two matrices and then interchange the rows and columns of the resulting matrix. You get the same as first interchanging the rows and columns of the matrices and then adding.

41. $(k\mathbf{A})^T = k\mathbf{A}^T$. It makes no difference whether you multiply each element of matrix A by $k$ before or after rearranging them to form the transpose.

42. $(\mathbf{AB})^T = \mathbf{B}^T\mathbf{A}^T$. This identity is not so obvious. Due to lack of space we verify it for $2 \times 2$ matrices. The verification for $3 \times 3$ and higher-order matrices follows along exactly the same lines.
$\mathbf{AB} = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}\begin{bmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{bmatrix} = \begin{bmatrix} a_{11}b_{11} + a_{12}b_{21} & a_{11}b_{12} + a_{12}b_{22} \\ a_{21}b_{11} + a_{22}b_{21} & a_{21}b_{12} + a_{22}b_{22} \end{bmatrix}$
$(\mathbf{AB})^T = \begin{bmatrix} a_{11}b_{11} + a_{12}b_{21} & a_{21}b_{11} + a_{22}b_{21} \\ a_{11}b_{12} + a_{12}b_{22} & a_{21}b_{12} + a_{22}b_{22} \end{bmatrix}$
$\mathbf{B}^T\mathbf{A}^T = \begin{bmatrix} b_{11} & b_{21} \\ b_{12} & b_{22} \end{bmatrix}\begin{bmatrix} a_{11} & a_{21} \\ a_{12} & a_{22} \end{bmatrix} = \begin{bmatrix} a_{11}b_{11} + a_{12}b_{21} & a_{21}b_{11} + a_{22}b_{21} \\ a_{11}b_{12} + a_{12}b_{22} & a_{21}b_{12} + a_{22}b_{22} \end{bmatrix}$
Hence, $(\mathbf{AB})^T = \mathbf{B}^T\mathbf{A}^T$ for $2 \times 2$ matrices.
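The identity in Problem 42 is easy to spot-check numerically. Below is a small pure-Python sketch (not part of the original solutions); the two sample matrices are arbitrary choices of ours.

```python
# Spot-check of (AB)^T = B^T A^T on a pair of 2x2 matrices.

def matmul(A, B):
    # Matrix product of A (m x p) and B (p x n) as lists of rows.
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    # Interchange rows and columns.
    return [list(row) for row in zip(*A)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]

lhs = transpose(matmul(A, B))                  # (AB)^T
rhs = matmul(transpose(B), transpose(A))       # B^T A^T
print(lhs == rhs)                              # True
```

Note the order reversal: `matmul(transpose(A), transpose(B))` would generally give a different matrix.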

Transposes and Symmetry
43. If the matrix $\mathbf{A} = \big[a_{ij}\big]$ is symmetric, then $a_{ij} = a_{ji}$. Hence $\mathbf{A}^T = \big[a_{ji}\big]$ is symmetric since $a_{ji} = a_{ij}$.

Symmetry and Products
44. We pick at random the two symmetric matrices
$\mathbf{A} = \begin{bmatrix} 0 & 2 \\ 2 & 1 \end{bmatrix}, \qquad \mathbf{B} = \begin{bmatrix} 3 & 1 \\ 1 & 1 \end{bmatrix},$
which gives
$\mathbf{AB} = \begin{bmatrix} 0 & 2 \\ 2 & 1 \end{bmatrix}\begin{bmatrix} 3 & 1 \\ 1 & 1 \end{bmatrix} = \begin{bmatrix} 2 & 2 \\ 7 & 3 \end{bmatrix}.$
This is not symmetric. In fact, if A, B are symmetric matrices, we have
$(\mathbf{AB})^T = \mathbf{B}^T\mathbf{A}^T = \mathbf{BA},$
which says the only time the product of symmetric matrices A and B is symmetric is when the matrices commute (i.e. $\mathbf{AB} = \mathbf{BA}$).
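Problem 44's conclusion can be confirmed numerically with the same matrices the solution uses. A pure-Python sketch (the helper functions are ours, not the text's):

```python
# For symmetric A and B, (AB)^T = BA, so AB is symmetric exactly
# when A and B commute. These A and B are symmetric but do not commute.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

A = [[0, 2], [2, 1]]          # symmetric
B = [[3, 1], [1, 1]]          # symmetric
AB, BA = matmul(A, B), matmul(B, A)

print(AB)                     # [[2, 2], [7, 3]] -- not symmetric
print(transpose(AB) == BA)    # True: (AB)^T = BA
print(AB == BA)               # False: A and B do not commute
```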

Constructing Symmetry
45. We verify the statement that $\mathbf{A} + \mathbf{A}^T$ is symmetric for any $2 \times 2$ matrix. The general proof follows along the same lines.
$\mathbf{A} + \mathbf{A}^T = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} + \begin{bmatrix} a_{11} & a_{21} \\ a_{12} & a_{22} \end{bmatrix} = \begin{bmatrix} 2a_{11} & a_{12} + a_{21} \\ a_{21} + a_{12} & 2a_{22} \end{bmatrix},$
which is clearly symmetric.

More Symmetry
46. Let

11 12
21 22
31 32
a a
a a
a a


=



A .


Hence, we have

11 12
11 21 31 11 12 T
21 22
12 22 32 21 22
31 32
a a
a a a A A
a a
a a a A A
a a



= =





A A ,
2 2 2
11 11 21 31
12 11 12 21 22 31 32
21 11 12 21 22 31 32
2 2 2
22 12 22 32
.
A a a a
A a a a a a a
A a a a a a a
A a a a
= + +
= + +
= + +
= + +

Note
12 21
A A = , which means
T
AA is symmetric. We could verify the same result for 3 3
matrices.

Trace of a Matrix
47. $\mathrm{Tr}(\mathbf{A} + \mathbf{B}) = \mathrm{Tr}(\mathbf{A}) + \mathrm{Tr}(\mathbf{B})$:
$\mathrm{Tr}(\mathbf{A} + \mathbf{B}) = (a_{11} + b_{11}) + \cdots + (a_{nn} + b_{nn}) = (a_{11} + \cdots + a_{nn}) + (b_{11} + \cdots + b_{nn}) = \mathrm{Tr}(\mathbf{A}) + \mathrm{Tr}(\mathbf{B}).$

48. $\mathrm{Tr}(c\mathbf{A}) = ca_{11} + \cdots + ca_{nn} = c(a_{11} + \cdots + a_{nn}) = c\,\mathrm{Tr}(\mathbf{A})$

49. $\mathrm{Tr}\big(\mathbf{A}^T\big) = \mathrm{Tr}(\mathbf{A})$. Taking the transpose of a (square) matrix does not alter the diagonal elements, so $\mathrm{Tr}\big(\mathbf{A}^T\big) = \mathrm{Tr}(\mathbf{A})$.

50. The $ii$th diagonal entry of $\mathbf{AB}$ is $\sum_{k=1}^{n} a_{ik}b_{ki}$, so
$\mathrm{Tr}(\mathbf{AB}) = \sum_{i=1}^{n}\sum_{k=1}^{n} a_{ik}b_{ki} = \sum_{k=1}^{n}\sum_{i=1}^{n} b_{ki}a_{ik} = \mathrm{Tr}(\mathbf{BA}).$
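The trace identities in Problems 47–50 can be checked on concrete matrices. A pure-Python sketch (the sample matrices are illustrative choices of ours):

```python
# Trace is linear and satisfies Tr(AB) = Tr(BA).

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def trace(A):
    # Sum of the diagonal elements.
    return sum(A[i][i] for i in range(len(A)))

A = [[1, 2], [3, 4]]
B = [[0, 1], [5, 2]]

S = [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]
print(trace(S) == trace(A) + trace(B))            # True (Problem 47)
print(trace(matmul(A, B)), trace(matmul(B, A)))   # 21 21 (Problem 50)
```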






Matrices Can Be Complex
51.
3 0
2
2 4 4
i
i i
+
+ =

+

A B 52.
3 1
8 4 5 3
i i
i i
+ +
=

+

AB

53.
1 3
4 1
i
i i

=


BA 54.
2
6 4 6
6 4 5 8
i i
i i
+
=



A

55.
1 2
2 3 2
i
i
i i
+
=

+

A 56.
1 2 2
2
6 4 5
i i
i
i
+
=


A B

57.
T
1 2
1
i
i i

=

+

B 58. ( ) Tr 2 i = + B

Real and Imaginary Components
59.
1 2 1 0 1 2
2 2 3 2 2 0 3
i i
i
i
+
= = +



A ,
1 1 0 0 1
2 1 0 1 2 1
i
i
i i

= = +

+

B

Square Roots of Zero
60. If we assume

a b
c d

=


A
is the square root of

0 0
0 0



,
then we must have

2
2
2
0 0
0 0
a b a b a bc ab bd
c d c d ac cd bc d
+ +
= = =

+ +

A ,
which implies the four equations

2
2
0
0
0
0.
a bc
ab bd
ac cd
bc d
+ =
+ =
+ =
+ =


From the first and last equations, we have
2 2
a d = . We now consider two cases: first we assume
a d = . From the middle two preceding equations we arrive at 0 b = , 0 c = , and hence 0 a = ,
0 d = . The other condition, a d = , gives no condition on b and c, so we seek a matrix of the
form (we pick 1 a = , 1 d = for simplicity)

1 1 1 0
1 1 0 1
b b bc
c c bc
+
=

+

.
Hence, in order for the matrix to be the zero matrix, we must have
1
b
c
= , and hence

1
1
1
c
c


,
which gives

1 1
0 0 1 1
0 0
1 1
c c
c c



=





.

Zero Divisors
61. No, $\mathbf{AB} = \mathbf{0}$ does not imply that $\mathbf{A} = \mathbf{0}$ or $\mathbf{B} = \mathbf{0}$. For example, the product
$\begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}\begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}$
is the zero matrix, but neither factor is itself the zero matrix.

Does Cancellation Work?
62. No. A counterexample is
$\begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix}\begin{bmatrix} 1 & 2 \\ 0 & 4 \end{bmatrix} = \begin{bmatrix} 0 & 0 \\ 0 & 4 \end{bmatrix} = \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix}\begin{bmatrix} 0 & 0 \\ 0 & 4 \end{bmatrix}, \quad\text{even though}\quad \begin{bmatrix} 1 & 2 \\ 0 & 4 \end{bmatrix} \ne \begin{bmatrix} 0 & 0 \\ 0 & 4 \end{bmatrix}.$
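Both failures of ordinary-number intuition (Problems 61 and 62) are quick to verify by machine. A pure-Python sketch using the same matrices as the worked counterexamples:

```python
# AB can be the zero matrix with A, B nonzero, and AB = AC does
# not force B = C.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

Z = [[0, 0], [0, 0]]
# Problem 61: a product of two nonzero matrices that is zero.
print(matmul([[1, 0], [0, 0]], [[0, 0], [0, 1]]) == Z)   # True

# Problem 62: cancellation fails.
A = [[0, 0], [0, 1]]
B = [[1, 2], [0, 4]]
C = [[0, 0], [0, 4]]
print(matmul(A, B) == matmul(A, C), B == C)              # True False
```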

Taking Matrices Apart
63. (a)
1 2 3
1 5 2
1 0 3
2 4 7


= =



A A A A ,
2
4
3


=



x
,

where
1
A ,
2
A , and
3
A are the three columns of the matrix A and
1
2 x = ,
2
4 x = ,
3
3 x =
are the elements of x
,
. We can write

1 1 2 2 3 3
1 5 2 2 1 2 5 4 2 3 1 5 2
1 0 3 4 1 2 0 4 3 3 2 1 4 0 3 3
2 4 7 3 2 2 4 4 7 3 2 4 7
. x x x
+ +

= = + + = + +

+ +

= + +
Ax
A A A
,


(b) We verify the fact for a 3 3 matrix. The general n n case follows along the same lines.

11 12 13 1 11 1 12 2 13 3 11 1 12 2 13 3
21 22 23 2 21 1 22 2 23 3 21 1 22 2 23 3
31 32 33 3 31 1 32 2 33 3 31 1 32 2 33 3
1
a a a x a x a x a x a x a x a x
a a a x a x a x a x a x a x a x
a a a x a x a x a x a x a x a x
x
+ +

= = + + = + +

+ +

=
Ax
,
11 12 13
21 2 22 3 23 1 1 2 2 3 3
31 32 33
a a a
a x a x a x x x
a a a


+ + = + +



A A A


Diagonal Matrices
64.
11
22
0 0
0 0
0
0 0
nn
a
a
a



=



A

,

11
22
0 0
0 0
0
0 0
nn
b
b
b



=



B

.
By multiplication we get

11 11
22 22
0 0
0 0
0
0 0
nn nn
a b
a b
a b



=



AB

,
which is a diagonal matrix.

65. By multiplication of the general matrices, and commutativity of resulting individual elements, we
have

11 11 11 11
22 22 22 22
0 0 0 0
0 0 0 0
0 0
0 0 0 0
nn nn nn nn
a b b a
a b b a
a b a b



= = =




AB BA



. .

.
However, it is not true that a diagonal matrix commutes with an arbitrary matrix.
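Both halves of the diagonal-matrix discussion (Problems 64–65) can be illustrated with a short pure-Python check; the matrices below are illustrative choices of ours:

```python
# Diagonal matrices multiply entrywise on the diagonal and commute
# with each other, but a diagonal matrix need not commute with an
# arbitrary matrix.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

D1 = [[1, 0], [0, 2]]
D2 = [[3, 0], [0, 4]]
M  = [[0, 1], [0, 0]]                      # not diagonal

print(matmul(D1, D2))                      # [[3, 0], [0, 8]] -- diagonal
print(matmul(D1, D2) == matmul(D2, D1))    # True
print(matmul(D1, M) == matmul(M, D1))      # False
```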

Upper Triangular Matrices
66. (a) Examples are

1 2
0 3



,
1 3 0
0 0 5
0 0 2





,
2 7 9 0
0 3 8 1
0 0 4 2
0 0 0 6






.

(b) By direct computation, it is easy to see that all the entries in the matrix product

11 12 13 11 12 13
22 23 22 23
33 33
0 0
0 0 0 0
a a a b b b
a a b b
a b


=



AB
below the diagonal are zero.

(c) In the general case, if we multiply two upper-triangular matrices, it yields

11 12 13 1 11 12 13 1 11 12 13 1
22 23 2 22 23 2 22 23 2
33 3 33 3 33 3
0 0 0
0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0
n n n
n n n
n n n
nn nn nn
a a a a b b b b c c c c
a a a b b b c c c
a a b b c c
a b c



= =




AB




.
We wont bother to write the general expression for the elements
ij
c ; the important point
is that the entries in the product matrix that lie below the main diagonal are clearly zero.
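The closure claim of Problem 66 is easy to test on a concrete pair of upper-triangular matrices. A pure-Python sketch (our sample matrices):

```python
# The product of two upper-triangular matrices is upper triangular:
# every entry below the main diagonal of the product is zero.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2, 3], [0, 4, 5], [0, 0, 6]]
B = [[7, 8, 9], [0, 1, 2], [0, 0, 3]]
C = matmul(A, B)

below_diagonal_zero = all(C[i][j] == 0
                          for i in range(3) for j in range(3) if i > j)
print(below_diagonal_zero)   # True
```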

Hard Puzzle
67. If

a b
c d

=


M
is a square root of

0 1
0 0

=


A ,
then

2
0 1
0 0

=


M ,
which leads to the condition
2 2
a d = . Each of the possible cases leads to a contradiction. How-
ever for matrix B because

1 0 1 0 1 0
1 1 0 1

=




for any , we conclude that

1 0
1

=


B
is a square root of the identity matrix for any number .

Orthogonality
68.
1 1
2
0 3
k




=1 1 +2 k +3 0 =0
2k =1
k =
1
2

69.
1
2 0
4
k
k




=k 1 +2 0 +k 4 =0
5k =0
k =0

70.
2
1
0 2
3
k
k




=k 1 +0 2 +k
2
3 =0
3k
2
+k =0
k(5k +1) =0
k =0,
1
3


71.
2
1 1
2 1
1 k




=1 (1) +2 1 +k
2
(1) =0
1 k
2
=0
k =1

Orthogonality Subsets
72. Set
1
0
1
a
b
c




=0
a 1 +b 0 +c 1 =0
a +c =0
c = a
Orthogonal set = : ,
a
b a b
a









R



73. Set
1
0
1
a
b
c




=0 to get c =a

Set
2
1
0
a
b
c




=0
2a +b 1 +c 0 =0
2a +b =0
b =2a
Orthogonal set = 2 :
a
a a
a









R


74. Set
1
0
1
a
b
c




=0 to get c =a

Set
2
1
0
a
b
c




=0 to get b =2a

Set
3
4
5
a
b
c




=0 to get a 3 +b 4 +c 5 =0
3a 8a 5a =0
a =0


0
0
0








is the orthogonal set
75. Set
1
0
1
a
b
c




=0 to get c =a

Set
2
1
0
a
b
c




=0 to get b =2a

Set
0
1
2
a
b
c






=a 0 +b(1) +c(2)
=b +2c =0
=2a 2a =0

2 :
a
a a
a









R is the orthogonal set

Dot Products
76. $[2, -1] \cdot [1, 2] = 0$, orthogonal

77. $[3, 0] \cdot [-2, 1] = -6$, not orthogonal. Because the dot product is negative, the angle between the vectors is greater than 90°.

78. $[2, -1, 2] \cdot [3, 1, 0] = 5$. Because the dot product is positive, the angle between the vectors is less than 90°.

79. $[1, 0, 1] \cdot [1, 1, -1] = 0$, orthogonal

80. $[5, -7, 5, 1] \cdot [2, 4, 3, 3] = 0$, orthogonal

81. [ ] [ ] 7, 5, 1, 5 4, 3, 2, 3 30 = , not orthogonal
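All the checks in this group reduce to one helper function. A pure-Python sketch; the sign placements in the sample vectors are assumptions reconstructed from the stated answers, since the printed vectors lost their minus signs:

```python
# Dot product of two vectors of equal length; two vectors are
# orthogonal exactly when their dot product is zero.

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

print(dot([2, -1], [1, 2]))               # 0  -> orthogonal
print(dot([5, -7, 5, 1], [2, 4, 3, 3]))   # 0  -> orthogonal
print(dot([3, 0], [-2, 1]))               # -6 -> angle greater than 90 degrees
```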

Lengths
82. Introducing the two vectors $\vec{u} = [a, b]$ and $\vec{v} = [c, d]$, the distance $d$ between the heads of the vectors is
$d = \sqrt{(a - c)^2 + (b - d)^2}.$
But we also have
$\|\vec{u} - \vec{v}\|^2 = (\vec{u} - \vec{v}) \cdot (\vec{u} - \vec{v}) = (a - c)^2 + (b - d)^2,$
so $d = \|\vec{u} - \vec{v}\|$. This proof can be extended easily to $\vec{u}$ and $\vec{v}$ in $\mathbb{R}^n$.

Geometric Vector Operations
83. $\vec{A} + \vec{C}$ lies on the horizontal axis, from 0 to 2:
$\vec{A} + \vec{C} = [-1, 2] + [3, -2] = [2, 0].$
(Figure: the vectors $\vec{A} = [-1, 2]$, $\vec{C} = [3, -2]$, and their sum $[2, 0]$.)

84. $\vec{B} + \tfrac{1}{2}\vec{A} = [3, 1] + \tfrac{1}{2}[-1, 2] = [2.5, 2].$
(Figure: the vectors $\vec{A} = [-1, 2]$, $\vec{B} = [3, 1]$, and $\vec{B} + \tfrac{1}{2}\vec{A} = [2.5, 2]$.)

85. $\vec{A} - 2\vec{B}$ lies on the horizontal axis, from 0 to $-7$:
$\vec{A} - 2\vec{B} = [-1, 2] - 2[3, 1] = [-7, 0].$
(Figure: the vectors $\vec{A} = [-1, 2]$, $\vec{B} = [3, 1]$, and $\vec{A} - 2\vec{B} = [-7, 0]$.)

Triangles
86. If $[3, 2]$ and $[2, 3]$ are two sides of a triangle, their difference $[1, -1]$ (or $[-1, 1]$) is the third side. If we compute the dot products of these sides, we see
$[3, 2] \cdot [2, 3] = 12,$
$[3, 2] \cdot [1, -1] = 1,$
$[2, 3] \cdot [1, -1] = -1.$
None of these angles are right angles, so the triangle is not a right triangle (see figure).

87. $[2, 1, -2] \cdot [1, 0, 1] = 0$, so in 3-space these vectors form a right angle, since the dot product is zero.

Properties of Scalar Products
We let $\vec{a} = [a_1, \ldots, a_n]$, $\vec{b} = [b_1, \ldots, b_n]$, and $\vec{c} = [c_1, \ldots, c_n]$ for simplicity.

88. True. $\vec{a} \cdot \vec{b} = a_1b_1 + \cdots + a_nb_n = b_1a_1 + \cdots + b_na_n = \vec{b} \cdot \vec{a}$.

89. False. Neither $(\vec{a} \cdot \vec{b}) \cdot \vec{c}$ nor $\vec{a} \cdot (\vec{b} \cdot \vec{c})$ is a valid operation, since each asks for the scalar product of a vector and a scalar, which is not defined.

90. True.
$(k\vec{a}) \cdot \vec{b} = ka_1b_1 + \cdots + ka_nb_n = a_1(kb_1) + \cdots + a_n(kb_n) = \vec{a} \cdot (k\vec{b}) = k(\vec{a} \cdot \vec{b})$

91. True.
$\vec{a} \cdot (\vec{b} + \vec{c}) = a_1(b_1 + c_1) + \cdots + a_n(b_n + c_n) = (a_1b_1 + \cdots + a_nb_n) + (a_1c_1 + \cdots + a_nc_n) = \vec{a} \cdot \vec{b} + \vec{a} \cdot \vec{c}$


Directed Graphs
92. (a)
0 1 1 0 1
0 0 1 0 0
0 0 0 0 1
0 0 0 0 0
0 0 1 1 0



=




A
(b)
2
0 0 2 1 1
0 0 0 0 1
0 0 1 1 0
0 0 0 0 0
0 0 0 0 1



=




A
The ijth entry in
2
A gives the number of paths of length 2 from node i to node j.
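Problem 92's path-counting fact can be reproduced directly from the adjacency matrix given in the solution. A pure-Python sketch:

```python
# Entry (i, j) of A^2 counts the length-2 paths from node i to node j.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[0, 1, 1, 0, 1],
     [0, 0, 1, 0, 0],
     [0, 0, 0, 0, 1],
     [0, 0, 0, 0, 0],
     [0, 0, 1, 1, 0]]

A2 = matmul(A, A)
print(A2[0][2])   # 2: two length-2 paths from node 1 to node 3
```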

Tournament Play
93. The tournament graph had adjacency matrix

0 1 1 0 1
0 0 0 1 1
0 1 0 0 1
1 0 1 0 1
0 0 0 0 0



=




T .
Ranking players by the number of games won means summing the elements of each row of T,
which in this case gives two ties: 1 and 4, 2 and 3, 5. Players 1 and 4 have each won 3 games.
Players 2 and 3 have each won 2 games. Player 5 has won none.


Second-order dominance can be determined from

2
0 1 0 1 2
1 0 1 0 1
0 0 0 1 1
0 2 1 0 2
0 0 0 0 0



=




T
For example,
2
T tells us that Player 1 can dominate Player 5 in two second-order ways (by
beating either Player 2 or Player 4, both of whom beat Player 5). The sum

2
0 2 1 1 3
1 0 1 1 2
0 1 0 1 2
1 2 2 0 3
0 0 0 0 0



+ =




T T ,
gives the number of ways one player has beaten another both directly and indirectly. Reranking
players by sums of row elements of
2
+ T T can sometimes break a tie: In this case it does so and
ranks the players in order 4, 1, 2, 3, 5.
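The whole ranking computation of Problem 93 fits in a few lines. A pure-Python sketch using the tournament matrix T from the solution:

```python
# Rank players by row sums of T (wins); break ties with row sums of
# T + T^2, which add second-order dominance to direct wins.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

T = [[0, 1, 1, 0, 1],
     [0, 0, 0, 1, 1],
     [0, 1, 0, 0, 1],
     [1, 0, 1, 0, 1],
     [0, 0, 0, 0, 0]]

T2 = matmul(T, T)
wins = [sum(row) for row in T]        # [3, 2, 2, 3, 0]: ties 1&4 and 2&3
scores = [sum(a + b for a, b in zip(r1, r2)) for r1, r2 in zip(T, T2)]
ranking = sorted(range(1, 6), key=lambda p: -scores[p - 1])
print(wins, scores, ranking)          # ranking is [4, 1, 2, 3, 5]
```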

Suggested Journal Entry
94. Student Project


3.2  Systems of Linear Equations



Matrix-Vector Form
1.
1 2 1
2 1 0
3 2 1
x
y



=





2.
1
2
3
4
1 2 1 3 2
1 3 3 0 1
i
i
i
i




=






Augmented matrix
1 2 1
2 1 0
3 2 1


=



Augmented matrix
1 2 1 3 2
1 3 3 0 1

=




3.
1 2 1 1
1 3 3 1
0 4 5 3
r
s
t


=



4. [ ]
1
2
3
1 2 3 0
x
x
x


=




Augmented matrix
1 2 1 1
1 3 3 1
0 4 5 3


=



Augmented matrix [ ] 1 2 3 | 0 =

Solutions in
2
R
5. (A) 6. (B) 7. (C) 8. (B) 9. (A)

A Special Solution Set in
3
R
10. The three equations
$x + y + z = 1$
$2x + 2y + 2z = 2$
$3x + 3y + 3z = 3$
are equivalent to the single plane $x + y + z = 1$, which can be written in parametric form by letting $y = s$, $z = t$. We then have the parametric form $\{(1 - s - t,\ s,\ t) : s, t \text{ any real numbers}\}$.

Reduced Row Echelon Form
11. RREF 12. Not RREF (not all zeros above leading ones)

13. Not RREF (leading nonzero element in row 2 is not 1; not all zeros above the leading ones)

14. Not RREF (row 3 does not have a leading one, nor does it move to the right; plus pivot columns
have nonzero entries other than the leading ones)

15. RREF 16. Not RREF (not all zeros above leading ones)

17. Not RREF (not all zeros above leading ones)

18. RREF 19. RREF
Gauss-Jordan Elimination
20. Starting with
1 3 8 0
0 1 2 1
0 1 2 4





( )
3 3 2
1 R R R

= +
1 3 8 0
0 1 2 1
0 0 0 3








3 3
1
3
R R

=
1 3 8 0
0 1 2 1
0 0 0 1





.

This matrix is in row echelon form. To further reduce it to RREF we carry out the following
elementary row operations
( )
1 1 2
3 R R R

= + , ( )
2 2 3
1 R R R

= +

1 0 2 0
0 1 2 0 RREF
0 0 0 1






.
Hence, we see the leading ones in this RREF form are in columns 1, 2, and 4, so the pivot
columns of the original matrix are columns 1, 2, and 4 shown in bold and underlined as follows:

8
2
2





1 3 0
0 1 1
0 1 4
.
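The eliminations in this group of problems can be automated. Below is a minimal RREF routine (our sketch, using exact fractions to avoid round-off) that reproduces the result of Problem 20:

```python
from fractions import Fraction

def rref(M):
    """Reduce a matrix (given as a list of rows) to reduced row echelon form."""
    A = [[Fraction(x) for x in row] for row in M]
    nrows, ncols = len(A), len(A[0])
    r = 0                                           # next pivot row
    for c in range(ncols):
        pivot = next((i for i in range(r, nrows) if A[i][c] != 0), None)
        if pivot is None:
            continue                                # no pivot in this column
        A[r], A[pivot] = A[pivot], A[r]             # swap pivot row up
        A[r] = [x / A[r][c] for x in A[r]]          # scale to a leading 1
        for i in range(nrows):
            if i != r and A[i][c] != 0:             # clear the rest of column c
                A[i] = [a - A[i][c] * b for a, b in zip(A[i], A[r])]
        r += 1
        if r == nrows:
            break
    return A

# The matrix from Problem 20:
result = rref([[1, 3, 8, 0], [0, 1, 2, 1], [0, 1, 2, 4]])
print(result == [[1, 0, 2, 0], [0, 1, 2, 0], [0, 0, 0, 1]])   # True
```

The columns that receive leading 1s in the output are exactly the pivot columns of the original matrix.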

21.
0 0 2 2 2
2 2 6 14 4




1 2
R R
2 2 6 14 4
0 0 2 2 2





1 1
1
2
R R

=
1 1 3 7 2
0 0 2 2 2



2 2
1
2
R R

=
1 1 3 7 2
0 0 1 1 1


.

The matrix is in row echelon form. To further reduce it to RREF we carry out the following
elementary row operation.
( )
1 1 2
3 R R R

= +
1 1 0 4 5
RREF
0 0 1 1 1





The pivot columns of the original matrix are first and third.

0 2 2
2 14 4



0 2
2 6
.

22.
1 0 0
2 4 6
5 8 12
0 8 12







( )
( )
2 2 1
3 3 1
2
5
R R R
R R R

= +
= +

1 0 0
0 4 6
0 8 12
0 8 12







2 2
1
4
R R

=
1 0 0
3
0 1
2
0 8 12
0 8 12











( )
( )
3 3 2
4 4 2
8
8
R R R
R R R

= +
= +

1 0 0
3
0 1
RREF
2
0 0 0
0 0 0







.
This matrix is in both row echelon form and RREF form.

0
6
12
12






1 0
2 4
5 8
0 8

The pivot columns of the original matrix are the first and second columns.

23.
1 2 3 1
3 7 10 4
2 4 6 2






( )
( )
2 2 1
3 3 1
3
2
R R R
R R R

= +
= +
,
1 2 3 1
0 1 1 1 rowechelonform
0 0 0 0






.
The matrix is in row echelon form. To further reduce it to RREF, we carry out the following
elementary row operation.
( )
1 1 2
2 R R R

= +
1 0 1 1
0 1 1 1 RREF
0 0 0 0







The pivot columns of the original matrix are first and second.
3 1
10 4
6 2





1 2
3 7
2 4
.

Solving Systems
24.
1 1 4
1 1 0


( )
2 2 1
1 R R R

= +
1 1 4
0 2 4





*
2 2
1
2
R R =
1 1 4
0 1 2





( )
1 1 2
1 R R R

= +
1 0 2
0 1 2





unique solution; 2 x = , 2 y =

25.
2 1 0
1 1 3





1 2
R R
1 1 3
2 1 0


( )
2 2 1
2 R R R

= +
1 1 3
0 1 6





( )
1 1 2
1 R R R

= + RREF
1 0 3
0 1 6





unique solution; 3 x = , 6 y =

26.
1 1 1 0
0 1 1 1



( )
1 1 2
1 R R R

= + RREF
1 0 0 1
0 1 1 1





arbitrary (infinitely many solutions); 1 x = , 1 y z = , z arbitrary

27.
2 4 2 0
5 3 0 0




1 1
1
2
R R

=
1 2 1 0
5 3 0 0



( )
2 2 1
5 R R R

= +
1 2 1 0
0 7 5 0





2 2
1
7
R R

=
1 2 1 0
5
0 1 0
7



( )
1 1 2
2 R R R

= + RREF
3
1 0 0
7
5
0 1 0
7







nonunique solutions;
3
7
x z = ,
5
7
y z = , z is arbitrary

28.
1 1 2 1
2 3 1 2
5 4 2 4






( )
( )
2 2 1
3 3 1
2
5
R R R
R R R

= +
= +

1 1 2 1
0 5 5 0
0 9 12 1






2 2
1
5
R R

=
1 1 2 1
0 1 1 0
0 9 12 1








( )
1 1 2
3 3 2
9
R R R
R R R

= +
= +

1 0 1 1
0 1 1 0
0 0 3 1






3 3
1
3
R R

=
1 0 1 1
0 1 1 0
1
0 0 1
3









( )
1 1 3
2 2 3
1
R R R
R R R

= +
= +
RREF
2
1 0 0
3
1
0 1 0
3
1
0 0 1
3










unique solution;
2
3
x = ,
1
3
y = ,
1
3
z =
29.
1 4 5 0
2 1 8 9


( )
2 2 1
2 R R R

= +
1 4 5 0
0 9 18 9





2 2
1
9
R R

=
1 4 5 0
0 1 2 1




( )
1 1 2
4 R R R

= + RREF
1 0 3 4
0 1 2 1






nonunique solutions;
1 3
4 3 x x = ,
2 3
1 2 x x = + ,
3
x is arbitrary

30.
1 0 1 2
2 3 5 4
3 2 1 4





( )
( )
2 2 1
3 3 1
2
3
R R R
R R R

= +
= +

1 0 1 2
0 3 3 0
0 2 4 2






2 2
1
3
R R

=
1 0 1 2
0 1 1 0
0 2 4 2




( )
3 3 2
2 R R R

= +
1 0 1 2
0 1 1 0
0 0 2 2






3 3
1
2
R R

=
1 0 1 2
0 1 1 0
0 0 1 1





( )
1 1 3
2 2 3
1 R R R
R R R

= +
= +
RREF
1 0 0 1
0 1 0 1
0 0 1 1







unique solution; 1 x y z = = =

31.
1 1 1 0
1 1 0 0
1 2 1 0






( )
( )
2 2 1
3 3 1
1
1
R R R
R R R

= +
= +

1 1 1 0
0 2 1 0
0 3 2 0







2 2
1
2
R R

=
1 1 1 0
1
0 1 0
2
0 3 2 0




( )
1 1 2
3 3 2
3
R R R
R R R

= +
= +

1
1 0 0
2
1
0 1 0
2
1
0 0 0
2







( )
3 3
2 R R

=
1
1 0 0
2
1
0 1 0
2
0 0 1 0









1 1 3
2 2 3
1
2
1
2
R R R
R R R


= +


= +
RREF
1 0 0 0
0 1 0 0
0 0 1 0







unique solution; 0 x y z = = =


32.
$\begin{bmatrix} 1 & -1 & 2 & | & 0 \\ 2 & 1 & 1 & | & 0 \\ 4 & -1 & 5 & | & 0 \end{bmatrix}$
$R_2^* = R_2 + (-2)R_1$, $R_3^* = R_3 + (-4)R_1$:
$\begin{bmatrix} 1 & -1 & 2 & | & 0 \\ 0 & 3 & -3 & | & 0 \\ 0 & 3 & -3 & | & 0 \end{bmatrix}$
$R_2^* = \tfrac{1}{3}R_2$:
$\begin{bmatrix} 1 & -1 & 2 & | & 0 \\ 0 & 1 & -1 & | & 0 \\ 0 & 3 & -3 & | & 0 \end{bmatrix}$
$R_1^* = R_1 + R_2$, $R_3^* = R_3 + (-3)R_2$:
RREF $= \begin{bmatrix} 1 & 0 & 1 & | & 0 \\ 0 & 1 & -1 & | & 0 \\ 0 & 0 & 0 & | & 0 \end{bmatrix}$
nonunique solutions; $x = -z$, $y = z$, $z$ is arbitrary

33.
$\begin{bmatrix} 1 & -1 & 2 & | & 1 \\ 2 & 1 & 1 & | & 2 \\ 4 & -1 & 5 & | & 4 \end{bmatrix}$
$R_2^* = R_2 + (-2)R_1$, $R_3^* = R_3 + (-4)R_1$:
$\begin{bmatrix} 1 & -1 & 2 & | & 1 \\ 0 & 3 & -3 & | & 0 \\ 0 & 3 & -3 & | & 0 \end{bmatrix}$
$R_2^* = \tfrac{1}{3}R_2$:
$\begin{bmatrix} 1 & -1 & 2 & | & 1 \\ 0 & 1 & -1 & | & 0 \\ 0 & 3 & -3 & | & 0 \end{bmatrix}$
$R_1^* = R_1 + R_2$, $R_3^* = R_3 + (-3)R_2$:
RREF $= \begin{bmatrix} 1 & 0 & 1 & | & 1 \\ 0 & 1 & -1 & | & 0 \\ 0 & 0 & 0 & | & 0 \end{bmatrix}$
nonunique solutions; $x = 1 - z$, $y = z$, $z$ is arbitrary

34.
2 2
2 4 3 0
6 4 2
4
x y z
x y z
x y z
x y
+ + =
=
+ =
=

1 2 1 2
2 4 3 0
1 6 4 2
1 1 0 4







*
2 1 2
*
4 2 4
2 R R R
R R R
= +
= +

1 2 1 2
2 4 3 2
0 8 5 2
0 3 1 2











*
3 2 3
*
4 2 4
3
8
R R R
R R R
= +
= +

1 2 1 2
0 8 5 2
0 0 8 2
7 5
0 0
8 4










*
2 2
*
3 3
*
4 4
1
8
1
8
8
7
R R
R R
R R
=
=
=

1 2 1 2
5 1
0 1
8 4
1
0 0 1
4
10
0 0 1
7












Clearly inconsistent at this point so the RREF =
1 0 0 0
0 1 0 0
0 0 1 0
0 0 0 1








35.

2 2
4
2 2 0
3 2
x x z
x y
x y z
y z
+ + =
=
+ =
+ =



1 2 1 2
1 1 0 4
2 1 2 0
0 3 1 2



*
2 1 2
*
3 1 3
2
R R R
R R R
= +
= +

1 2 1 2
1 3 1 2
0 5 0 4
0 3 1 2









*
3 3
*
4 2 4
1
5
R R
R R R
=
= +

1 2 1 2
1 3 1 2
4
0 1 0
5
0 0 0 0










*
4 3
R R
1 2 1 2
4
0 1 0
5
0 3 1 2
0 0 0 0












*
1 2 1
*
3 2 3
2
3
R R R
R R R
= +
= +
2
1 0 1
5
4
0 1 0
5
22
0 0 1
5
0 0 0 0










*
1 1 3
*
3 3
R R R
R R
= +
=

24
1 0 0
5
4
0 1 0
5
22
0 0 1
5
0 0 0 0











There is a unique solution: x =
24 4
,
5 5
y = and z =
22
5
.
36.
3 4
2 3 4
2 4 1
3 2
x x x
x x x
+ =
+ =



1 0 2 4 1
0 1 1 3 2


is in RREF
infinitely many solutions x
1
=2r +4s +1
x
2
=r +3s +2, r, s R
x
3
=r, x
4
=s

Using the Nonhomogenous Principle
37. In Problem 24,
2
2



is a unique solution so W =
{ }
0
,
and
x
,
=
2
2



+0
,


38. In Problem 25,
3
6



is a unique solution so W =
{ }
0
,
and
x
,
=
3
6



+0
,


39. In Problem 26, infinitely many solutions,
1
1 z
z




, and W =
0
1 :
1
r









R
hence x
,
=
1 0
1 1
0 1
r


+



for any r R

40. In Problem 27, (already a homogeneous system), infinitely many solutions,
W =
3
5 :
7
r r









R and x
,
=
0
0
0





+
3
5
7
r





for any r R

41. In Problem 28,
2
3
1
3
1
3








is a unique solution so that W =
{ }
0
,
and = x
,
2
3
1
3
1
3








+0
,


42. In Problem 29, infinitely many solutions: x
1
=4 3x
3

x
2
=1 +2x
3
,
x
3
arbitrary
W =
3
2 :
1
r r









R

4
1
0


=



x
,
+
3
2
1
r





for any r R

43. In Problem 30, unique solution
1
1
1





so W =
{ }
0
,
and

1
1
1


= +



x 0
,
,


44. In Problem 31, unique solution
0
0
0





so W =
{ }
0
,
and
= + = x 0 0 0
, , ,
,
.

45. In Problem 32, infinitely many solutions x =z, y =z, z arbitrary, so
W =
1
1 :
1
r r









R

1
= 1
1
r


+



x 0
,
,
for any r R

46. In Problem 33, nonunique solutions: x =1 z, y =z, z arbitrary,

W =
1
1 :
1
r r









R

1 1
0 1
0 1
r


= +



x
,
for any r R

47. In Problem 34, W =
0
0
:
0
1
r r











R . However, the system is inconsistent so that there is no x
p

and no general solution.

48. In Problem 35, there is a unique solution: x =
24
5
, y =
4
5
, z =
22
5
so
W =
{ }
0
,
and
24
5
4
5
22
5




= +




x 0
,
,
.






49. In Problem 36, there are infinitely many solutions: x
1
=1 2x
3
+4x
4
, x
2
=2 x
3
+3x
4
,
x
3
is arbitrary, x
4
is arbitrary.
W =
2 4
1 3
: ,
1 0
0 1
r s r s



+






R ,
1 2 4
2 1 3
0 1 0
0 0 1
r s


= + +



x
,
for , r s R

The RREF Example
50. Starting with the augmented matrix, we carry out the following steps

1 0 2 0 1 4 8
0 2 0 2 4 6 6
0 0 1 0 0 2 2
3 0 0 1 5 3 12
0 2 0 0 0 0 6













( )
2 2
4 4 1
1
2
3
R R
R R R

=
= +

1 0 2 0 1 4 8
0 1 0 1 2 3 3
0 0 1 0 0 2 2
0 0 6 1 2 9 12
0 2 0 0 0 0 6












(We leave the next steps for the reader)
RREF =
1 0 0 0 1 0 4
0 1 0 0 0 0 3
0 0 1 0 0 2 2
0 0 0 1 2 3 0
0 0 0 0 0 0 0










More Equations Than Variables
51. Converting the augmented matrix to RREF yields

3 5 0 1 1 0 0 2
3 7 3 8 0 1 0 1
0 5 0 5 0 0 1 3
0 2 3 7 0 0 0 0
1 4 1 1 0 0 0 0








consistent system; unique solution 2, 1 x y = = , 3 z = .

Consistency
52. A homogeneous system $\mathbf{A}\vec{x} = \vec{0}$ always has at least one solution, namely the zero vector $\vec{x} = \vec{0}$.

Homogeneous Systems
53. The equations are
$w - 2x + 5z = 0$
$y + 2z = 0$
If we let $x = r$ and $z = s$, we can solve $y = -2s$, $w = 2r - 5s$. The solution is a plane in $\mathbb{R}^4$ given by
$\begin{bmatrix} w \\ x \\ y \\ z \end{bmatrix} = \begin{bmatrix} 2r - 5s \\ r \\ -2s \\ s \end{bmatrix} = r\begin{bmatrix} 2 \\ 1 \\ 0 \\ 0 \end{bmatrix} + s\begin{bmatrix} -5 \\ 0 \\ -2 \\ 1 \end{bmatrix}$
for $r$, $s$ any real numbers.

54. The equations are
$x + 2z = 0$
$y = 0$
If we let $z = s$, we have $x = -2s$ and hence the solution is a line in $\mathbb{R}^3$ given by
$\begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} -2s \\ 0 \\ s \end{bmatrix} = s\begin{bmatrix} -2 \\ 0 \\ 1 \end{bmatrix}.$

55. The equation is
$x_1 - 4x_2 + 3x_3 + 0x_4 = 0.$
If we let $x_2 = r$, $x_3 = s$, $x_4 = t$, we can solve
$x_1 = 4x_2 - 3x_3 = 4r - 3s.$
Hence
$\begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} = \begin{bmatrix} 4r - 3s \\ r \\ s \\ t \end{bmatrix} = r\begin{bmatrix} 4 \\ 1 \\ 0 \\ 0 \end{bmatrix} + s\begin{bmatrix} -3 \\ 0 \\ 1 \\ 0 \end{bmatrix} + t\begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \end{bmatrix},$
where $r$, $s$, $t$ are any real numbers.

Making Systems Inconsistent
56.
1 0 3
0 2 4
1 0 5






*
2 2
*
3 1 3
1

2
R R
R R R
=
= +

1 0 3
0 1 2
0 0 2






*
3 3
1
2
R R =
1 0 3
0 1 2
0 0 1







Rank =3 because every column is a pivot column.

57.
4 5
1 6
3 1





Rank =2


4 5
1 6
3 1
a
b
c






2 1
R R
1 6
4 5
3 1
b
a
c






*
2 1 2
*
3 1 3
4 +R
3
R R
R R R
=
= +

1 0
0 5 5
0 1 3
b
b a
b c


+

+




*
3 3
2 3
R R
R R
=


1 6
0 1 3
0 5 5
b
b c
b a


+


*
3 2 3
5 R R R = +
1 6
0 1 3
0 0 20 5
b
b c
a b c


+


Thus the system is inconsistent for all vectors
a
b
c





for which a 20b +5c 0.

58. Find the RREF:
1 2 1
1 0 3
0 1 2





1 2
*
3 3
R R
R R

=

1 0 3
1 2 1
0 1 2





*
2 1 2
R R R = +
1 0 3
0 2 2
0 1 2







*
2 2
1
2
R R =
1 0 3
0 1 1
0 1 2





*
3 2 3
R R R = +
1 0 3
0 1 1
0 0 1







*
1 3 1
*
2 3 2
*
3 3
3 R R R
R R R
R R
= +
= +
=

1 0 0
0 1 0
0 0 1





rank A =3

59. Find the RREF:
1 1 2
2 1 1
4 1 5





*
2 1 2
*
3 1 3
2
4
R R R
R R R
= +
= +
1 1 2
0 3 3
0 3 3









*
3 2 3
*
2 3
1
3
R R R
R R
= +
=

1 1 2
0 1 1
0 0 0





rank A =2

For arbitrary a, b, and c:

1 1 2
2 1 1
4 1 5
a
b
c





*
2 1 2
*
3 1 3
2
4
R R R
R R R
= +
= +
1 1 2
0 3 3 2
0 3 3 4
a
a b
a c


+

+



*
3 2 3
R R R = +
1 1 2
0 3 3 2
0 0 0 2
a
a b
a b c


+

+



Any vector
a
b
c





for which 2a b +c 0

60.
1 1 1
1 1 0
1 2 1






*
2 1 2
*
3 1 3
R R R
R R R
= +
= +
1 1 1
0 2 1
0 3 2





*
2 2
1
2
R R =

1 1 1
1
0 1
2
0 3 2






*
1 2 1
*
3 2 3
3
R R R
R R R
= +
= +
1
1 0
2
1
0 1
2
2
0 0
3









*
3 2
3
2
R R =

1
1 0
2
1
0 1
8
0 0 1











*
1 3 1
*
2 3 2
1
2
1
8
R R R
R R R
= +
= +

1 0 0
0 1 0
0 0 1





rank A =3

Seeking Consistency
61. 4 k

62. Any k will produce a consistent system

63. 1 k

64. The system is inconsistent for all k because the last two equations are parallel and distinct.

65.
1 0 0 1 2
0 2 4 0 6
1 1 2 1 1
2 2 4 2 k







*
2 2
*
3 1 3
*
4 1 4
1
2
2
R R
R R R
R R R
=
= +
= +

1 0 0 1 2
0 1 2 0 3
0 1 2 0 3
0 2 4 0 4 k





+




*
3 2 3
*
4 1 4
2
R R R
R R R
= +
= +

1 0 0 1 2
0 1 2 0 3
0 0 0 0 0
0 0 0 0 10 k





+


Consistent if k =10.
Not Enough Equations
66. a.
2 1 0 0 3
1 1 1 1 3
2 3 4 4 9





1 2
R R

1 1 1 1 3
2 1 0 0 3
2 3 4 4 9








*
2 1 2
*
3 1 3
2
2
R R R
R R R
= +
= +

1 1 1 1 3
0 3 2 2 3
0 1 2 2 3





2 3
R R

1 1 1 1 3
0 1 2 2 3
0 3 2 2 3









*
3 2 3
3 R R R = +

1 0 1 1 6
0 1 2 2 3
0 0 8 8 12







*
3 3
1
8
R R =

1 0 1 1 6
0 1 2 2 3
3
0 0 1 1
2











This matrix is in row-echelon form and has 3 pivot colums Rank =3
Consequently, there are infinitely many solutions because it represents a consistent
system.

b.
2 1 0 0 3
1 1 1 1 3
1 2 1 1 6





1 2
R R

1 1 1 1 3
2 1 0 0 3
1 2 1 1 6








*
2 1 2
*
3 1 2
2 R R R
R R R
= +
= +

1 1 1 1 3
0 3 2 2 3
0 3 2 2 9







1 1 1 1 3
0 3 2 2 3
0 0 0 0 6








Clearly inconsistent, no solutions.

Not Enough Variables
67. Matrices with the following RREFs


1 0
0 1
0 0 0
0 0 0
a
b






,
1 0 1
0 0 0
0 0 0
0 0 0






and
1 0
0 0
0 0 0
0 0 0
a
b






, where a and b are nonzero real numbers,

will have, respectively, a unique solution, infinitely many solutions, and no solutions.

True/False Questions
68. a) False.
1 2
3 0



and
1 0
0 2



have the same RREF.
b) False. A =
1
0



has rank 1 [ ]
1 2
0 1
a

=



1 2
0 1
a
a
=
=
contradiction
no solutions
c) False. Consider the matrix
1 1
1 1

=


A
,
and
1
2

=


b
,

Then
1 1 1
1 1 2



has RREF
1 1 1
0 0 1



so the system is inconsistent.
However, the system = Ax c
, ,
where
2
2

=


c
,
is consistent.

Equivalence of Systems
69. Inverse of $R_i \leftrightarrow R_j$: The operation that puts the system back the way it was is $R_j \leftrightarrow R_i$. In other words, the operation $R_3 \leftrightarrow R_1$ will undo the operation $R_1 \leftrightarrow R_3$.

Inverse of $R_i^* = cR_i$: The operation that puts the system back the way it was is $R_i^* = \tfrac{1}{c}R_i$. In other words, the operation $R_1^* = \tfrac{1}{3}R_1$ will undo the operation $R_1^* = 3R_1$.

Inverse of $R_i^* = R_i + cR_j$: The operation that puts the system back is $R_i^* = R_i - cR_j$. In other words, $R_i^* = R_i - cR_j$ will undo the operation $R_i^* = R_i + cR_j$. This is clear because if we add $cR_j$ to row $i$ and then subtract $cR_j$ from row $i$, then row $i$ is unchanged. For example, starting from
$\begin{bmatrix} 1 & 2 & 3 \\ 2 & 1 & 1 \end{bmatrix},$
the operation $R_1^* = R_1 + 2R_2$ gives
$\begin{bmatrix} 5 & 4 & 5 \\ 2 & 1 & 1 \end{bmatrix},$
and then $R_1^* = R_1 + (-2)R_2$ restores
$\begin{bmatrix} 1 & 2 & 3 \\ 2 & 1 & 1 \end{bmatrix}.$
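The undo property of the third elementary operation can be demonstrated in a few lines of pure Python; the sample matrix follows the worked example above:

```python
# R1 <- R1 + 2*R2 followed by R1 <- R1 - 2*R2 returns the matrix
# to its original state, since R2 is unchanged in between.

M = [[1, 2, 3], [2, 1, 1]]
original = [row[:] for row in M]                 # deep-ish copy of the rows

M[0] = [a + 2 * b for a, b in zip(M[0], M[1])]   # R1* = R1 + 2R2
print(M[0])                                      # [5, 4, 5]
M[0] = [a - 2 * b for a, b in zip(M[0], M[1])]   # undo: R1* = R1 - 2R2
print(M == original)                             # True
```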

Homogeneous versus Nonhomogeneous
70. For the homogeneous equation of Problem 32, we can write the solution as
$\vec{x}_h = c\begin{bmatrix} -1 \\ 1 \\ 1 \end{bmatrix}, \quad c \in \mathbb{R},$
where $c$ is an arbitrary constant.

For the nonhomogeneous equation of Problem 33, we can write the solution as
$\vec{x} = \begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} + c\begin{bmatrix} -1 \\ 1 \\ 1 \end{bmatrix}, \quad c \in \mathbb{R}.$

In other words, the general solution of the nonhomogeneous algebraic system, Problem 33, is the sum of the solutions of the associated homogeneous equation plus a particular solution.

Solutions in Tandem
71. There is nothing surprising here. By placing the two right-hand sides in the last two columns of
the augmented matrix, the student is simply organizing the material effectively. Neither of the last
two columns affects the other column, so the last two columns will contain the respective
solutions.



Tandem with a Twist
72. (a) We place the right-hand sides of the two systems in the last two columns of the augmented matrix
$\left[\begin{array}{ccc|cc} 1 & 1 & 0 & 3 & 5 \\ 0 & 2 & 1 & 2 & 4 \end{array}\right].$
Reducing this matrix to RREF yields
$\left[\begin{array}{ccc|cc} 1 & 0 & -\frac12 & 2 & 3 \\ 0 & 1 & \frac12 & 1 & 2 \end{array}\right].$
Hence, the first system has solutions $x = 2 + \frac12 z$, $y = 1 - \frac12 z$, $z$ arbitrary, and the second system has solutions $x = 3 + \frac12 z$, $y = 2 - \frac12 z$, $z$ arbitrary.

(b) If you look carefully, you will see that the matrix equation
$\begin{bmatrix} 1 & 1 & 0 \\ 0 & 2 & 1 \end{bmatrix}\begin{bmatrix} x_{11} & x_{12} \\ x_{21} & x_{22} \\ x_{31} & x_{32} \end{bmatrix} = \begin{bmatrix} 3 & 5 \\ 2 & 4 \end{bmatrix}$
is equivalent to the two systems of equations
$\begin{bmatrix} 1 & 1 & 0 \\ 0 & 2 & 1 \end{bmatrix}\begin{bmatrix} x_{11} \\ x_{21} \\ x_{31} \end{bmatrix} = \begin{bmatrix} 3 \\ 2 \end{bmatrix}, \qquad \begin{bmatrix} 1 & 1 & 0 \\ 0 & 2 & 1 \end{bmatrix}\begin{bmatrix} x_{12} \\ x_{22} \\ x_{32} \end{bmatrix} = \begin{bmatrix} 5 \\ 4 \end{bmatrix}.$
We saw in part (a) that the solution of the system on the left is
$x_{11} = 2 + \tfrac12 x_{31}, \quad x_{21} = 1 - \tfrac12 x_{31}, \quad x_{31} \text{ arbitrary},$
and the solution of the system on the right is
$x_{12} = 3 + \tfrac12 x_{32}, \quad x_{22} = 2 - \tfrac12 x_{32}, \quad x_{32} \text{ arbitrary}.$
Putting these solutions in the columns of our unknown matrix $\mathbf{X}$ and calling $x_{31} = \alpha$, $x_{32} = \beta$, we have
$\mathbf{X} = \begin{bmatrix} x_{11} & x_{12} \\ x_{21} & x_{22} \\ x_{31} & x_{32} \end{bmatrix} = \begin{bmatrix} 2 + \frac{\alpha}{2} & 3 + \frac{\beta}{2} \\[2pt] 1 - \frac{\alpha}{2} & 2 - \frac{\beta}{2} \\[2pt] \alpha & \beta \end{bmatrix}.$
Two Thousand Year Old Problem
73. Letting $A_1$ and $A_2$ be the areas of the two fields in square yards, we are given the two equations
$A_1 + A_2 = 1800 \ \text{square yards}$
$\tfrac{2}{3}A_1 + \tfrac{1}{2}A_2 = 1100 \ \text{bushels}.$
The areas of the two fields are 1200 and 600 square yards.

Computerizing
74. 2×2 Case. To solve the 2×2 system
$a_{11}x_1 + a_{12}x_2 = b_1$
$a_{21}x_1 + a_{22}x_2 = b_2$
we start by forming the augmented matrix
$[\mathbf{A} \mid \vec{b}\,] = \left[\begin{array}{cc|c} a_{11} & a_{12} & b_1 \\ a_{21} & a_{22} & b_2 \end{array}\right].$
Step 1: If $a_{11} \neq 1$, factor it out of row 1. If $a_{11} = 0$, interchange the rows and then factor the new element in the 11 position out of the first row. (This gives a 1 in the first position of the first row.)
Step 2: Subtract from the second row the first row times the element in the 21 position of the new matrix. (This gives a zero in the first position of the second row.)
Step 3: Factor the element in the 22 position out of the second row of the new matrix. If this element is zero and the element in the 23 position is nonzero, there are no solutions. If both this element and the element in the 23 position are zero, there are an infinite number of solutions; to find them, write out the equation corresponding to the first row of the final matrix. (This gives a 1 in the first nonzero position of the second row.)
Step 4: Subtract from the first row the second row times the element in the 12 position of the new matrix. This operation yields a matrix of the form
$\left[\begin{array}{cc|c} 1 & 0 & r_1 \\ 0 & 1 & r_2 \end{array}\right],$
where $x_1 = r_1$, $x_2 = r_2$. (This gives a zero in the second position of the first row.)

75. The basic idea is to formalize a strategy like that used in Example 3. The augmented matrix for $\mathbf{A}\vec{x} = \vec{b}$ is
$\left[\begin{array}{ccc|c} a_{11} & a_{12} & a_{13} & b_1 \\ a_{21} & a_{22} & a_{23} & b_2 \\ a_{31} & a_{32} & a_{33} & b_3 \end{array}\right].$
A pseudocode might begin:
1. To get a one in the first place in row 1, multiply every element of row 1 by $\dfrac{1}{a_{11}}$.
2. To get a zero in the first place in row 2, replace row 2 by $(\text{row 2}) - a_{21}\,(\text{row 1})$.
Electrical Circuits
76. (a) There are four junctions in this multicircuit, and Kirchhoff's current law states that the sum of the currents flowing into and out of any junction is zero. The given equations simply state this fact for the four junctions $J_1$, $J_2$, $J_3$, and $J_4$, respectively. Keep in mind that if a current is negative in sign, then the actual current flows in the direction opposite the indicated arrow.

(b) The augmented system is
$\left[\begin{array}{cccccc|c} 1 & -1 & -1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & -1 & -1 & 0 & 0 \\ 0 & 0 & 1 & 1 & 0 & -1 & 0 \\ -1 & 0 & 0 & 0 & 1 & 1 & 0 \end{array}\right].$
Carrying out the three elementary row operations, we can transform this system to RREF
$\left[\begin{array}{cccccc|c} 1 & 0 & 0 & 0 & -1 & -1 & 0 \\ 0 & 1 & 0 & -1 & -1 & 0 & 0 \\ 0 & 0 & 1 & 1 & 0 & -1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{array}\right].$
Solving for the lead variables $I_1$, $I_2$, $I_3$ in terms of the free variables $I_4$, $I_5$, $I_6$, we have
$I_1 = I_5 + I_6, \quad I_2 = I_4 + I_5, \quad I_3 = -I_4 + I_6.$
In matrix form, this becomes
$\begin{bmatrix} I_1 \\ I_2 \\ I_3 \\ I_4 \\ I_5 \\ I_6 \end{bmatrix} = I_4\begin{bmatrix} 0 \\ 1 \\ -1 \\ 1 \\ 0 \\ 0 \end{bmatrix} + I_5\begin{bmatrix} 1 \\ 1 \\ 0 \\ 0 \\ 1 \\ 0 \end{bmatrix} + I_6\begin{bmatrix} 1 \\ 0 \\ 1 \\ 0 \\ 0 \\ 1 \end{bmatrix},$
where $I_4$, $I_5$, and $I_6$ are arbitrary. In other words, we need three of the six currents to uniquely specify the remaining ones.

More Circuit Analysis
77. $I_1 - I_2 - I_3 = 0$
$-I_1 + I_2 + I_3 = 0$

78. $I_1 - I_2 - I_3 - I_4 = 0$
$-I_1 + I_2 + I_3 + I_4 = 0$

79. $I_1 - I_2 - I_3 - I_4 = 0$
$-I_1 + I_2 + I_5 = 0$
$I_3 + I_4 - I_5 = 0$

80. $I_1 - I_2 - I_3 = 0$
$I_2 - I_4 - I_5 = 0$
$I_3 + I_4 - I_6 = 0$
$-I_1 + I_5 + I_6 = 0$

Suggested Journal Entry I
81. Student Project

Suggested Journal Entry II
82. Student Project

3.3 The Inverse of a Matrix

Checking Inverses
1. $\begin{bmatrix} 5 & 3 \\ 2 & 1 \end{bmatrix}\begin{bmatrix} -1 & 3 \\ 2 & -5 \end{bmatrix} = \begin{bmatrix} (5)(-1)+(3)(2) & (5)(3)+(3)(-5) \\ (2)(-1)+(1)(2) & (2)(3)+(1)(-5) \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$

2. Direct multiplication, as in Problem 1, shows that the product of the given matrix and its proposed inverse is the identity matrix.

3. Direct multiplication as in Problems 1–2.

4. Direct multiplication as in Problems 1–2.

Matrix Inverses
5. We reduce $[\mathbf{A} \mid \mathbf{I}]$ to RREF:
$\left[\begin{array}{cc|cc} 2 & 0 & 1 & 0 \\ 1 & 1 & 0 & 1 \end{array}\right]
\xrightarrow{R_1^* = \frac12 R_1}
\left[\begin{array}{cc|cc} 1 & 0 & \frac12 & 0 \\ 1 & 1 & 0 & 1 \end{array}\right]
\xrightarrow{R_2^* = R_2 + (-1)R_1}
\left[\begin{array}{cc|cc} 1 & 0 & \frac12 & 0 \\ 0 & 1 & -\frac12 & 1 \end{array}\right].$
Hence, $\mathbf{A}^{-1} = \begin{bmatrix} \frac12 & 0 \\ -\frac12 & 1 \end{bmatrix}$.

6. We reduce $[\mathbf{A} \mid \mathbf{I}]$ to RREF:
$\left[\begin{array}{cc|cc} 1 & 3 & 1 & 0 \\ 2 & 5 & 0 & 1 \end{array}\right]
\xrightarrow{R_2^* = R_2 + (-2)R_1}
\left[\begin{array}{cc|cc} 1 & 3 & 1 & 0 \\ 0 & -1 & -2 & 1 \end{array}\right]
\xrightarrow{R_2^* = (-1)R_2}
\left[\begin{array}{cc|cc} 1 & 3 & 1 & 0 \\ 0 & 1 & 2 & -1 \end{array}\right]
\xrightarrow{R_1^* = R_1 + (-3)R_2}
\left[\begin{array}{cc|cc} 1 & 0 & -5 & 3 \\ 0 & 1 & 2 & -1 \end{array}\right].$
Hence, $\mathbf{A}^{-1} = \begin{bmatrix} -5 & 3 \\ 2 & -1 \end{bmatrix}$.

7. Starting with
$[\mathbf{A} \mid \mathbf{I}] = \left[\begin{array}{ccc|ccc} 0 & 1 & 1 & 1 & 0 & 0 \\ 5 & -1 & 1 & 0 & 1 & 0 \\ 3 & 3 & 3 & 0 & 0 & 1 \end{array}\right],$
interchange $R_1 \leftrightarrow R_2$ and scale, $R_1^* = \frac15 R_1$, then $R_3^* = R_3 + (-3)R_1$:
$\left[\begin{array}{ccc|ccc} 1 & -\frac15 & \frac15 & 0 & \frac15 & 0 \\ 0 & 1 & 1 & 1 & 0 & 0 \\ 0 & \frac{18}{5} & \frac{12}{5} & 0 & -\frac35 & 1 \end{array}\right].$
Next $R_1^* = R_1 + \frac15 R_2$ and $R_3^* = R_3 + \bigl(-\frac{18}{5}\bigr)R_2$:
$\left[\begin{array}{ccc|ccc} 1 & 0 & \frac25 & \frac15 & \frac15 & 0 \\ 0 & 1 & 1 & 1 & 0 & 0 \\ 0 & 0 & -\frac65 & -\frac{18}{5} & -\frac35 & 1 \end{array}\right].$
Then $R_3^* = -\frac56 R_3$, followed by $R_1^* = R_1 + \bigl(-\frac25\bigr)R_3$ and $R_2^* = R_2 + (-1)R_3$:
$\left[\begin{array}{ccc|ccc} 1 & 0 & 0 & -1 & 0 & \frac13 \\ 0 & 1 & 0 & -2 & -\frac12 & \frac56 \\ 0 & 0 & 1 & 3 & \frac12 & -\frac56 \end{array}\right].$
Hence,
$\mathbf{A}^{-1} = \begin{bmatrix} -1 & 0 & \frac13 \\ -2 & -\frac12 & \frac56 \\ 3 & \frac12 & -\frac56 \end{bmatrix}.$

8. Interchanging the first and third rows, we get
$[\mathbf{A} \mid \mathbf{I}] = \left[\begin{array}{ccc|ccc} 0 & 0 & 1 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 & 1 & 0 \\ 1 & 0 & 0 & 0 & 0 & 1 \end{array}\right] \longrightarrow \left[\begin{array}{ccc|ccc} 1 & 0 & 0 & 0 & 0 & 1 \\ 0 & 1 & 0 & 0 & 1 & 0 \\ 0 & 0 & 1 & 1 & 0 & 0 \end{array}\right] = [\mathbf{I} \mid \mathbf{A}^{-1}],$
so $\mathbf{A}^{-1} = \begin{bmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{bmatrix} = \mathbf{A}$.
9. Dividing the first row by $k$ gives
$[\mathbf{A} \mid \mathbf{I}] = \left[\begin{array}{ccc|ccc} k & 0 & 0 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 & 1 & 0 \\ 0 & 0 & 1 & 0 & 0 & 1 \end{array}\right] \longrightarrow \left[\begin{array}{ccc|ccc} 1 & 0 & 0 & \frac1k & 0 & 0 \\ 0 & 1 & 0 & 0 & 1 & 0 \\ 0 & 0 & 1 & 0 & 0 & 1 \end{array}\right] = [\mathbf{I} \mid \mathbf{A}^{-1}].$
Hence $\mathbf{A}^{-1} = \begin{bmatrix} \frac1k & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$.
10. Reducing
$[\mathbf{A} \mid \mathbf{I}] = \left[\begin{array}{ccc|ccc} 1 & 0 & 1 & 1 & 0 & 0 \\ 1 & 1 & 0 & 0 & 1 & 0 \\ 0 & 2 & -1 & 0 & 0 & 1 \end{array}\right]$
by $R_2^* = R_2 - R_1$, then $R_3^* = R_3 - 2R_2$, then $R_2^* = R_2 + R_3$ and $R_1^* = R_1 - R_3$ yields
$\left[\begin{array}{ccc|ccc} 1 & 0 & 0 & -1 & 2 & -1 \\ 0 & 1 & 0 & 1 & -1 & 1 \\ 0 & 0 & 1 & 2 & -2 & 1 \end{array}\right].$
Hence $\mathbf{A}^{-1} = \begin{bmatrix} -1 & 2 & -1 \\ 1 & -1 & 1 \\ 2 & -2 & 1 \end{bmatrix}$.

11. Dividing the second row by $k$ reduces $[\mathbf{A} \mid \mathbf{I}]$ to $[\mathbf{I} \mid \mathbf{A}^{-1}]$:
$\left[\begin{array}{cccc|cccc} 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & k & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 \end{array}\right] \longrightarrow \left[\begin{array}{cccc|cccc} 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & \frac1k & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 \end{array}\right].$
Hence $\mathbf{A}^{-1} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & \frac1k & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$.

12. Row reduction of
$[\mathbf{A} \mid \mathbf{I}] = \left[\begin{array}{cccc|cccc} 1 & 0 & 1 & 1 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 \\ 1 & 1 & 1 & 0 & 0 & 0 & 1 & 0 \\ 1 & 0 & 0 & 2 & 0 & 0 & 0 & 1 \end{array}\right]$
to RREF yields
$\left[\begin{array}{cccc|cccc} 1 & 0 & 0 & 0 & 2 & -2 & 0 & -1 \\ 0 & 1 & 0 & 0 & -2 & 1 & 1 & 1 \\ 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & -1 & 1 & 0 & 1 \end{array}\right],$
so
$\mathbf{A}^{-1} = \begin{bmatrix} 2 & -2 & 0 & -1 \\ -2 & 1 & 1 & 1 \\ 0 & 1 & 0 & 0 \\ -1 & 1 & 0 & 1 \end{bmatrix}.$

13. Starting with the augmented matrix
$[\mathbf{A} \mid \mathbf{I}] = \left[\begin{array}{cccc|cccc} 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 1 & 2 & 0 & 0 & 0 & 1 & 0 \\ 1 & 1 & 3 & 3 & 0 & 0 & 0 & 1 \end{array}\right]$
and reducing to RREF (eliminate below the diagonal, scale rows 3 and 4, then eliminate above the diagonal) gives
$\left[\begin{array}{cccc|cccc} 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & -\frac12 & \frac12 & 0 \\ 0 & 0 & 0 & 1 & -\frac13 & \frac16 & -\frac12 & \frac13 \end{array}\right].$
Hence
$\mathbf{A}^{-1} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & -\frac12 & \frac12 & 0 \\ -\frac13 & \frac16 & -\frac12 & \frac13 \end{bmatrix}.$

14. Starting with
$[\mathbf{A} \mid \mathbf{I}] = \left[\begin{array}{cccc|cccc} 0 & 1 & 2 & 1 & 1 & 0 & 0 & 0 \\ 4 & 0 & 1 & 2 & 0 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 2 & 0 & 1 & 0 & 0 & 0 & 1 \end{array}\right],$
interchange $R_1 \leftrightarrow R_2$, then $R_2 \leftrightarrow R_3$; apply $R_1^* = \frac14 R_1$, $R_3^* = R_3 - R_2$, $R_4^* = R_4 - 2R_2$ to obtain
$\left[\begin{array}{cccc|cccc} 1 & 0 & \frac14 & \frac12 & 0 & \frac14 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 2 & 1 & 1 & 0 & -1 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & -2 & 1 \end{array}\right].$
Then $R_3^* = \frac12 R_3$, followed by $R_1^* = R_1 - \frac12 R_4$, $R_3^* = R_3 - \frac12 R_4$, and finally $R_1^* = R_1 - \frac14 R_3$:
$\left[\begin{array}{cccc|cccc} 1 & 0 & 0 & 0 & -\frac18 & \frac14 & \frac78 & -\frac38 \\ 0 & 1 & 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 1 & 0 & \frac12 & 0 & \frac12 & -\frac12 \\ 0 & 0 & 0 & 1 & 0 & 0 & -2 & 1 \end{array}\right].$
Hence
$\mathbf{A}^{-1} = \begin{bmatrix} -\frac18 & \frac14 & \frac78 & -\frac38 \\ 0 & 0 & 1 & 0 \\ \frac12 & 0 & \frac12 & -\frac12 \\ 0 & 0 & -2 & 1 \end{bmatrix}.$
Inverse of the 2×2 Matrix
15. Verify $\mathbf{A}^{-1}\mathbf{A} = \mathbf{I} = \mathbf{A}\mathbf{A}^{-1}$. We have
$\mathbf{A}^{-1}\mathbf{A} = \frac{1}{ad-bc}\begin{bmatrix} d & -b \\ -c & a \end{bmatrix}\begin{bmatrix} a & b \\ c & d \end{bmatrix} = \frac{1}{ad-bc}\begin{bmatrix} ad-bc & 0 \\ 0 & ad-bc \end{bmatrix} = \mathbf{I},$
$\mathbf{A}\mathbf{A}^{-1} = \frac{1}{ad-bc}\begin{bmatrix} a & b \\ c & d \end{bmatrix}\begin{bmatrix} d & -b \\ -c & a \end{bmatrix} = \frac{1}{ad-bc}\begin{bmatrix} ad-bc & 0 \\ 0 & ad-bc \end{bmatrix} = \mathbf{I}.$
Note that we must have $|\mathbf{A}| = ad - bc \neq 0$.
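The adjugate formula just verified is easy to code; a minimal sketch (the function name is ours), with the $ad - bc \neq 0$ requirement enforced explicitly:

```python
from fractions import Fraction

def inv2(a, b, c, d):
    """Inverse of [[a, b], [c, d]] via A^-1 = (1/(ad-bc)) [[d, -b], [-c, a]]."""
    det = a * d - b * c
    if det == 0:
        raise ValueError("ad - bc = 0: matrix is not invertible")
    return [[Fraction(d, det), Fraction(-b, det)],
            [Fraction(-c, det), Fraction(a, det)]]
```

Applied to the matrix of Problem 6, `inv2(1, 3, 2, 5)` gives $\begin{bmatrix} -5 & 3 \\ 2 & -1 \end{bmatrix}$, matching the row-reduction answer.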

Brute Force
16. To find the inverse of
$\begin{bmatrix} 1 & 3 \\ 1 & 2 \end{bmatrix},$
we seek the matrix $\begin{bmatrix} a & b \\ c & d \end{bmatrix}$ that satisfies
$\begin{bmatrix} a & b \\ c & d \end{bmatrix}\begin{bmatrix} 1 & 3 \\ 1 & 2 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}.$
Multiplying this out we get the equations
$a + b = 1, \quad 3a + 2b = 0, \quad c + d = 0, \quad 3c + 2d = 1.$
The top two equations involve $a$ and $b$, and the bottom two involve $c$ and $d$, so we write the two systems
$\begin{bmatrix} 1 & 1 \\ 3 & 2 \end{bmatrix}\begin{bmatrix} a \\ b \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \qquad \begin{bmatrix} 1 & 1 \\ 3 & 2 \end{bmatrix}\begin{bmatrix} c \\ d \end{bmatrix} = \begin{bmatrix} 0 \\ 1 \end{bmatrix}.$
Solving each system, we get
$\begin{bmatrix} a \\ b \end{bmatrix} = \begin{bmatrix} 1 & 1 \\ 3 & 2 \end{bmatrix}^{-1}\begin{bmatrix} 1 \\ 0 \end{bmatrix} = \frac{1}{-1}\begin{bmatrix} 2 & -1 \\ -3 & 1 \end{bmatrix}\begin{bmatrix} 1 \\ 0 \end{bmatrix} = \begin{bmatrix} -2 \\ 3 \end{bmatrix},$
$\begin{bmatrix} c \\ d \end{bmatrix} = \frac{1}{-1}\begin{bmatrix} 2 & -1 \\ -3 & 1 \end{bmatrix}\begin{bmatrix} 0 \\ 1 \end{bmatrix} = \begin{bmatrix} 1 \\ -1 \end{bmatrix}.$
Because $a$ and $b$ are the elements in the first row of $\mathbf{A}^{-1}$, and $c$ and $d$ are the elements in the second row, we have
$\mathbf{A}^{-1} = \begin{bmatrix} 1 & 3 \\ 1 & 2 \end{bmatrix}^{-1} = \begin{bmatrix} -2 & 3 \\ 1 & -1 \end{bmatrix}.$
Finding Counterexamples
17. No. Consider
$\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} + \begin{bmatrix} -1 & 0 \\ 0 & -1 \end{bmatrix} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix},$
which is not invertible.

18. No. Consider
$\begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix}\begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix}.$

Unique Inverse
19. We show that if $\mathbf{B}$ and $\mathbf{C}$ are both inverses of $\mathbf{A}$, then $\mathbf{B} = \mathbf{C}$. Because $\mathbf{B}$ is an inverse of $\mathbf{A}$, we can write $\mathbf{BA} = \mathbf{I}$. If we now multiply both sides on the right by $\mathbf{C}$, we get $(\mathbf{BA})\mathbf{C} = \mathbf{IC} = \mathbf{C}$. But we also have $(\mathbf{BA})\mathbf{C} = \mathbf{B}(\mathbf{AC}) = \mathbf{BI} = \mathbf{B}$, so $\mathbf{B} = \mathbf{C}$.

Invertible Matrix Method
20. Using the inverse found in Problem 6 yields
$\vec{x} = \mathbf{A}^{-1}\vec{b} = \begin{bmatrix} -5 & 3 \\ 2 & -1 \end{bmatrix}\begin{bmatrix} -4 \\ 10 \end{bmatrix} = \begin{bmatrix} 50 \\ -18 \end{bmatrix}.$

Solution by Invertible Matrix
21. Using the inverse found in Problem 7 yields
$\vec{x} = \mathbf{A}^{-1}\vec{b} = \begin{bmatrix} -1 & 0 & \frac13 \\ -2 & -\frac12 & \frac56 \\ 3 & \frac12 & -\frac56 \end{bmatrix}\vec{b},$
and multiplying the given right-hand side by $\mathbf{A}^{-1}$ produces the solution vector $(x, y, z)$ directly.

More Solutions by Invertible Matrices
22. $\mathbf{A} = \begin{bmatrix} 1 & 1 & -1 \\ -1 & 1 & 0 \\ 1 & -2 & 1 \end{bmatrix}$. Use row reduction to obtain
$\mathbf{A}^{-1} = \begin{bmatrix} 1 & 1 & 1 \\ 1 & 2 & 1 \\ 1 & 3 & 2 \end{bmatrix},$
so
$\vec{x} = \mathbf{A}^{-1}\begin{bmatrix} 4 \\ -1 \\ 0 \end{bmatrix} = \begin{bmatrix} 3 \\ 2 \\ 1 \end{bmatrix}.$

23. $\mathbf{A} = \begin{bmatrix} 4 & 3 & 2 \\ 5 & 6 & 0 \\ 3 & 5 & -2 \end{bmatrix}$. Use row reduction to obtain
$\mathbf{A}^{-1} = \begin{bmatrix} 3 & -4 & 3 \\ -\frac52 & \frac72 & -\frac52 \\ -\frac74 & \frac{11}{4} & -\frac94 \end{bmatrix},$
so
$\vec{x} = \mathbf{A}^{-1}\begin{bmatrix} 0 \\ 10 \\ 2 \end{bmatrix} = \begin{bmatrix} 0 - 40 + 6 \\ 0 + 35 - 5 \\ 0 + \frac{110}{4} - \frac{18}{4} \end{bmatrix} = \begin{bmatrix} -34 \\ 30 \\ 23 \end{bmatrix}.$
Noninvertible 2×2 Matrices
24. If we reduce $\mathbf{A} = \begin{bmatrix} a & b \\ c & d \end{bmatrix}$ to RREF (assuming $a \neq 0$), we get
$\begin{bmatrix} 1 & \frac{b}{a} \\[2pt] 0 & \frac{ad - bc}{a} \end{bmatrix},$
which says that the matrix is invertible when $\dfrac{ad - bc}{a} \neq 0$, or equivalently when $ad \neq bc$.

Matrix Algebra with Inverses
25. $(\mathbf{A}\mathbf{B})^{-1} = \mathbf{B}^{-1}\mathbf{A}^{-1}$.

26. $(\mathbf{B}^2\mathbf{A})^{-1} = \mathbf{A}^{-1}(\mathbf{B}\mathbf{B})^{-1} = \mathbf{A}^{-1}\mathbf{B}^{-1}\mathbf{B}^{-1} = \mathbf{A}^{-1}\mathbf{B}^{-2}$, where $\mathbf{B}^{-2}$ means $\mathbf{B}^{-1}\mathbf{B}^{-1}$.

27. Suppose $\mathbf{A}(\mathbf{B}\mathbf{A})^{-1}\vec{x} = \vec{b}$. Then
$\vec{x} = \bigl[\mathbf{A}(\mathbf{B}\mathbf{A})^{-1}\bigr]^{-1}\vec{b} = (\mathbf{B}\mathbf{A})\mathbf{A}^{-1}\vec{b} = \mathbf{B}(\mathbf{A}\mathbf{A}^{-1})\vec{b} = \mathbf{B}\vec{b}.$

28. $\bigl[\mathbf{A}^{-1}(\mathbf{B}\mathbf{A})^{-1}\bigr]^{-1} = \bigl[(\mathbf{B}\mathbf{A})^{-1}\bigr]^{-1}\bigl(\mathbf{A}^{-1}\bigr)^{-1} = (\mathbf{B}\mathbf{A})\mathbf{A} = \mathbf{B}\mathbf{A}^2$.

Question of Invertibility
29. To solve $(\mathbf{A} + \mathbf{B})\vec{x} = \vec{b}$ requires that $\mathbf{A} + \mathbf{B}$ be invertible. Then
$(\mathbf{A} + \mathbf{B})^{-1}(\mathbf{A} + \mathbf{B})\vec{x} = (\mathbf{A} + \mathbf{B})^{-1}\vec{b},$
so that $\vec{x} = (\mathbf{A} + \mathbf{B})^{-1}\vec{b}$.

Cancellation Works
30. Given that $\mathbf{AB} = \mathbf{AC}$ and $\mathbf{A}$ is invertible, we premultiply by $\mathbf{A}^{-1}$, getting
$\mathbf{A}^{-1}\mathbf{AB} = \mathbf{A}^{-1}\mathbf{AC}, \quad \mathbf{IB} = \mathbf{IC}, \quad \mathbf{B} = \mathbf{C}.$

An Inverse
31. If $\mathbf{A}$ is an invertible matrix and $\mathbf{AB} = \mathbf{I}$, then we can premultiply each side of the equation by $\mathbf{A}^{-1}$, getting
$\mathbf{A}^{-1}(\mathbf{AB}) = \mathbf{A}^{-1}\mathbf{I}, \quad (\mathbf{A}^{-1}\mathbf{A})\mathbf{B} = \mathbf{A}^{-1}, \quad \mathbf{B} = \mathbf{A}^{-1}.$
Making Invertible Matrices
32. $\begin{vmatrix} 1 & 0 & k \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{vmatrix} = 1 \neq 0$, so $k$ may be any number.

33. $\begin{vmatrix} 1 & 0 & k \\ 0 & 1 & 0 \\ k & 0 & 1 \end{vmatrix} = 1 - k^2$, so the matrix is invertible provided $k \neq \pm 1$.

Products and Noninvertibility
34. (a) Let $\mathbf{A}$ and $\mathbf{B}$ be $n \times n$ matrices such that $\mathbf{BA} = \mathbf{I}_n$.
First we show that $\mathbf{A}^{-1}$ exists by showing $\mathbf{A}\vec{x} = \vec{0}$ has only the solution $\vec{x} = \vec{0}$:
Suppose $\mathbf{A}\vec{x} = \vec{0}$. Then $\mathbf{BA}\vec{x} = \mathbf{B}\vec{0} = \vec{0}$, so $\mathbf{I}_n\vec{x} = \vec{0}$ and $\vec{x} = \vec{0}$. Hence $\mathbf{A}^{-1}$ exists.
Then from $\mathbf{BA} = \mathbf{I}_n$, postmultiplying by $\mathbf{A}^{-1}$ gives $\mathbf{BAA}^{-1} = \mathbf{I}_n\mathbf{A}^{-1}$, so $\mathbf{B} = \mathbf{A}^{-1}$, and therefore $\mathbf{AB} = \mathbf{I}_n$.

(b) Let $\mathbf{A}$, $\mathbf{B}$ be $n \times n$ matrices such that $\mathbf{AB}$ is invertible. We show that $\mathbf{A}$ must be invertible.
$\mathbf{AB}$ invertible means $\mathbf{AB}(\mathbf{AB})^{-1} = \mathbf{I}_n$, so that $\mathbf{A}\bigl(\mathbf{B}(\mathbf{AB})^{-1}\bigr) = \mathbf{I}_n$.
By Problem 34(a), $\bigl(\mathbf{B}(\mathbf{AB})^{-1}\bigr)\mathbf{A} = \mathbf{I}_n$ as well, so $\mathbf{A}$ is invertible.
Invertibility of Diagonal Matrices
35. Proof for (⇒) (contrapositive): Suppose $\mathbf{D}$ is a diagonal matrix with one diagonal element equal to zero, say $a_{ii} = 0$. Then $\mathbf{D}$ has a row of zeros, and consequently RREF($\mathbf{D}$) has at least one row of zeros. Therefore, $\mathbf{D}$ is not invertible.

Proof for (⇐): Let $\mathbf{D}$ be a diagonal matrix with every $a_{ii} \neq 0$. Then the diagonal matrix $\mathbf{B} = [b_{ii}]$ with $b_{ii} = \dfrac{1}{a_{ii}}$ is $\mathbf{D}^{-1}$. That is,
$\operatorname{diag}(a_{11}, \dots, a_{nn}) \cdot \operatorname{diag}\!\left(\frac{1}{a_{11}}, \dots, \frac{1}{a_{nn}}\right) = \mathbf{I}_n.$

Invertibility of Triangular Matrices
36. Proof for (⇒) (contrapositive): Let $\mathbf{T}$ be an upper triangular matrix with at least one diagonal element equal to zero, say $a_{jj} = 0$. Then there is one column without a pivot, so RREF($\mathbf{T}$) has a zero row. Consequently $\mathbf{T}$ is not invertible.

Proof for (⇐): Let $\mathbf{T}$ be an upper triangular $n \times n$ matrix with all diagonal elements nonzero. Then every column is a pivot column, so RREF($\mathbf{T}$) $= \mathbf{I}_n$. Therefore $\mathbf{T}$ is invertible.

Inconsistency
37. If $\mathbf{A}\vec{x} = \vec{b}$ is inconsistent for some vector $\vec{b}$, then $\mathbf{A}^{-1}$ does not exist, because if $\mathbf{A}^{-1}$ did exist, then $\vec{x} = \mathbf{A}^{-1}\vec{b}$ would be a solution for every $\vec{b}$, which would be a contradiction.

Inverse of an Inverse
38. To prove: If $\mathbf{A}$ is invertible, so is $\mathbf{A}^{-1}$.
Proof: Let $\mathbf{A}$ be an invertible $n \times n$ matrix; then $\mathbf{A}^{-1}$ exists, so that
$\mathbf{A}\mathbf{A}^{-1} = \mathbf{I}_n \quad \text{and} \quad \mathbf{A}^{-1}\mathbf{A} = \mathbf{I}_n.$
These same equations say that $\mathbf{A}$ is an inverse of $\mathbf{A}^{-1}$, so $\mathbf{A} = (\mathbf{A}^{-1})^{-1}$ by the definition of inverse and the fact that inverses are unique (Problem 19).

Inverse of a Transpose
39. To prove: If $\mathbf{A}$ is invertible, so is $\mathbf{A}^{\mathrm{T}}$, and $(\mathbf{A}^{\mathrm{T}})^{-1} = (\mathbf{A}^{-1})^{\mathrm{T}}$.
Proof: Let $\mathbf{A}$ be an invertible $n \times n$ matrix. Then
$(\mathbf{A}^{\mathrm{T}})(\mathbf{A}^{-1})^{\mathrm{T}} = (\mathbf{A}^{-1}\mathbf{A})^{\mathrm{T}} = \mathbf{I}_n^{\mathrm{T}} = \mathbf{I}_n,$
because $(\mathbf{A}^{\mathrm{T}})^{\mathrm{T}} = \mathbf{A}$ and $(\mathbf{AB})^{\mathrm{T}} = \mathbf{B}^{\mathrm{T}}\mathbf{A}^{\mathrm{T}}$. Likewise
$(\mathbf{A}^{-1})^{\mathrm{T}}\mathbf{A}^{\mathrm{T}} = (\mathbf{A}\mathbf{A}^{-1})^{\mathrm{T}} = \mathbf{I}_n^{\mathrm{T}} = \mathbf{I}_n.$
Therefore $(\mathbf{A}^{\mathrm{T}})^{-1} = (\mathbf{A}^{-1})^{\mathrm{T}}$.

Elementary Matrices
40. (a) $\mathbf{E}_{\text{int}} = \begin{bmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix}$
(b) $\mathbf{E}_{\text{repl}} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & k & 1 \end{bmatrix}$
(c) $\mathbf{E}_{\text{scale}} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & k & 0 \\ 0 & 0 & 1 \end{bmatrix}$
Invertibility of Elementary Matrices
41. Because the inverse of any elementary row operation is itself an elementary row operation, and because elementary matrices are constructed by applying elementary row operations to the identity matrix, any elementary matrix can be converted back to the identity matrix by an elementary row operation. For example, the inverse of $\mathbf{E}_{\text{int}}$ can be found by performing the operation $R_1 \leftrightarrow R_2$ on the augmented matrix
$[\mathbf{E}_{\text{int}} \mid \mathbf{I}] = \left[\begin{array}{ccc|ccc} 0 & 1 & 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 1 & 0 & 0 & 1 \end{array}\right] \longrightarrow \left[\begin{array}{ccc|ccc} 1 & 0 & 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 1 \end{array}\right].$
Hence
$\mathbf{E}_{\text{int}}^{-1} = \begin{bmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix};$
in other words, $\mathbf{E}_{\text{int}}^{-1} = \mathbf{E}_{\text{int}}$. We leave finding $\mathbf{E}_{\text{repl}}^{-1}$ and $\mathbf{E}_{\text{scale}}^{-1}$ for the reader.

Similar Matrices
42. Pick $\mathbf{P}$ as the identity matrix: then $\mathbf{P}^{-1}\mathbf{A}\mathbf{P} = \mathbf{A}$, so $\mathbf{A} \sim \mathbf{A}$.

43. If $\mathbf{B} \sim \mathbf{A}$, then there exists a nonsingular matrix $\mathbf{P}$ such that $\mathbf{B} = \mathbf{P}^{-1}\mathbf{A}\mathbf{P}$. Premultiplying by $\mathbf{P}$ and postmultiplying by $\mathbf{P}^{-1}$ gives
$\mathbf{A} = \mathbf{P}\mathbf{B}\mathbf{P}^{-1} = (\mathbf{P}^{-1})^{-1}\mathbf{B}(\mathbf{P}^{-1}),$
which shows that $\mathbf{A}$ is similar to $\mathbf{B}$.

44. Suppose $\mathbf{C} \sim \mathbf{A}$ and $\mathbf{C} \sim \mathbf{B}$. Then there exist invertible matrices $\mathbf{P}_A$ and $\mathbf{P}_B$ with
$\mathbf{C} = \mathbf{P}_A^{-1}\mathbf{A}\mathbf{P}_A = \mathbf{P}_B^{-1}\mathbf{B}\mathbf{P}_B,$
so
$\mathbf{A} = (\mathbf{P}_A\mathbf{P}_B^{-1})\mathbf{B}(\mathbf{P}_B\mathbf{P}_A^{-1}) = (\mathbf{P}_B\mathbf{P}_A^{-1})^{-1}\mathbf{B}(\mathbf{P}_B\mathbf{P}_A^{-1}).$
Let $\mathbf{Q} = \mathbf{P}_B\mathbf{P}_A^{-1}$. Therefore $\mathbf{A} = \mathbf{Q}^{-1}\mathbf{B}\mathbf{Q}$, so $\mathbf{A} \sim \mathbf{B}$.

45. Informal Discussion:
$\mathbf{B}^n = (\mathbf{P}^{-1}\mathbf{A}\mathbf{P})(\mathbf{P}^{-1}\mathbf{A}\mathbf{P})\cdots(\mathbf{P}^{-1}\mathbf{A}\mathbf{P}) \quad (n \text{ factors}).$
By generous application of the associative property of matrix multiplication we obtain
$\mathbf{B}^n = \mathbf{P}^{-1}\mathbf{A}(\mathbf{P}\mathbf{P}^{-1})\mathbf{A}(\mathbf{P}\mathbf{P}^{-1})\cdots(\mathbf{P}\mathbf{P}^{-1})\mathbf{A}\mathbf{P} = \mathbf{P}^{-1}\mathbf{A}^n\mathbf{P},$
by the facts that $\mathbf{P}\mathbf{P}^{-1} = \mathbf{I}$ and $\mathbf{A}\mathbf{I} = \mathbf{A}$.

Induction Proof:
To prove: $\mathbf{B}^n = \mathbf{P}^{-1}\mathbf{A}^n\mathbf{P}$ for all positive integers $n$.
Pf: 1) $\mathbf{B}^1 = \mathbf{P}^{-1}\mathbf{A}\mathbf{P}$ by definition of $\mathbf{B}$.
2) Assume for some $k$: $\mathbf{B}^k = \mathbf{P}^{-1}\mathbf{A}^k\mathbf{P}$. Now for $k + 1$:
$\mathbf{B}^{k+1} = \mathbf{B}\mathbf{B}^k = (\mathbf{P}^{-1}\mathbf{A}\mathbf{P})(\mathbf{P}^{-1}\mathbf{A}^k\mathbf{P}) = (\mathbf{P}^{-1}\mathbf{A})(\mathbf{P}\mathbf{P}^{-1})(\mathbf{A}^k\mathbf{P}) = (\mathbf{P}^{-1}\mathbf{A})\mathbf{I}(\mathbf{A}^k\mathbf{P}) = \mathbf{P}^{-1}\mathbf{A}\mathbf{A}^k\mathbf{P} = \mathbf{P}^{-1}\mathbf{A}^{k+1}\mathbf{P}.$
So the case for $k$ implies the case for $k + 1$. By mathematical induction, $\mathbf{B}^n = \mathbf{P}^{-1}\mathbf{A}^n\mathbf{P}$ for all $n$.
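The identity $\mathbf{B}^n = \mathbf{P}^{-1}\mathbf{A}^n\mathbf{P}$ just proved can be spot-checked numerically; a minimal sketch with helper names of our own choosing:

```python
def matmul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def matpow(A, n):
    """A^n by repeated multiplication (n >= 1)."""
    R = A
    for _ in range(n - 1):
        R = matmul(R, A)
    return R

# A sample similarity pair: P is invertible with integer inverse Pinv.
A = [[2, 1], [0, 3]]
P = [[1, 1], [0, 1]]
Pinv = [[1, -1], [0, 1]]
B = matmul(matmul(Pinv, A), P)
# B^4 agrees with P^-1 A^4 P, as the induction proof guarantees.
assert matpow(B, 4) == matmul(matmul(Pinv, matpow(A, 4)), P)
```

Since all the entries here are integers, the comparison is exact rather than approximate.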

46. True/False Questions
a) True. If all diagonal elements are nonzero, then every column has a pivot and the matrix is invertible. If a diagonal element is zero, then the corresponding column is not a pivot column, so the matrix is not invertible.

b) True. Same argument as (a).

c) False. Consider this example:
$\mathbf{A} = \begin{bmatrix} 1 & 0 \\ 0 & 2 \end{bmatrix}, \quad \mathbf{B} = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}, \quad \mathbf{A}^{-1} = \begin{bmatrix} 1 & 0 \\ 0 & \frac12 \end{bmatrix}.$
Then
$\mathbf{B}\mathbf{A}^{-1} = \begin{bmatrix} 0 & \frac12 \\ 0 & 0 \end{bmatrix} \neq \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} = \mathbf{A}^{-1}\mathbf{B},$
so the claimed identity fails for this pair.

Leontief Model
47. $\mathbf{T} = \begin{bmatrix} 0.5 & 0 \\ 0 & 0.5 \end{bmatrix}$, $\vec{d} = \begin{bmatrix} 10 \\ 10 \end{bmatrix}$.
The basic equation is
Total Output = External Demand + Internal Demand,
so we have
$\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 10 \\ 10 \end{bmatrix} + \begin{bmatrix} 0.5 & 0 \\ 0 & 0.5 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}.$
Solving these equations yields $x_1 = x_2 = 20$. This should be obvious, because for every 20 units of product each industry produces, 10 go back into the industry to produce the other 10.
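The Leontief balance $\vec{x} = \mathbf{T}\vec{x} + \vec{d}$ is just the 2×2 linear system $(\mathbf{I} - \mathbf{T})\vec{x} = \vec{d}$; a minimal sketch (the function name is ours) that solves it by the 2×2 Cramer formulas:

```python
def solve_leontief(T, d):
    """Solve (I - T) x = d for a 2x2 technology matrix T.
    Assumes the 2x2 system has a unique solution (det != 0)."""
    a = 1 - T[0][0]; b = -T[0][1]
    c = -T[1][0];    e = 1 - T[1][1]
    det = a * e - b * c
    x1 = (d[0] * e - b * d[1]) / det
    x2 = (a * d[1] - c * d[0]) / det
    return x1, x2

# Problem 47: each industry needs half its own output, so total output is 20.
assert solve_leontief([[0.5, 0], [0, 0.5]], [10, 10]) == (20.0, 20.0)
```

The same call with the technology matrices of Problems 48–50 reproduces those answers (to rounding) as well.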

48. $\mathbf{T} = \begin{bmatrix} 0 & 0.1 \\ 0.2 & 0 \end{bmatrix}$, $\vec{d} = \begin{bmatrix} 10 \\ 10 \end{bmatrix}$.
The basic equation is
Total Output = External Demand + Internal Demand,
so we have
$\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 10 \\ 10 \end{bmatrix} + \begin{bmatrix} 0 & 0.1 \\ 0.2 & 0 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}.$
Solving these equations yields $x_1 \approx 11.2$, $x_2 \approx 12.2$.
49. $\mathbf{T} = \begin{bmatrix} 0.2 & 0.5 \\ 0.5 & 0.2 \end{bmatrix}$, $\vec{d} = \begin{bmatrix} 10 \\ 10 \end{bmatrix}$.
The basic equation is
Total Output = External Demand + Internal Demand,
so we have
$\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 10 \\ 10 \end{bmatrix} + \begin{bmatrix} 0.2 & 0.5 \\ 0.5 & 0.2 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}.$
Solving these equations yields $x_1 = 33\frac13$, $x_2 = 33\frac13$.

50. $\mathbf{T} = \begin{bmatrix} 0.5 & 0.2 \\ 0.1 & 0.3 \end{bmatrix}$, $\vec{d} = \begin{bmatrix} 50 \\ 50 \end{bmatrix}$.
The basic equation is
Total Output = External Demand + Internal Demand,
so we have
$\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 50 \\ 50 \end{bmatrix} + \begin{bmatrix} 0.5 & 0.2 \\ 0.1 & 0.3 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}.$
Solving these equations yields $x_1 \approx 136.4$, $x_2 \approx 90.9$.

How Much Is Left Over?
51. The basic demand equation is
Total Output = External Demand + Internal Demand,
so we have
$\begin{bmatrix} 150 \\ 250 \end{bmatrix} = \begin{bmatrix} d_1 \\ d_2 \end{bmatrix} + \begin{bmatrix} 0.3 & 0.4 \\ 0.5 & 0.3 \end{bmatrix}\begin{bmatrix} 150 \\ 250 \end{bmatrix}.$
Solving for $d_1$, $d_2$ yields $d_1 = 5$, $d_2 = 100$.

Israeli Economy
52. (a) $\mathbf{I} - \mathbf{T} = \begin{bmatrix} 0.70 & 0.00 & 0.00 \\ -0.10 & 0.80 & -0.20 \\ -0.05 & -0.01 & 0.98 \end{bmatrix}$

(b) $(\mathbf{I} - \mathbf{T})^{-1} \approx \begin{bmatrix} 1.43 & 0.00 & 0.00 \\ 0.20 & 1.25 & 0.26 \\ 0.07 & 0.01 & 1.02 \end{bmatrix}$

(c) $\vec{x} = (\mathbf{I} - \mathbf{T})^{-1}\vec{d} = \begin{bmatrix} 1.43 & 0.00 & 0.00 \\ 0.20 & 1.25 & 0.26 \\ 0.07 & 0.01 & 1.02 \end{bmatrix}\begin{bmatrix} 140{,}000 \\ 20{,}000 \\ 2{,}000 \end{bmatrix} = \begin{bmatrix} \$200{,}200 \\ \$53{,}520 \\ \$12{,}040 \end{bmatrix}$

Suggested Journal Entry
53. Student Project

3.4 Determinants and Cramer's Rule

Calculating Determinants
1. Expanding by cofactors down the first column, we get
$\begin{vmatrix} 0 & 7 & 9 \\ 2 & 1 & 1 \\ 5 & 6 & 2 \end{vmatrix} = 0 - 2\begin{vmatrix} 7 & 9 \\ 6 & 2 \end{vmatrix} + 5\begin{vmatrix} 7 & 9 \\ 1 & 1 \end{vmatrix} = -2(-40) + 5(-2) = 70.$

2. Expanding by cofactors across the middle row, we get
$\begin{vmatrix} 1 & 2 & 3 \\ 0 & 1 & 0 \\ -1 & 0 & 3 \end{vmatrix} = 1\begin{vmatrix} 1 & 3 \\ -1 & 3 \end{vmatrix} = 3 + 3 = 6.$

3. Expanding by cofactors down the third column, whose only nonzero entries lie in rows 2 and 3, each of the two signed cofactor terms evaluates to 6, so the determinant is $6 + 6 = 12$.

4. Expanding by cofactors across the third row, whose only nonzero entries are the 3 in column 1 and the 8 in column 3, gives
$3(14) + 8(250) = 42 + 2000 = 2042.$
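The cofactor expansion used in Problems 1–4 can be written recursively; a minimal sketch (the function name is ours) that always expands down the first column:

```python
def det(M):
    """Determinant by cofactor expansion down the first column."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for i in range(n):
        if M[i][0] == 0:
            continue  # zero entries contribute nothing to the expansion
        minor = [row[1:] for r, row in enumerate(M) if r != i]
        total += (-1) ** i * M[i][0] * det(minor)
    return total
```

For example, `det([[1, 2, 3], [0, 1, 0], [-1, 0, 3]])` returns 6, agreeing with Problem 2 (and with the basketweave check in Problem 13 below). Skipping zero entries is the code analogue of choosing a row or column with many zeros before expanding.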

5. By row reduction, we can write
$\begin{vmatrix} 1 & 1 & 1 \\ 2 & 2 & 2 \\ 3 & 3 & 3 \end{vmatrix} = (2)(3)\begin{vmatrix} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \end{vmatrix} = 0,$
because the rows are equal.

6. Expanding by cofactors reduces the determinant to the single 2×2 determinant
$\begin{vmatrix} 0 & 2 \\ 3 & 1 \end{vmatrix} = 0 - 6 = -6.$

7. Using row operations to introduce zeros into the first column, then expanding by cofactors down that column and evaluating the resulting 2×2 determinants, gives the value 24.

Find the Properties
8. Subtract the first row from the second row in the matrix in the first determinant to get the matrix
in the second determinant.

9. Factor out 3 from the second row of the matrix in the first determinant to get the matrix in the
second determinant.

10. Interchange the two rows of the matrix.

Basketweave for 3×3
11. Direct computation as in Problems 1–4.

12. $\begin{vmatrix} 0 & 7 & 9 \\ 2 & 1 & 1 \\ 5 & 6 & 2 \end{vmatrix} = 0 + 35 + 108 - 45 - 0 - 28 = 70$

13. $\begin{vmatrix} 1 & 2 & 3 \\ 0 & 1 & 0 \\ -1 & 0 & 3 \end{vmatrix} = 3 + 0 + 0 + 3 - 0 - 0 = 6$

14. By an extended basketweave hypothesis,
$\begin{vmatrix} 0 & 1 & 1 & 0 \\ 1 & 1 & 0 & 1 \\ 0 & 0 & 0 & 1 \\ 0 & 1 & 1 & 0 \end{vmatrix} = 0 + 0 + 0 + 0 + 0 + 0 - 1 - 0 = -1?$
However, the determinant is clearly 0 (because row 1 equals row 4), so the basketweave method does not generalize to dimensions higher than 3.

Triangular Determinants
15. We verify this for 4×4 matrices; higher-order matrices follow along the same lines. Given the upper-triangular matrix
$\mathbf{A} = \begin{bmatrix} a_{11} & a_{12} & a_{13} & a_{14} \\ 0 & a_{22} & a_{23} & a_{24} \\ 0 & 0 & a_{33} & a_{34} \\ 0 & 0 & 0 & a_{44} \end{bmatrix},$
we expand down the first column, getting
$|\mathbf{A}| = a_{11}\begin{vmatrix} a_{22} & a_{23} & a_{24} \\ 0 & a_{33} & a_{34} \\ 0 & 0 & a_{44} \end{vmatrix} = a_{11}a_{22}\begin{vmatrix} a_{33} & a_{34} \\ 0 & a_{44} \end{vmatrix} = a_{11}a_{22}a_{33}a_{44}.$

Think Diagonal
16. The matrix is upper triangular, hence the determinant is the product of the diagonal elements:
$\begin{vmatrix} 3 & 4 & 0 \\ 0 & 7 & 6 \\ 0 & 0 & 5 \end{vmatrix} = (3)(7)(5) = 105.$

17. The matrix is a diagonal matrix, hence the determinant is the product of the diagonal elements:
$\begin{vmatrix} 4 & 0 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & \frac12 \end{vmatrix} = (4)(3)\left(\tfrac12\right) = 6.$

18. The matrix is lower triangular, hence the determinant is the product of the diagonal elements:
$\begin{vmatrix} 1 & 0 & 0 & 0 \\ 3 & 4 & 0 & 0 \\ 0 & 5 & 1 & 0 \\ 11 & 0 & 2 & 2 \end{vmatrix} = (1)(4)(1)(2) = 8.$

19. The matrix is upper triangular, hence the determinant is the product of the diagonal elements:
$\begin{vmatrix} 6 & 22 & 0 & 3 \\ 0 & 1 & 0 & 4 \\ 0 & 0 & 13 & 0 \\ 0 & 0 & 0 & 4 \end{vmatrix} = (6)(1)(13)(4) = 312.$

Invertibility
20. Not invertible if the determinant $k(4 - k^2) = 0$, that is, if $k = 0$ or $k = \pm 2$. Invertible if $k \neq 0$ and $k \neq \pm 2$.

21. Not invertible if
$\begin{vmatrix} 1 & k \\ k & k \end{vmatrix} = k - k^2 = k(1 - k) = 0.$
Invertible if $k \neq 0$ and $k \neq 1$.

22. Not invertible if
$\begin{vmatrix} 1 & 0 & m \\ 0 & 1 & 0 \\ k & 0 & 1 \end{vmatrix} = 1 - km = 0,$
i.e. $km = 1$, $k = \dfrac{1}{m}$. Invertible if $km \neq 1$.

Invertibility Test
23. The matrix does not have an inverse because its determinant is zero.

24. The matrix has an inverse because its determinant is nonzero.

25. The matrix has an inverse because its determinant is nonzero.

26. The matrix has an inverse because its determinant is nonzero.

Product Verification
27. $\mathbf{A} = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}$, $\mathbf{B} = \begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix}$, $\mathbf{AB} = \begin{bmatrix} 3 & 2 \\ 7 & 4 \end{bmatrix}$;
$|\mathbf{A}| = -2$, $|\mathbf{B}| = 1$, $|\mathbf{AB}| = 12 - 14 = -2$.
Hence $|\mathbf{AB}| = |\mathbf{A}||\mathbf{B}|$.

28. Computing the three determinants directly gives $|\mathbf{A}| = 2$, $|\mathbf{B}| = 7$, and $|\mathbf{AB}| = 14$, so again $|\mathbf{AB}| = |\mathbf{A}||\mathbf{B}|$.

Determinant of an Inverse
29. We have
$1 = |\mathbf{I}| = |\mathbf{A}\mathbf{A}^{-1}| = |\mathbf{A}||\mathbf{A}^{-1}|,$
and hence $|\mathbf{A}^{-1}| = \dfrac{1}{|\mathbf{A}|}$.

Do Determinants Commute?
30. $|\mathbf{AB}| = |\mathbf{A}||\mathbf{B}| = |\mathbf{B}||\mathbf{A}| = |\mathbf{BA}|$,
because $|\mathbf{A}||\mathbf{B}|$ is a product of real or complex numbers.

Determinant of Similar Matrices
31. The key to the proof lies in the determinant of a product of matrices. If $\mathbf{A} = \mathbf{P}^{-1}\mathbf{B}\mathbf{P}$, we use the general properties
$|\mathbf{A}^{-1}| = \frac{1}{|\mathbf{A}|}, \qquad |\mathbf{AB}| = |\mathbf{A}||\mathbf{B}|,$
established above, and write
$|\mathbf{A}| = |\mathbf{P}^{-1}\mathbf{B}\mathbf{P}| = |\mathbf{P}^{-1}||\mathbf{B}||\mathbf{P}| = \frac{1}{|\mathbf{P}|}|\mathbf{B}||\mathbf{P}| = |\mathbf{B}|.$

Determinant of $\mathbf{A}^n$
32. (a) If $\mathbf{A}^n = \mathbf{0}$ for some integer $n$, we have
$|\mathbf{A}|^n = |\mathbf{A}^n| = 0.$
Because $|\mathbf{A}|^n$ is a product of real or complex numbers, $|\mathbf{A}| = 0$. Hence, $\mathbf{A}$ is noninvertible.

(b) If $|\mathbf{A}^n| \neq 0$ for some integer $n$, then $|\mathbf{A}|^n = |\mathbf{A}^n| \neq 0$ for that $n$. This implies $|\mathbf{A}| \neq 0$, so $\mathbf{A}$ is invertible. In other words, for every matrix $\mathbf{A}$, either $|\mathbf{A}^n| = 0$ for all positive integers $n$ or it is never zero.

Determinants of Sums
33. An example is
$\mathbf{A} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \quad \mathbf{B} = \begin{bmatrix} -1 & 0 \\ 0 & -1 \end{bmatrix},$
so
$\mathbf{A} + \mathbf{B} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix},$
which has determinant $|\mathbf{A} + \mathbf{B}| = 0$, whereas $|\mathbf{A}| = |\mathbf{B}| = 1$, so $|\mathbf{A}| + |\mathbf{B}| = 2$. Hence,
$|\mathbf{A} + \mathbf{B}| \neq |\mathbf{A}| + |\mathbf{B}|.$

Determinants of Sums Again
34. Letting
$\mathbf{A} = \begin{bmatrix} 1 & 1 \\ 0 & 0 \end{bmatrix}, \quad \mathbf{B} = \begin{bmatrix} -1 & -1 \\ 0 & 0 \end{bmatrix},$
we get
$\mathbf{A} + \mathbf{B} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}.$
Thus $|\mathbf{A} + \mathbf{B}| = 0$. Also, we have $|\mathbf{A}| = 0$, $|\mathbf{B}| = 0$, so $|\mathbf{A}| + |\mathbf{B}| = 0$. Hence,
$|\mathbf{A} + \mathbf{B}| = |\mathbf{A}| + |\mathbf{B}|.$
Scalar Multiplication
35. For a 2×2 matrix, we see
$\begin{vmatrix} ka_{11} & ka_{12} \\ ka_{21} & ka_{22} \end{vmatrix} = k^2\begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix} = k^2\bigl(a_{11}a_{22} - a_{21}a_{12}\bigr).$
For an $n \times n$ matrix $\mathbf{A}$, we can factor a $k$ out of each row, getting $|k\mathbf{A}| = k^n|\mathbf{A}|$.

Inversion by Determinants
36. Given the matrix
$\mathbf{A} = \begin{bmatrix} 1 & 0 & 2 \\ 2 & 2 & 3 \\ 1 & 1 & 1 \end{bmatrix},$
the matrix of minors can easily be computed and is
$\mathbf{M} = \begin{bmatrix} -1 & -1 & 0 \\ -2 & -1 & 1 \\ -4 & -1 & 2 \end{bmatrix}.$
The matrix of cofactors $\mathbf{A}^{c}$, which we get by multiplying the minors by $(-1)^{i+j}$, is given by
$\mathbf{A}^{c} = \begin{bmatrix} -1 & 1 & 0 \\ 2 & -1 & -1 \\ -4 & 1 & 2 \end{bmatrix}.$
Taking the transpose of this matrix gives
$\bigl(\mathbf{A}^{c}\bigr)^{\mathrm{T}} = \begin{bmatrix} -1 & 2 & -4 \\ 1 & -1 & 1 \\ 0 & -1 & 2 \end{bmatrix}.$
Computing the determinant of $\mathbf{A}$, we get $|\mathbf{A}| = -1$. Hence, we have the inverse
$\mathbf{A}^{-1} = \frac{1}{|\mathbf{A}|}\bigl(\mathbf{A}^{c}\bigr)^{\mathrm{T}} = -\begin{bmatrix} -1 & 2 & -4 \\ 1 & -1 & 1 \\ 0 & -1 & 2 \end{bmatrix} = \begin{bmatrix} 1 & -2 & 4 \\ -1 & 1 & -1 \\ 0 & 1 & -2 \end{bmatrix}.$

Determinants of Elementary Matrices
37. (a) If we interchange the rows of the 2×2 identity matrix, we change the sign of the determinant because
$\begin{vmatrix} 1 & 0 \\ 0 & 1 \end{vmatrix} = 1, \qquad \begin{vmatrix} 0 & 1 \\ 1 & 0 \end{vmatrix} = -1.$
For a 3×3 matrix, if we interchange the first and second rows we get
$\begin{vmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{vmatrix} = -1.$
You can verify yourself that if any two rows of the 3×3 identity matrix are interchanged, the determinant is $-1$.
For a 4×4 matrix, suppose the $i$th and $j$th rows are interchanged and that we compute the determinant by expanding by minors across one of the rows that was not interchanged. (We can always do this.) The determinant is then
$|\mathbf{A}| = a_{11}|\mathbf{M}_{11}| - a_{12}|\mathbf{M}_{12}| + a_{13}|\mathbf{M}_{13}| - a_{14}|\mathbf{M}_{14}|.$
But the minors $\mathbf{M}_{11}$, $\mathbf{M}_{12}$, $\mathbf{M}_{13}$, $\mathbf{M}_{14}$ are 3×3 matrices, and each relevant determinant is $-1$ because each of these matrices is a 3×3 elementary matrix with two rows interchanged from the identity matrix. Hence, 4×4 matrices with two rows interchanged from the identity matrix have determinant $-1$. The idea is to proceed inductively from 4×4 matrices to 5×5 matrices, and so on.

(b) The matrix
$\begin{bmatrix} 1 & 0 & 0 \\ k & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$
shows what happens to the 3×3 identity matrix if we add $k$ times the 1st row to the 2nd row. If we expand this matrix by minors across any row, we see that the determinant is the product of the diagonal elements and hence 1. For the general $n \times n$ matrix, adding $k$ times the $i$th row to the $j$th row places a $k$ in the $(j, i)$ position of the matrix, with all other entries those of the identity matrix. This matrix is triangular, and its determinant is the product of the elements on the diagonal, or 1.

(c) Multiplying a row, say the first row, by $k$ gives the 3×3 matrix
$\begin{bmatrix} k & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix},$
and expanding by minors across any row gives a determinant of $k$. Higher-order matrices give the same result.

Determinant of a Product
38. (a) If $\mathbf{A}$ is not invertible, then $|\mathbf{A}| = 0$. Moreover, if $\mathbf{A}$ is not invertible, then neither is $\mathbf{AB}$, so $|\mathbf{AB}| = 0$. Hence $|\mathbf{AB}| = |\mathbf{A}||\mathbf{B}|$ because both sides of the equation are zero.

(b) We first show that $|\mathbf{EA}| = |\mathbf{E}||\mathbf{A}|$ for elementary matrices $\mathbf{E}$. An elementary matrix is one that results from changing the identity matrix by one of the three elementary row operations. In the case where $\mathbf{E}$ multiplies a row of the identity matrix $\mathbf{I}$ by a constant $k$, the product $\mathbf{EA}$ is $\mathbf{A}$ with that row multiplied by $k$, so
$|\mathbf{EA}| = k|\mathbf{A}| = |\mathbf{E}||\mathbf{A}|.$
In the cases where $\mathbf{E}$ results from interchanging two rows of the identity or from adding a multiple of one row to another row, the verification follows along the same lines.
Now if $\mathbf{A}$ is invertible, it can be written as a product of elementary matrices,
$\mathbf{A} = \mathbf{E}_p \cdots \mathbf{E}_1.$
If we postmultiply this equation by $\mathbf{B}$, we get $\mathbf{AB} = \mathbf{E}_p \cdots \mathbf{E}_1\mathbf{B}$, so
$|\mathbf{AB}| = |\mathbf{E}_p| \cdots |\mathbf{E}_1||\mathbf{B}| = |\mathbf{E}_p \cdots \mathbf{E}_1||\mathbf{B}| = |\mathbf{A}||\mathbf{B}|.$

Cramer's Rule
39. $x + 2y = 2$
$2x + 5y = 0$
To solve this system, we write it in matrix form as
$\begin{bmatrix} 1 & 2 \\ 2 & 5 \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 2 \\ 0 \end{bmatrix}.$
Using Cramer's rule, we compute the determinants
$|\mathbf{A}| = \begin{vmatrix} 1 & 2 \\ 2 & 5 \end{vmatrix} = 1, \quad |\mathbf{A}_1| = \begin{vmatrix} 2 & 2 \\ 0 & 5 \end{vmatrix} = 10, \quad |\mathbf{A}_2| = \begin{vmatrix} 1 & 2 \\ 2 & 0 \end{vmatrix} = -4.$
Hence, the solution is
$x = \frac{|\mathbf{A}_1|}{|\mathbf{A}|} = \frac{10}{1} = 10, \qquad y = \frac{|\mathbf{A}_2|}{|\mathbf{A}|} = \frac{-4}{1} = -4.$
40. $x + y = \sqrt{2}$
$x + 2y = 1$
To solve this system, we write it in matrix form as
$\begin{bmatrix} 1 & 1 \\ 1 & 2 \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} \sqrt{2} \\ 1 \end{bmatrix}.$
Using Cramer's rule, we compute the determinants
$|\mathbf{A}| = \begin{vmatrix} 1 & 1 \\ 1 & 2 \end{vmatrix} = 1, \quad |\mathbf{A}_1| = \begin{vmatrix} \sqrt{2} & 1 \\ 1 & 2 \end{vmatrix} = 2\sqrt{2} - 1, \quad |\mathbf{A}_2| = \begin{vmatrix} 1 & \sqrt{2} \\ 1 & 1 \end{vmatrix} = 1 - \sqrt{2}.$
Hence, the solution is
$x = \frac{|\mathbf{A}_1|}{|\mathbf{A}|} = 2\sqrt{2} - 1, \qquad y = \frac{|\mathbf{A}_2|}{|\mathbf{A}|} = 1 - \sqrt{2}.$
41. $x + y + 3z = 5$
$2y + 5z = 7$
$x + 2z = 3$
To solve this system, we write it in matrix form as
$\begin{bmatrix} 1 & 1 & 3 \\ 0 & 2 & 5 \\ 1 & 0 & 2 \end{bmatrix}\begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} 5 \\ 7 \\ 3 \end{bmatrix}.$
Using Cramer's rule, we compute the determinants
$|\mathbf{A}| = 3, \quad |\mathbf{A}_1| = \begin{vmatrix} 5 & 1 & 3 \\ 7 & 2 & 5 \\ 3 & 0 & 2 \end{vmatrix} = 3, \quad |\mathbf{A}_2| = \begin{vmatrix} 1 & 5 & 3 \\ 0 & 7 & 5 \\ 1 & 3 & 2 \end{vmatrix} = 3, \quad |\mathbf{A}_3| = \begin{vmatrix} 1 & 1 & 5 \\ 0 & 2 & 7 \\ 1 & 0 & 3 \end{vmatrix} = 3.$
All determinants are 3, so
$x = \frac{3}{3} = 1, \qquad y = \frac{3}{3} = 1, \qquad z = \frac{3}{3} = 1.$
42. $x_1 + 2x_2 - x_3 = 6$
$3x_1 + 8x_2 + 9x_3 = 10$
$2x_1 - x_2 + 2x_3 = -2$
To solve this system, we write it in matrix form as
$\begin{bmatrix} 1 & 2 & -1 \\ 3 & 8 & 9 \\ 2 & -1 & 2 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 6 \\ 10 \\ -2 \end{bmatrix}.$
Using Cramer's rule, we compute the determinants
$|\mathbf{A}| = 68, \quad |\mathbf{A}_1| = \begin{vmatrix} 6 & 2 & -1 \\ 10 & 8 & 9 \\ -2 & -1 & 2 \end{vmatrix} = 68, \quad |\mathbf{A}_2| = \begin{vmatrix} 1 & 6 & -1 \\ 3 & 10 & 9 \\ 2 & -2 & 2 \end{vmatrix} = 136, \quad |\mathbf{A}_3| = \begin{vmatrix} 1 & 2 & 6 \\ 3 & 8 & 10 \\ 2 & -1 & -2 \end{vmatrix} = -68.$
Hence, the solution is
$x_1 = \frac{68}{68} = 1, \qquad x_2 = \frac{136}{68} = 2, \qquad x_3 = \frac{-68}{68} = -1.$

The Wheatstone Bridge
43. (a) Each equation represents the fact that the sum of the currents flowing into and out of the respective junction is zero. For the four nodes:
node A: $I - I_1 - I_2 = 0$, i.e., $I = I_1 + I_2$
node B: $I_1 - I_g - I_x = 0$, i.e., $I_1 = I_g + I_x$
node C: $I_2 + I_g - I_3 = 0$, i.e., $I_3 = I_2 + I_g$
node D: $I_3 + I_x - I = 0$, i.e., $I = I_3 + I_x$

(b) If a current $I$ flows through a resistance $R$, then the voltage drop across the resistance is $RI$. Applying Kirchhoff's voltage law, the sum of the voltage drops around each of the three circuits is set to zero, giving the desired three equations:
voltage drop around the large circuit: $E_0 - R_1I_1 - R_xI_x = 0$
voltage drop around the upper-left circuit: $R_1I_1 + R_gI_g - R_2I_2 = 0$
voltage drop around the upper-right circuit: $R_3I_3 + R_gI_g - R_xI_x = 0$

(c) Using the results from part (a), write the three currents $I_3$, $I_x$, and $I$ in terms of $I_1$, $I_2$, $I_g$:
$I_3 = I_2 + I_g, \qquad I_x = I_1 - I_g, \qquad I = I_1 + I_2.$
We substitute these into the three voltage equations to obtain the 3×3 linear system for the currents $I_1$, $I_2$, $I_g$:
$\begin{bmatrix} R_1 + R_x & 0 & -R_x \\ R_1 & -R_2 & R_g \\ -R_x & R_3 & R_3 + R_x + R_g \end{bmatrix}\begin{bmatrix} I_1 \\ I_2 \\ I_g \end{bmatrix} = \begin{bmatrix} E_0 \\ 0 \\ 0 \end{bmatrix}.$
Solving for $I_g$ (we only need to solve for one of the three unknowns) using Cramer's rule, we find
$I_g = \frac{|\mathbf{A}_g|}{|\mathbf{A}|},$
where, replacing the $I_g$-column by the right-hand side,
$|\mathbf{A}_g| = \begin{vmatrix} R_1 + R_x & 0 & E_0 \\ R_1 & -R_2 & 0 \\ -R_x & R_3 & 0 \end{vmatrix} = E_0\begin{vmatrix} R_1 & -R_2 \\ -R_x & R_3 \end{vmatrix} = E_0\bigl(R_1R_3 - R_2R_x\bigr).$
Hence, $I_g = 0$ if $R_2R_x = R_1R_3$. Note: The proof of this result is much easier if we assume the resistance $R_g$ is negligible, and we take it as zero.

Least Squares Derivation
44. Starting with
$F(m, k) = \sum_{i=1}^{n}\bigl(y_i - (k + mx_i)\bigr)^2,$
we compute the equations $\dfrac{\partial F}{\partial k} = 0$, $\dfrac{\partial F}{\partial m} = 0$, yielding
$\frac{\partial F}{\partial k} = 2\sum_{i=1}^{n}\bigl(y_i - (k + mx_i)\bigr)(-1) = 0, \qquad \frac{\partial F}{\partial m} = 2\sum_{i=1}^{n}\bigl(y_i - (k + mx_i)\bigr)(-x_i) = 0.$
Carrying out a little algebra, we get
$kn + m\sum_{i=1}^{n}x_i = \sum_{i=1}^{n}y_i, \qquad k\sum_{i=1}^{n}x_i + m\sum_{i=1}^{n}x_i^2 = \sum_{i=1}^{n}x_iy_i,$
or in matrix form
$\begin{bmatrix} n & \sum_{i=1}^{n}x_i \\ \sum_{i=1}^{n}x_i & \sum_{i=1}^{n}x_i^2 \end{bmatrix}\begin{bmatrix} k \\ m \end{bmatrix} = \begin{bmatrix} \sum_{i=1}^{n}y_i \\ \sum_{i=1}^{n}x_iy_i \end{bmatrix}.$

Alternative Derivation of Least Squares Equations
45. (a) Equation (9) in the text,
$k + 1.7m = 1.1$
$k + 2.3m = 3.1$
$k + 3.1m = 2.3$
$k + 4.0m = 3.8,$
can be written in matrix form
$\begin{bmatrix} 1 & 1.7 \\ 1 & 2.3 \\ 1 & 3.1 \\ 1 & 4.0 \end{bmatrix}\begin{bmatrix} k \\ m \end{bmatrix} = \begin{bmatrix} 1.1 \\ 3.1 \\ 2.3 \\ 3.8 \end{bmatrix},$
which is of the form $\mathbf{A}\vec{x} = \vec{b}$.

(b) Given the matrix equation $\mathbf{A}\vec{x} = \vec{b}$, where
$\mathbf{A} = \begin{bmatrix} 1 & x_1 \\ 1 & x_2 \\ 1 & x_3 \\ 1 & x_4 \end{bmatrix}, \quad \vec{x} = \begin{bmatrix} k \\ m \end{bmatrix}, \quad \vec{b} = \begin{bmatrix} y_1 \\ y_2 \\ y_3 \\ y_4 \end{bmatrix},$
if we premultiply each side of the equation by $\mathbf{A}^{\mathrm{T}}$, we get $\mathbf{A}^{\mathrm{T}}\mathbf{A}\vec{x} = \mathbf{A}^{\mathrm{T}}\vec{b}$, or
$\begin{bmatrix} 1 & 1 & 1 & 1 \\ x_1 & x_2 & x_3 & x_4 \end{bmatrix}\begin{bmatrix} 1 & x_1 \\ 1 & x_2 \\ 1 & x_3 \\ 1 & x_4 \end{bmatrix}\begin{bmatrix} k \\ m \end{bmatrix} = \begin{bmatrix} 1 & 1 & 1 & 1 \\ x_1 & x_2 & x_3 & x_4 \end{bmatrix}\begin{bmatrix} y_1 \\ y_2 \\ y_3 \\ y_4 \end{bmatrix},$
or
$\begin{bmatrix} 4 & \sum_{i=1}^{4}x_i \\ \sum_{i=1}^{4}x_i & \sum_{i=1}^{4}x_i^2 \end{bmatrix}\begin{bmatrix} k \\ m \end{bmatrix} = \begin{bmatrix} \sum_{i=1}^{4}y_i \\ \sum_{i=1}^{4}x_iy_i \end{bmatrix}.$

Least Squares Calculation
46. Here we are given the data points
$(0, 1), \quad (1, 1), \quad (2, 3), \quad (3, 3),$
so
$\sum_{i=1}^{4}x_i = 6, \qquad \sum_{i=1}^{4}x_i^2 = 14, \qquad \sum_{i=1}^{4}y_i = 8, \qquad \sum_{i=1}^{4}x_iy_i = 16.$
The constants $m$, $k$ in the least squares line $y = mx + k$ satisfy the equations
$\begin{bmatrix} 4 & 6 \\ 6 & 14 \end{bmatrix}\begin{bmatrix} k \\ m \end{bmatrix} = \begin{bmatrix} 8 \\ 16 \end{bmatrix},$
which yields $k = m = 0.80$. The least squares line is $y = 0.8x + 0.8$.
(Figure: the four data points plotted with the least squares line $y = 0.8x + 0.8$.)
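The normal equations of Problems 44–46 can be solved in closed form for $k$ and $m$; a minimal sketch (the function name is ours):

```python
def least_squares_line(points):
    """Fit y = k + m x via the normal equations
       [[n, Sx], [Sx, Sxx]] [k, m]^T = [Sy, Sxy]^T."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    det = n * sxx - sx * sx          # determinant of the 2x2 coefficient matrix
    k = (sy * sxx - sx * sxy) / det  # Cramer's rule for the intercept
    m = (n * sxy - sx * sy) / det    # Cramer's rule for the slope
    return k, m

# Problem 46's data reproduces k = m = 0.8.
k, m = least_squares_line([(0, 1), (1, 1), (2, 3), (3, 3)])
```

The same function applied to the spreadsheet data of Problems 47–48 reproduces those fitted lines as well.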

Computer or Calculator
47. To find the least-squares approximation of the form y = k + mx, we solve, for a set of data points {(x_i, y_i): i = 1, 2, …, n}, the system

[[n, Σ x_i], [Σ x_i, Σ x_i²]] [k; m] = [Σ y_i; Σ x_i y_i].

Using a spreadsheet to compute the elements of the coefficient matrix and the right-hand-side vector, we get

Spreadsheet to compute least squares
x      y     x^2     xy
1.6   1.7    2.56    2.72
3.2   5.3   10.24   16.96
6.9   5.1   47.61   35.19
8.4   6.5   70.56   54.60
9.1   8.0   82.81   72.80

sum x   sum y   sum x^2   sum xy
29.2    26.6    213.78    182.27

We must solve the system

[[5.0, 29.20], [29.2, 213.78]] [k; m] = [26.60; 182.27],

getting k = 1.68, m = 0.62. Hence, we have the least squares line

y = 1.68 + 0.62x,

whose graph is shown next.

(Figure: the five data points and the least squares line y = 0.62x + 1.68.)
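The "spreadsheet" columns of Problem 47 are easy to recompute in plain Python, after which the 2 × 2 system is solved by Cramer's rule; this reproduces both the sums in the table and the fitted coefficients.

```python
# Recompute the spreadsheet columns for the data of Problem 47,
# then solve the 2x2 normal-equation system with Cramer's rule.
data = [(1.6, 1.7), (3.2, 5.3), (6.9, 5.1), (8.4, 6.5), (9.1, 8.0)]

sum_x = sum(x for x, _ in data)
sum_y = sum(y for _, y in data)
sum_x2 = sum(x * x for x, _ in data)
sum_xy = sum(x * y for x, y in data)

n = len(data)
det = n * sum_x2 - sum_x ** 2
k = (sum_y * sum_x2 - sum_x * sum_xy) / det
m = (n * sum_xy - sum_x * sum_y) / det
```

Rounding to two decimals reproduces the table sums (29.2, 26.6, 213.78, 182.27) and the line y = 1.68 + 0.62x.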

48. To find the least-squares approximation of the form y = k + mx, we solve, for a set of data points {(x_i, y_i): i = 1, 2, …, n}, the system

[[n, Σ x_i], [Σ x_i, Σ x_i²]] [k; m] = [Σ y_i; Σ x_i y_i].

Using a spreadsheet to compute the elements of the coefficient matrix and the right-hand-side vector, we get

Spreadsheet to compute least squares
x      y      x^2       xy
0.91   1.35    0.8281    1.2285
1.07   1.96    1.1449    2.0972
2.56   3.13    6.5536    8.0128
4.11   5.72   16.8921   23.5092
5.34   7.08   28.5156   37.8072
6.25   8.14   39.0625   50.8750

sum x   sum y   sum x^2   sum xy
20.24   27.38   92.9968   123.5299

We must solve the system

[[6.00, 20.2400], [20.24, 92.9968]] [k; m] = [27.3800; 123.5299],

getting k = 0.309, m = 1.26. Hence, the least-squares line is y = 0.309 + 1.26x.

(Figure: the six data points and the least squares line y = 1.26x + 0.309.)

Least Squares in Another Dimension
49. We seek the constants α, β1, and β2 that minimize

F(α, β1, β2) = Σ (y_i − (α + β1 T_i + β2 P_i))²   (sum over i = 1 to n).

We write the equations

∂F/∂α  = Σ 2(y_i − (α + β1 T_i + β2 P_i))(−1) = 0
∂F/∂β1 = Σ 2(y_i − (α + β1 T_i + β2 P_i))(−T_i) = 0
∂F/∂β2 = Σ 2(y_i − (α + β1 T_i + β2 P_i))(−P_i) = 0.

Simplifying, we get

[[n, Σ T_i, Σ P_i], [Σ T_i, Σ T_i², Σ P_i T_i], [Σ P_i, Σ T_i P_i, Σ P_i²]] [α; β1; β2] = [Σ y_i; Σ T_i y_i; Σ P_i y_i].

Solving for α, β1, and β2, we get the least-squares plane

y = α + β1 T + β2 P.
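The 3 × 3 normal equations above can be assembled and solved numerically. The sketch below assumes NumPy is available and uses hypothetical data generated to lie exactly on a known plane, so the fit should recover the plane's coefficients.

```python
import numpy as np

# Sketch: least-squares plane y = alpha + beta1*T + beta2*P via the
# normal equations A^T A c = A^T y.  The data are hypothetical,
# generated to lie exactly on y = 1 + 2T - 3P.
T = np.array([0.0, 1.0, 2.0, 0.0, 1.0, 3.0])
P = np.array([0.0, 0.0, 1.0, 2.0, 2.0, 1.0])
y = 1 + 2 * T - 3 * P

A = np.column_stack([np.ones_like(T), T, P])   # design matrix
coeffs = np.linalg.solve(A.T @ A, A.T @ y)     # [alpha, beta1, beta2]
```

Since the data lie exactly on the plane, the residual is zero and the solver returns (1, 2, −3).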

Least Squares System Solution
50. Premultiplying each side of the system Ax = b by Aᵀ gives AᵀAx = Aᵀb, or

[[1, −1, 0], [1, 1, 1]] [[1, 1], [−1, 1], [0, 1]] [x; y] = [[1, −1, 0], [1, 1, 1]] [1; 1; 2]

or simply

[[2, 0], [0, 3]] [x; y] = [0; 4].

Solving this 2 × 2 system gives

x = 0, y = 4/3,

which is the least squares approximation to the original system.

(Figure: the three lines x + y = 1, −x + y = 1, and y = 2, together with the least squares solution (0, 4/3).)
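A quick NumPy check of Problem 50 (assuming NumPy is available): form AᵀA and Aᵀb for the inconsistent system x + y = 1, −x + y = 1, y = 2, and solve.

```python
import numpy as np

# The inconsistent system from Problem 50.
A = np.array([[1.0, 1.0], [-1.0, 1.0], [0.0, 1.0]])
b = np.array([1.0, 1.0, 2.0])

AtA = A.T @ A                    # expected [[2, 0], [0, 3]]
Atb = A.T @ b                    # expected [0, 4]
sol = np.linalg.solve(AtA, Atb)  # least squares solution (0, 4/3)
```

The same answer comes from `np.linalg.lstsq(A, b, rcond=None)`, which avoids forming AᵀA explicitly.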
Suggested Journal Entry
51. Student Project

3.5 Vector Spaces and Subspaces

They Don't All Look Like Vectors
1. A typical vector is [x, y], with negative [−x, −y]; the zero vector is [0, 0].

2. A typical vector is [x, y, z], with negative [−x, −y, −z]; the zero vector is [0, 0, 0].

3. A typical vector is [a, b, c, d], with negative [−a, −b, −c, −d]; the zero vector is [0, 0, 0, 0].

4. A typical vector is [a, b, c], with negative [−a, −b, −c]; the zero vector is [0, 0, 0].

5. A typical vector is [[a, b, c], [d, e, f]], with negative [[−a, −b, −c], [−d, −e, −f]]; the zero vector is the 2 × 3 zero matrix.

6. A typical vector is [[a, b, c], [d, e, f], [g, h, i]], with negative [[−a, −b, −c], [−d, −e, −f], [−g, −h, −i]]; the zero vector is the 3 × 3 zero matrix.

7. A typical vector is a linear function p(t) = at + b; the zero vector is p ≡ 0, and the negative of p(t) is −p(t).

8. A typical vector is a quadratic function p(t) = at² + bt + c; the zero vector is p ≡ 0, and the negative of p(t) is −p(t).
(Figure: segments of typical vectors p1(t), p2(t), p3(t), p4(t) in P2.)

9. A typical vector is a continuous and differentiable function, such as f(t) = sin t or g(t) = t². The zero vector is f0(t) ≡ 0, and the negative of f(t) is −f(t).
(Figure: graphs of f(t) and g(t).)

10. C²[0, 1]: Typical vectors are continuous and twice differentiable functions such as f(t) = sin t, g(t) = t² + 2t, and so on. The zero vector is the zero function f0(t) ≡ 0, and the negative of a typical vector, say h(t) = e^t sin t, is −h(t) = −e^t sin t.
(Figure: graphs of f(t), g(t), and h(t).)

Are They Vector Spaces?
11. Not a vector space; there is no additive inverse.

12. First octant of space: No, the vectors have no negatives. For example, [1, 3, 3] belongs to the set but [−1, −3, −3] does not.

13. Not a vector space; e.g., the negative of [2, 1] does not lie in the set.

14. Not a vector space; e.g., x² + x and 1 − x² each belong, but their sum (x² + x) + (1 − x²) = x + 1 does not.

15. Not a vector space, since it is not closed under vector addition. See the example for Problem 14.

16. Yes, the vector space of all diagonal 2 × 2 matrices.

17. Not a vector space; the set of 2 × 2 matrices with zero determinant is not closed under vector addition, as indicated by

[[1, 0], [1, 0]] + [[0, 1], [0, 3]] = [[1, 1], [1, 3]].

18. Not a vector space; the set of all 2 × 2 invertible matrices is not closed under vector addition. For instance,

[[1, 0], [0, 1]] + [[−1, 0], [0, −1]] = [[0, 0], [0, 0]].

19. Yes, the vector space of all 3 × 3 upper-triangular matrices.

20. Not a vector space; it does not contain the zero function.

21. Not a vector space; not closed under scalar multiplication, and no additive inverse.

22. Yes, the vector space of all differentiable functions on (−∞, ∞).

23. Yes, the vector space of all integrable functions on [0, 1].
A Familiar Vector Space
24. Yes, a vector space. Straightforward verification of the 10 commandments of a vector space; that is, the sum of two vectors (real numbers in this case) is a vector (another real number), and the product of a real number by a scalar (another real number) is a real number. The zero vector is the number 0. Every number has a negative. The distributivity and associativity properties are simply properties of the real numbers, and so on.

Not a Vector Space
25. Not a vector space; not closed under scalar multiplication.

DE Solution Space
26. Properties A3, A4, S1, S2, S3, and S4 are basic properties that hold for all functions; in particular, solutions of a differential equation.

Another Solution Space
27. Yes, the solution space of the linear homogeneous DE

y'' + p(t)y' + q(t)y = 0

is indeed a vector space; the linearity properties are sufficient to prove all the vector space properties.

The Space C(−∞, ∞)
28. This result follows from basic properties of continuous functions; the sum of continuous functions is continuous, scalar multiples of continuous functions are continuous, the zero function is continuous, the negative of a continuous function is continuous, the distributive properties hold for all functions, and so on.

Vector Space Properties
29. Unique Zero: We prove that if a vector z satisfies v + z = v for any vector v, then z = 0. We can write

z = z + 0 = z + (v + (−v)) = (z + v) + (−v) = (v + z) + (−v) = v + (−v) = 0.

30. Unique Negative: We show that if v is an arbitrary vector in some vector space, then there is only one vector n (which we call −v) in that space that satisfies v + n = 0. Suppose another vector n* also satisfies v + n* = 0. Then

n = n + 0 = n + (v + n*) = (n + v) + n* = (v + n) + n* = 0 + n* = n*.

31. Zero as Multiplier: We can write

v + 0v = 1v + 0v = (1 + 0)v = 1v = v.

Hence, by the uniqueness of the zero vector (Problem 29), we can conclude that 0v = 0.

32. Negatives as Multiples: From Problem 30, we know that −v is the only vector that satisfies v + (−v) = 0. Hence, if we write

v + (−1)v = 1v + (−1)v = (1 + (−1))v = 0v = 0,

we conclude that (−1)v = −v.

A Vector Space Equation
33. Let v be an arbitrary vector and c an arbitrary scalar. Set cv = 0. Then either c = 0 or v = 0. For c ≠ 0,

v = 1v = (c⁻¹c)v = c⁻¹(cv) = c⁻¹0 = 0,

which proves the result.

Nonstandard Definitions
34. (x1, y1) + (x2, y2) ≡ (x1 + x2, 0) and c(x, y) ≡ (cx, y).
All vector space properties clearly hold for these operations. The set R² with the indicated vector addition and scalar multiplication is a vector space.

35. (x1, y1) + (x2, y2) ≡ (0, x2) and c(x, y) ≡ (cx, cy).
Not a vector space because, for example, the new vector addition is not commutative:

(2, 3) + (4, 5) = (0, 4), but (4, 5) + (2, 3) = (0, 2).

36. (x1, y1) + (x2, y2) ≡ (x1 + x2, y1 + y2) and c(x, y) ≡ (cx, √c y).
Not a vector space; for example,

(c + d)x ≠ cx + dx.

For c = 4, d = 9 and vector x = (x1, x2), we have

(c + d)x = 13(x1, x2) = (13x1, √13 x2),
cx + dx = 4(x1, x2) + 9(x1, x2) = (4x1, 2x2) + (9x1, 3x2) = (13x1, 5x2),

and √13 x2 ≠ 5x2 in general.

Sifting Subsets for Subspaces
37. W = {(x, y) | y = 0} is a subspace of R².

38. W = {(x, y) | x² + y² = 1} is not a subspace of R² because it does not contain the zero vector (0, 0). It is also not closed under vector addition and scalar multiplication.

39. W = {(x1, x2, x3) | x3 = 0} is a subspace of R³.

40. W = {p(t) | degree of p(t) = 2} is not a subspace of P2 because it does not contain the zero vector p(t) ≡ 0.

41. W = {p(t) | p(0) = 0} is a subspace of P3.

42. W = {f(t) | f(0) = 0} is a subspace of C[0, 1].

43. W = {f(t) | f(0) = f(1) = 0} is a subspace of C[0, 1].

44. W = {f(t) | the integral of f over [a, b] is 0} is a subspace of C[a, b].

45. W = {f(t) | f'' + f = 0} is a subspace of C²[0, 1].

46. W = {f(t) | f'' + f = 1} is not a subspace of C²[0, 1]. It does not contain the zero vector y(t) ≡ 0. It is also not closed under vector addition and scalar multiplication, because the sum of two solutions is not necessarily a solution. For example, y1 = 1 + sin t and y2 = 1 + cos t are both solutions, but the sum

y1 + y2 = 2 + sin t + cos t

is not a solution. Likewise, 2y1 = 2 + 2 sin t is not a solution.

47. Not a subspace, because x = 0 is not in W.

48. W is a subspace:
Nonempty: Note that A0 = 0, so 0 is in W.
Closure: Suppose x, y are in W, so Ax = 0 and Ay = 0. Then

A(ax + by) = A(ax) + A(by) = aAx + bAy = a0 + b0 = 0 + 0 = 0,

so ax + by is in W.



Hyperplanes as Subspaces
49. We select two arbitrary vectors

u = [x1, y1, z1, w1], v = [x2, y2, z2, w2]

from the subset W. Hence, we have

ax1 + by1 + cz1 + dw1 = 0
ax2 + by2 + cz2 + dw2 = 0.

Adding, we get

a(x1 + x2) + b(y1 + y2) + c(z1 + z2) + d(w1 + w2) = (ax1 + by1 + cz1 + dw1) + (ax2 + by2 + cz2 + dw2) = 0,

which says that u + v belongs to W. To show ku belongs to W, we must show that the scalar multiple

ku = [kx1, ky1, kz1, kw1]

satisfies

akx1 + bky1 + ckz1 + dkw1 = 0.

But this follows from

akx1 + bky1 + ckz1 + dkw1 = k(ax1 + by1 + cz1 + dw1) = 0.
Are They Subspaces of Rⁿ?
50. W = {[a, b, a + b, a − b] : a, b in R} is a subspace.
Nonempty: Let a = b = 0. Then (0, 0, 0, 0) is in W.
Closure: Suppose x = [a1, b1, a1 + b1, a1 − b1] and y = [a2, b2, a2 + b2, a2 − b2] are in W. Then

kx + y = [ka1, kb1, k(a1 + b1), k(a1 − b1)] + [a2, b2, a2 + b2, a2 − b2]
       = [ka1 + a2, kb1 + b2, k(a1 + b1) + (a2 + b2), k(a1 − b1) + (a2 − b2)]
       = [ka1 + a2, kb1 + b2, (ka1 + a2) + (kb1 + b2), (ka1 + a2) − (kb1 + b2)]

is in W for any k in R.

51. No. [0, 0, 0, 0, 0] does not belong to {[a, 0, b, 1, c] : a, b, c in R}, because the 4th coordinate is 1 ≠ 0 for all a, b, c.

52. No. For [a, b, a², b²], the last two coordinates are not linear functions of a and b. Consider [1, 3, 1, 9]; note that 2[1, 3, 1, 9] is not in the subset:

2[1, 3, 1, 9] = [2, 6, 2, 18] ≠ [2·1, 2·3, (2·1)², (2·3)²] = [2, 6, 4, 36].

Differentiable Subspaces
53. {f(t) | f' = 0}. It is a subspace.

54. {f(t) | f' = 1}. It is not a subspace, because it does not contain the zero vector and is not closed under vector addition. For example, f(t) = t and g(t) = t + 2 belong to the subset, but (f + g)(t) does not. It is also not closed under scalar multiplication: f(t) = t belongs to the subset, but 2f(t) = 2t does not.

55. {f(t) | f' = f}. It is a subspace.

56. {f(t) | f' = f²}. It is not a subspace; e.g., not closed under scalar multiplication. (f may satisfy the equation f' = f², but 2f will not, since (2f)' = 2f' = 2f² ≠ (2f)² = 4f².)

Property Failures
57. The first quadrant (including the coordinate axes) is closed under vector addition, but not under scalar multiplication.

58. An example of a set in R² that is closed under scalar multiplication but not under vector addition is the union of two different lines passing through the origin.

59. The unit circle is not closed under either vector addition or scalar multiplication.

Solution Spaces of Homogeneous Linear Algebraic Systems
60. x1 − x2 + 4x4 + 2x5 − x6 = 0
2x1 − 2x2 + x3 + 2x4 + 4x5 − x6 = 0

The matrix of coefficients

A = [[1, −1, 0, 4, 2, −1], [2, −2, 1, 2, 4, −1]]

has RREF = [[1, −1, 0, 4, 2, −1], [0, 0, 1, −6, 0, 1]], i.e.,

x1 − x2 + 4x4 + 2x5 − x6 = 0
x3 − 6x4 + x6 = 0.

Let x2 = r, x4 = s, x5 = t, x6 = u. Then x1 = r − 4s − 2t + u and x3 = 6s − u, so

S = {r[1, 1, 0, 0, 0, 0] + s[−4, 0, 6, 1, 0, 0] + t[−2, 0, 0, 0, 1, 0] + u[1, 0, −1, 0, 0, 1] : r, s, t, u in R}.
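Each basis vector of the solution space in Problem 60 can be checked by multiplying it into the coefficient matrix; the sketch below does this in plain Python with exact integer arithmetic.

```python
# Check Problem 60: each basis vector of the solution space S lies in the
# null space of the coefficient matrix A (plain Python, no libraries).
A = [[1, -1, 0, 4, 2, -1],
     [2, -2, 1, 2, 4, -1]]

basis = [[1, 1, 0, 0, 0, 0],
         [-4, 0, 6, 1, 0, 0],
         [-2, 0, 0, 0, 1, 0],
         [1, 0, -1, 0, 0, 1]]

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

residuals = [matvec(A, v) for v in basis]   # should all be [0, 0]
```

Since A has 6 columns and rank 2, the null space is 4-dimensional, matching the four free parameters r, s, t, u.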
61. 2x1 − 2x2 + 4x3 − 2x4 = 0
2x1 + x2 + 7x3 + 4x4 = 0
x1 − 4x2 − x3 + 7x4 = 0
4x1 − 12x2 − 20x4 = 0

The matrix of coefficients

A = [[2, −2, 4, −2], [2, 1, 7, 4], [1, −4, −1, 7], [4, −12, 0, −20]]

has RREF(A) = [[1, 0, 3, 0], [0, 1, 1, 0], [0, 0, 0, 1], [0, 0, 0, 0]], i.e.,

x1 + 3x3 = 0
x2 + x3 = 0
x4 = 0.

Let x3 = r. Then x1 = −3r, x2 = −r, x3 = r, x4 = 0, so

S = {r[−3, −1, 1, 0] : r in R}.

62. 3x1 + 6x3 + 3x4 + 9x5 = 0
x1 + 3x2 − 4x3 − 8x4 + 3x5 = 0
x1 − 6x2 + 14x3 + 19x4 + 3x5 = 0

The matrix of coefficients

A = [[3, 0, 6, 3, 9], [1, 3, −4, −8, 3], [1, −6, 14, 19, 3]]

has RREF = [[1, 0, 2, 1, 3], [0, 1, −2, −3, 0], [0, 0, 0, 0, 0]], i.e.,

x1 + 2x3 + x4 + 3x5 = 0
x2 − 2x3 − 3x4 = 0,

so x1 = −2x3 − x4 − 3x5 and x2 = 2x3 + 3x4.

Let x3 = r, x4 = s, x5 = t. Then x1 = −2r − s − 3t and x2 = 2r + 3s, so

S = {r[−2, 2, 1, 0, 0] + s[−1, 3, 0, 1, 0] + t[−3, 0, 0, 0, 1] : r, s, t in R}.
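The dimension count in Problem 62 (rank 2, hence nullity 5 − 2 = 3) and the three solution vectors can be verified with NumPy, assuming it is available.

```python
import numpy as np

# Check Problem 62: A has rank 2, so the solution space has dimension
# 5 - 2 = 3, and the three vectors found above are indeed solutions.
A = np.array([[3, 0, 6, 3, 9],
              [1, 3, -4, -8, 3],
              [1, -6, 14, 19, 3]], dtype=float)

basis = np.array([[-2, 2, 1, 0, 0],
                  [-1, 3, 0, 1, 0],
                  [-3, 0, 0, 0, 1]], dtype=float)

rank = np.linalg.matrix_rank(A)
residual = A @ basis.T        # should be the 3x3 zero matrix
```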
Nonlinear Differential Equations
63. y' = y². Writing the equation in differential form, we have y⁻² dy = dt. We get the general solution

y = 1/(c − t).

Hence, from c = 0 and 1, we have two solutions

y1(t) = −1/t, y2(t) = 1/(1 − t).

But if we compute

y1(t) + y2(t) = −1/t + 1/(1 − t),

it would not be a solution of the DE. So the solution set of this nonlinear DE is not a vector space.

64. y'' + sin y = 0. Assume that y is a solution of the equation, so y'' + sin y = 0. But cy does not satisfy the equation, because

(cy)'' + sin(cy) = cy'' + sin(cy) ≠ c(y'' + sin y) = 0.

65. y' + 1/y = 0. From the DE we can see that the zero vector is not a solution, so the solution space of this nonlinear DE is not a vector space.
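Problem 63's failure of closure can be checked numerically at a single point, using the closed-form derivatives of the two solutions; the sketch below tests at the hypothetical sample point t = 2.

```python
# Check Problem 63: y1 and y2 each satisfy y' = y^2, but their sum
# does not.  Derivatives come from the closed forms.
t = 2.0   # any t where both functions are defined

y1 = -1.0 / t               # y1(t) = -1/t
dy1 = 1.0 / t**2            # y1'(t) = 1/t^2 = y1^2
y2 = 1.0 / (1.0 - t)        # y2(t) = 1/(1 - t)
dy2 = 1.0 / (1.0 - t)**2    # y2'(t) = 1/(1 - t)^2 = y2^2

s = y1 + y2
ds = dy1 + dy2              # derivative of the sum
# ds should differ from s**2, showing the sum is not a solution.
```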

DE Solution Spaces
66. y' + y = 2e^t. Not a vector space; it doesn't contain the zero vector.

67. y' + y² = 0. The solutions are y = 1/(t − c), and the sum of two solutions is not a solution, so the set of all solutions of this nonlinear DE does not form a vector space.

68. y' + ty = 0. If y1, y2 satisfy the equation, then

y1' + ty1 = 0
y2' + ty2 = 0.

For any constants c1, c2, the properties of the derivative then give

(c1y1 + c2y2)' + t(c1y1 + c2y2) = c1(y1' + ty1) + c2(y2' + ty2) = 0.

This shows the set of solutions is a vector space.

69. y' + (1 + sin t)y = 0. If y1, y2 satisfy the equation, then

y1' + (1 + sin t)y1 = 0
y2' + (1 + sin t)y2 = 0,

and for any constants c1, c2, the properties of the derivative give

(c1y1 + c2y2)' + (1 + sin t)(c1y1 + c2y2) = c1(y1' + (1 + sin t)y1) + c2(y2' + (1 + sin t)y2) = 0,

which shows the set of solutions is a vector space. This is true for the solution set of any linear homogeneous DE.

Line of Solutions
70. (a) x = p + t h = [0, 1] + t[2, 3] = [2t, 1 + 3t].
Hence, calling x1, x2 the coordinates of the vector x = [x1, x2], we have

x1 = 2t, x2 = 1 + 3t.

(b) x = [2, 1, 3] + t[2, 3, 0]

(c) Solutions of y' + y = 0 are closed under vector addition because the sum of two solutions is a solution, and closed under scalar multiplication because scalar multiples of solutions are also solutions. The zero vector (zero function) is a solution, and the negative of a solution is a solution. Computing the solution of the equation gives y(t) = ce^(−t), which is a scalar multiple of e^(−t). We will later see that this collection of solutions is a one-dimensional vector space.

(d) The solutions of y' + y = t are given by y(t) = (t − 1) + ce^(−t). From the abstract point of view this is a line through the vector t − 1 (remember, functions are vectors here) in the direction of the vector e^(−t).

(e) The solution of any linear equation Ly = f can be interpreted as a line passing through any particular solution y_p in the direction of any homogeneous solution y_h; that is, y = y_p + c y_h.

Orthogonal Complements
71. To prove: the orthogonal complement V⊥ = {u in Rⁿ | u · v = 0 for every v in V} is a subspace of Rⁿ.
Nonempty: 0 · v = 0 for every v in V, so 0 is in V⊥.
Closure: Let a, b be real, let u, w be in V⊥, and let v be in V. Then

(au + bw) · v = (au) · v + (bw) · v = a(u · v) + b(w · v) = a0 + b0 = 0,

so au + bw is in V⊥.

72. To prove: V ∩ V⊥ = {0}.
The zero vector lies in V ∩ V⊥, since V is a subspace and 0 · v = 0 for every v in V, so 0 is in V⊥; hence {0} is contained in V ∩ V⊥.
Now suppose w is in V ∩ V⊥, where w = [w1, w2, …, wn]. Then w · v = 0 for all v in V. However, w is in V, so w · w = 0:

w1² + w2² + ⋯ + wn² = 0,

hence w1 = w2 = ⋯ = wn = 0; that is, w = 0.

Suggested Journal Entry
73. Student Project

3.6 Basis and Dimension


The Spin on Spans
1. V = R². Let

[x, y] = c1[0, 0] + c2[1, 1] = [c2, c2].

The given vectors do not span R², although they span the one-dimensional subspace

{k[1, 1] : k in R}.

2. V = R³. Letting

[a, b, c] = c1[1, 0, 0] + c2[0, 1, 0] + c3[2, 3, 1]

yields the system of equations

c1 + 2c3 = a
c2 + 3c3 = b
c3 = c

or

c3 = c
c2 = b − 3c3 = b − 3c
c1 = a − 2c3 = a − 2c.

Hence, W spans R³.
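The coordinates found in Problem 2 can be checked by solving the linear system whose columns are the spanning vectors; the target vector below is an arbitrary hypothetical choice.

```python
import numpy as np

# Solve for the coordinates (c1, c2, c3) of an arbitrary [a, b, c] in the
# spanning set of Problem 2, then compare with the closed-form answer.
W = np.array([[1, 0, 2],
              [0, 1, 3],
              [0, 0, 1]], dtype=float)   # columns are the three vectors

a, b, c = 5.0, -2.0, 7.0                 # an arbitrary target vector
coords = np.linalg.solve(W, np.array([a, b, c]))
expected = np.array([a - 2 * c, b - 3 * c, c])
```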

3. V = R³. Letting

[a, b, c] = c1[1, 0, 1] + c2[2, 0, 4] + c3[5, 0, 2] + c4[0, 0, 1]

yields

a = c1 + 2c2 + 5c3
b = 0
c = c1 + 4c2 + 2c3 + c4.

These vectors do not span R³ because they cannot give any vector with b ≠ 0.

4. V = P2. Let

at² + bt + c = c1(1) + c2(t + 1) + c3(t² − 2t + 3).

Setting the coefficients of t², t, and 1 equal to each other gives

t²: c3 = a
t:  c2 − 2c3 = b
1:  c1 + c2 + 3c3 = c,

which has the solution c3 = a, c2 = b + 2a, c1 = c − b − 5a. Any vector in V can be written as a linear combination of vectors in W. Hence, the vectors in W span V.
5. V = P2. Let

at² + bt + c = c1(t + 1) + c2(t² + 1) + c3(t² − t).

Setting the coefficients of t², t, and 1 equal to each other gives

t²: c2 + c3 = a
t:  c1 − c3 = b
1:  c1 + c2 = c.

If we add the first and second equations, we get

c1 + c2 = a + b
c1 + c2 = c.

This means we have a solution only if c = a + b. In other words, the given vectors do not span P2; they span only a two-dimensional subspace of P2.

6. V = M22. Letting

[[a, b], [c, d]] = c1[[1, 1], [0, 0]] + c2[[0, 0], [1, 1]] + c3[[1, 0], [1, 0]] + c4[[0, 1], [0, 1]]

we have the equations

c1 + c3 = a
c1 + c4 = b
c2 + c3 = c
c2 + c4 = d.

If we add the first and last equations, and then the second and third equations, we obtain

c1 + c2 + c3 + c4 = a + d
c1 + c2 + c3 + c4 = b + c.

Hence, we have a solution if and only if a + d = b + c. This means we can solve for c1, c2, c3, c4 for only a subset of the vectors in V. Hence, W does not span M22.
Independence Day
7. V = R². Setting

c1[1, −1] + c2[−1, 1] = [0, 0]

we get

c1 − c2 = 0
−c1 + c2 = 0,

which does not imply c1 = c2 = 0. For instance, if we choose c1 = 1, then c2 = 1 also. Hence, the vectors in W are linearly dependent.
8. V = R². Setting

c1[1, 1] + c2[1, −1] = [0, 0]

we get

c1 + c2 = 0
c1 − c2 = 0,

which implies c1 = c2 = 0. Hence, the vectors in W are linearly independent.

9. V = R³. Setting

c1[1, 0, 0] + c2[1, 1, 0] + c3[1, 1, 1] = [0, 0, 0]

we get

c1 + c2 + c3 = 0
c2 + c3 = 0
c3 = 0,

which implies c1 = c2 = c3 = 0. Hence, the vectors in W are linearly independent.

10. V = R³. Setting

c1[2, 1, 4] + c2[−4, −2, −8] = [0, 0, 0]

we get

2c1 − 4c2 = 0
c1 − 2c2 = 0
4c1 − 8c2 = 0,

which (the equations are all the same) has a nonzero solution c1 = 2, c2 = 1. Hence, the vectors in W are linearly dependent.

11. V = R³. Setting

c1[1, 1, 8] + c2[3, 4, 2] + c3[7, 1, 3] = [0, 0, 0]

we get

c1 + 3c2 + 7c3 = 0
c1 + 4c2 + c3 = 0
8c1 + 2c2 + 3c3 = 0,

which has only the solution c1 = c2 = c3 = 0. Hence, the vectors in W are linearly independent.

12. V = P1. Setting

c1 + c2 t = 0,

we get c1 = 0, c2 = 0. Hence, the vectors in W are linearly independent.

13. V = P1. Setting

c1(t + 1) + c2(t − 1) = 0

we get

c1 + c2 = 0
c1 − c2 = 0,

which has the unique solution c1 = c2 = 0. Hence, the vectors in W are linearly independent.

14. V = P2. Setting

c1 t + c2 t(t − 1) = 0,

we get

c1 − c2 = 0 (coefficient of t)
c2 = 0 (coefficient of t²),

which implies c1 = c2 = 0. Hence, the vectors in W are linearly independent.

15. V = P2. Setting

c1(t + 1) + c2(t − 1) + c3 t² = 0

we get

c1 + c2 = 0
c1 − c2 = 0
c3 = 0,

which implies c1 = c2 = c3 = 0. Hence, the vectors in W are linearly independent.

16. V = P2. Setting

c1(t + 3) + c2(t² − 1) + c3(2t² − t − 5) = 0

we get

3c1 − c2 − 5c3 = 0 (constant terms)
c1 − c3 = 0 (coefficient of t)
c2 + 2c3 = 0 (coefficient of t²),

which has a nonzero solution c1 = 1, c2 = −2, c3 = 1. Hence, the vectors in W are linearly dependent.

17. V = D22. Setting

[[a, 0], [0, b]] = c1[[1, 0], [0, 0]] + c2[[0, 0], [0, 1]]

we get c1 = a, c2 = b. Hence, these vectors are linearly independent and span D22.

18. V = D22. Setting

[[a, 0], [0, b]] = c1[[1, 0], [0, 1]] + c2[[1, 0], [0, −1]]

we get c1 + c2 = a, c1 − c2 = b. We can solve these equations for c1, c2, and hence these vectors are linearly independent and span D22.

Function Space Dependence
19. S = {e^t, e^(−t)}. We set

c1 e^t + c2 e^(−t) = 0.

Because we assume this holds for all t, it holds in particular for t = 0, 1, so

c1 + c2 = 0
e c1 + e⁻¹ c2 = 0,

which has only the zero solution c1 = c2 = 0. Hence, the functions are linearly independent.

20. S = {e^t, t e^t, t² e^t}. We assume

c1 e^t + c2 t e^t + c3 t² e^t = 0

for all t. We let t = 0, 1, 2, so

c1 = 0
e c1 + e c2 + e c3 = 0
e² c1 + 2e² c2 + 4e² c3 = 0,

which has only the zero solution c1 = c2 = c3 = 0. Hence, these vectors are linearly independent.

21. S = {sin t, sin 2t, sin 3t}. We let

c1 sin t + c2 sin 2t + c3 sin 3t = 0

for all t. In particular, if we choose three values of t, say t = π/6, π/4, π/2, we obtain three equations to solve for c1, c2, c3, namely,

(1/2)c1 + (√3/2)c2 + c3 = 0
(√2/2)c1 + c2 + (√2/2)c3 = 0
c1 − c3 = 0.

We used Maple to compute the determinant of this coefficient matrix and found it to be √6/2 − 3/2 ≠ 0. Hence, the system has a unique solution c1 = c2 = c3 = 0. Thus, sin t, sin 2t, and sin 3t are linearly independent.
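The determinant in Problem 21 can be evaluated by direct cofactor expansion in plain Python, confirming the value √6/2 − 3/2 ≈ −0.275, which is nonzero.

```python
import math

# Evaluate the 3x3 determinant from Problem 21 by cofactor expansion.
M = [[0.5,              math.sqrt(3) / 2, 1.0],
     [math.sqrt(2) / 2, 1.0,              math.sqrt(2) / 2],
     [1.0,              0.0,              -1.0]]

def det3(m):
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

d = det3(M)   # should equal sqrt(6)/2 - 3/2, a nonzero number
```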

22. S = {1, sin²t, cos²t}. Because

1 − sin²t − cos²t = 0,

the vectors are linearly dependent.

23. S = {1, t − 1, (t − 1)²}. Setting

c1 + c2(t − 1) + c3(t − 1)² = 0

we get, for the coefficients of 1, t, and t², the system of equations

c1 − c2 + c3 = 0
c2 − 2c3 = 0
c3 = 0,

which has only the zero solution c1 = c2 = c3 = 0. Hence, these vectors are linearly independent.

24. S = {e^t, e^(−t), cosh t}. Because

cosh t = (1/2)(e^t + e^(−t)),

we have that 2 cosh t − e^t − e^(−t) = 0 is a nontrivial linear combination that is identically zero for all t. Hence, the vectors are linearly dependent.

25. S = {sin²t, 4, cos 2t}. Recall the trigonometric identity

sin²t = (1/2)(1 − cos 2t),

which can be rewritten as

2 sin²t − (1/4)(4) + cos 2t = 0.

Hence, we have found a nontrivial linear combination of the three vectors that is identically zero, and the three vectors are linearly dependent.

Independence Testing
26. We will show that the only values for which

c1 u(t) + c2 v(t) = [0, 0]

for all t (with u, v the given exponential vector functions) are c1 = c2 = 0 and, hence, conclude that the vectors are linearly independent. If it is true for all t, then it must be true for t = 0 (which is the easiest place to test), which yields the two linear equations

2c1 + c2 = 0
c1 + c2 = 0,

whose only solution is c1 = c2 = 0. Hence, the vectors are linearly independent. (This test works only for linear independence.)

Another approach is to say the vectors are linearly independent because clearly there is no constant k such that one vector is k times the other vector for all t.

27. We will show that

c1[sin t, cos t] + c2[cos t, −sin t] = [0, 0]

for all t implies c1 = c2 = 0, and hence, the vectors are linearly independent. If it is true for all t, then it must be true for t = 0, which gives the two equations c2 = 0, c1 = 0. This proves the vectors are linearly independent.

Another approach is to say that the vectors are linearly independent because clearly there is no constant k such that one vector is k times the other vector for all t.

28. We write

c1 v1(t) + c2 v2(t) + c3 v3(t) = [0, 0, 0]

for the three given exponential vector functions, for all t, and see if there are nonzero solutions for c1, c2, and c3. The determinant of the coefficient matrix whose columns are v1(t), v2(t), v3(t) reduces to a nonzero constant multiple of an exponential, hence is nonzero for all t. We see by Cramer's Rule that there is only the unique solution c1 = c2 = c3 = 0. Therefore the vectors are linearly independent.

29. We write

c1 v1(t) + c2 v2(t) + c3 v3(t) = [0, 0, 0]

for the three given exponential vector functions, for all t, and see if there are nonzero solutions for c1, c2, c3. Because the above equation is assumed true for all t, it must be true for t = 0 (the easy case), or

c1 + c2 + 2c3 = 0
−4c1 + c3 = 0
c1 − c2 + 2c3 = 0.

Writing this in matrix form gives

[[1, 1, 2], [−4, 0, 1], [1, −1, 2]] [c1; c2; c3] = [0; 0; 0].

The determinant of the coefficient matrix is 18, so the only solution of this linear system is c1 = c2 = c3 = 0, and thus the vectors are linearly independent.

Twins?
30. We have

span{cos t + sin t, cos t − sin t} = {c1(cos t + sin t) + c2(cos t − sin t)}
 = {(c1 + c2)cos t + (c1 − c2)sin t}
 = {C1 cos t + C2 sin t}
 = span{sin t, cos t}.

A Questionable Basis
31. The set {[1, 1, 0], [0, 1, 1], [2, 1, −1]} is not a basis, since (with the vectors as columns)

det [[1, 0, 2], [1, 1, 1], [0, 1, −1]] = 1(−1 − 1) − 0 + 2(1 − 0) = −2 + 2 = 0.

One of the many possible answers to the second part is

{[1, 1, 0], [0, 1, 1], [1, 0, 0]}.
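Both determinant computations in Problem 31 are easy to confirm with NumPy (assuming it is available), placing the candidate vectors as columns.

```python
import numpy as np

# Problem 31: the first set fails the determinant test; the repaired set
# (third vector replaced by [1, 0, 0]) passes it.
bad = np.array([[1, 0, 2],
                [1, 1, 1],
                [0, 1, -1]], dtype=float)
good = np.array([[1, 0, 1],
                 [1, 1, 0],
                 [0, 1, 0]], dtype=float)

det_bad = np.linalg.det(bad)     # 0  -> not a basis
det_good = np.linalg.det(good)   # nonzero -> a basis
```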

Wronskian
32. We assume that the Wronskian function

W[f, g](t) = f(t)g'(t) − f'(t)g(t) ≠ 0

for every t in [0, 1]. To show f and g are linearly independent on [0, 1], we assume that c1 f(t) + c2 g(t) = 0 for all t in the interval [0, 1]. Differentiating, we have c1 f'(t) + c2 g'(t) = 0 on [0, 1]. Hence, we have the two equations

[[f(t), g(t)], [f'(t), g'(t)]] [c1; c2] = [0; 0].

The determinant of the coefficient matrix is the Wronskian of f and g, which is assumed to be nonzero on [0, 1]. Hence c1 = c2 = 0, and the vectors are linearly independent.
0 c c = = , the vectors are linearly independent.

Zero Wronskian Does Not Imply Linear Dependence
33. (a) f(t) = t²,  g(t) = { t² for t ≥ 0; −t² for t < 0 }.
Then f'(t) = 2t and g'(t) = { 2t for t ≥ 0; −2t for t < 0 }.

For t ≥ 0: W = det [[t², t²], [2t, 2t]] = 0.
For t < 0: W = det [[t², −t²], [2t, −2t]] = −2t³ + 2t³ = 0.

Hence W ≡ 0 on (−∞, ∞).

(b) f and g are linearly independent because f(t) is not identically equal to k g(t) on (−∞, ∞) for any k in R.
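Problem 33's counterexample can be verified numerically: the Wronskian vanishes at every sample point, yet the two functions are not proportional (they agree at t = 1 but have opposite signs at t = −1).

```python
# Problem 33: W[f, g](t) = f*g' - f'*g vanishes identically even though
# f(t) = t^2 and g(t) = t*|t| are not proportional on (-inf, inf).
def f(t):  return t * t
def fp(t): return 2 * t
def g(t):  return t * t if t >= 0 else -t * t
def gp(t): return 2 * t if t >= 0 else -2 * t

def wronskian(t):
    return f(t) * gp(t) - fp(t) * g(t)

samples = [-3.0, -1.5, -0.5, 0.0, 0.5, 1.5, 3.0]
w_values = [wronskian(t) for t in samples]
# f/g is +1 for t > 0 and -1 for t < 0, so f != k*g for any single k.
```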

Linearly Independent Exponentials
34. We compute the Wronskian of f(t) = e^(at) and g(t) = e^(bt):

W[f, g](t) = det [[e^(at), e^(bt)], [a e^(at), b e^(bt)]] = b e^((a+b)t) − a e^((a+b)t) = (b − a) e^((a+b)t) ≠ 0

for any t, provided that b ≠ a. Hence, f and g are linearly independent if b ≠ a and linearly dependent if b = a.

Looking Ahead
35. The Wronskian is

W = det [[e^t, t e^t], [e^t, (1 + t) e^t]] = e^(2t)(1 + t) − t e^(2t) = e^(2t) ≠ 0.

Hence, the vectors are linearly independent.

Revisiting Linear Independence
36. The Wronskian is the 3 × 3 determinant whose rows are the three given exponential functions and their first and second derivatives. Factoring the exponentials out of each column and expanding the resulting numerical determinant gives a nonzero constant times an exponential function, so W(t) ≠ 0 for all t. Hence, the vectors are linearly independent.

Independence Checking
37. W = det [[5, cos t, sin t], [0, −sin t, cos t], [0, −cos t, −sin t]] = 5(sin²t + cos²t) = 5 ≠ 0.
The set {5, cos t, sin t} is linearly independent on (−∞, ∞).

38. W = det [[e^t, e^(−t), 1], [e^t, −e^(−t), 0], [e^t, e^(−t), 0]] = 1 · det [[e^t, −e^(−t)], [e^t, e^(−t)]] = 1(1 + 1) = 2 ≠ 0.
The set {e^t, e^(−t), 1} is linearly independent on (−∞, ∞).

39. W =
2
2 2
2 2
1 1 2 2
1 1
0 0 2
t t t
t t
t
+
+
=

=2(2 t 2 +t)
=8 0
{2 +t, 2 t, t
2
} is linearly independent on

40. W = det [[3t² − 4, 2t, t² − 1], [6t, 2, 2t], [6, 0, 2]]
 = 6 · det [[2t, t² − 1], [2, 2t]] + 2 · det [[3t² − 4, 2t], [6t, 2]]
 = 6(4t² − 2(t² − 1)) + 2(6t² − 8 − 12t²) = (12t² + 12) + (−12t² − 16) = −4 ≠ 0.
{3t² − 4, 2t, t² − 1} is linearly independent on (−∞, ∞).
41. W = det [[cosh t, sinh t], [sinh t, cosh t]] = cosh²t − sinh²t
 = ((e^t + e^(−t))/2)² − ((e^t − e^(−t))/2)²
 = (e^(2t) + 2 + e^(−2t))/4 − (e^(2t) − 2 + e^(−2t))/4 = 1 ≠ 0.
{cosh t, sinh t} is linearly independent on (−∞, ∞).

42. W = det [[e^t cos t, e^t sin t], [e^t(cos t − sin t), e^t(sin t + cos t)]]
 = e^(2t) cos t (sin t + cos t) − e^(2t) sin t (cos t − sin t)
 = e^(2t)(cos²t + sin²t) = e^(2t) ≠ 0 for all t.
{e^t cos t, e^t sin t} is linearly independent on (−∞, ∞).
Getting on Base in R²
43. Not a basis, because {[1, 1]} does not span R².

44. A basis, because {[1, 2], [2, 1]} are linearly independent and span R².

45. {[1, 1], [−1, −1]} is not a basis because [−1, −1] = −[1, 1]; hence they are linearly dependent.

46. {[1, 0], [1, 1]} is a basis because the vectors are linearly independent and span R².

47. {[1, 0], [0, 1], [1, 1]} is not a basis because the vectors are linearly dependent.

48. {[0, 0], [1, 1], [2, 2], [1, −1]} is not a basis because the vectors are linearly dependent.

The Base for the Space
49. V = R³: S is not a basis, because two vectors are not enough to span R³.

50. V = R³: Yes, S is a basis, because the vectors are linearly independent and span R³.

51. V = R³: S is not a basis, because four vectors in R³ are linearly dependent.

52. V = P2: Clearly the two vectors t² + 3t + 1 and 2t² + 4t are linearly independent, because they are not constant multiples of one another. They do not span the space, because dim P2 = 3.

53. V = P3: dim P3 = 4; i.e., {t³, t², t, 1} is a basis for P3.


54. V = P4: We assume that

c1 p1(t) + c2 p2(t) + c3 p3(t) + c4 p4(t) + c5 p5(t) = 0

for the five given polynomials and compare coefficients. We find a homogeneous system of equations that has only the zero solution c1 = c2 = c3 = c4 = c5 = 0. Hence, the vectors are linearly independent. To show the vectors span P4, we set the above linear combination equal to an arbitrary vector at⁴ + bt³ + ct² + dt + e, and compare coefficients to arrive at a system of equations, which can be solved for c1, c2, c3, c4, and c5 in terms of a, b, c, d, e. Hence, the vectors span P4, so they are a basis for P4.

55. V = M₂₂: Setting

c₁[1 0; 0 0] + c₂[0 1; 0 0] + c₃[0 0; 1 0] + c₄[1 1; 1 1] = [0 0; 0 0]

yields the equations

c₁ + c₄ = 0
c₂ + c₄ = 0
c₃ + c₄ = 0
c₄ = 0

which have only the zero solution c₁ = c₂ = c₃ = c₄ = 0. Hence, the vectors are linearly independent. If we replace the zero matrix on the right of the preceding equation by an arbitrary matrix [a b; c d], we get the four equations

c₁ + c₄ = a
c₂ + c₄ = b
c₃ + c₄ = c
c₄ = d

This yields the solution

c₄ = d, c₃ = c − d, c₂ = b − d, c₁ = a − d.

Hence, the four given vectors span M₂₂. Because they are linearly independent and span M₂₂, they are a basis.
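These calculations can be checked numerically. The sketch below (not part of the original solution) flattens the four matrices of Problem 55 into the columns of a 4 × 4 matrix; full rank confirms independence, and solving against an arbitrary [a b; c d] reproduces the coefficient formulas.

```python
import numpy as np

# Columns are the four given matrices of Problem 55, flattened entrywise.
M = np.array([
    [1, 0, 0, 1],   # entry (1,1) of each matrix
    [0, 1, 0, 1],   # entry (1,2)
    [0, 0, 1, 1],   # entry (2,1)
    [0, 0, 0, 1],   # entry (2,2)
])

# Rank 4 means the matrices are linearly independent; since dim M22 = 4,
# they automatically span as well, so they form a basis.
print(np.linalg.matrix_rank(M))  # 4

# Coordinates of an arbitrary matrix [[a, b], [c, d]], here [[5, 7], [2, 3]]:
a, b, c, d = 5, 7, 2, 3
coeffs = np.linalg.solve(M, np.array([a, b, c, d], float))
print(coeffs)  # c1 = a-d = 2, c2 = b-d = 4, c3 = c-d = -1, c4 = d = 3
```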

56. V = M₂₃: If we set a linear combination of these vectors equal to an arbitrary vector,

c₁[1 0 1; 0 0 0] + c₂[1 1 0; 0 0 0] + c₃[0 0 0; 1 0 1] + c₄[0 0 0; 1 1 0] + c₅[0 0 0; 1 1 1] = [a b c; d e f],

we arrive at the algebraic equations

c₁ + c₂ = a
c₂ = b
c₁ = c
c₃ + c₄ + c₅ = d
c₄ + c₅ = e
c₃ + c₅ = f.

Looking at the first three equations gives c₁ = a − b and c₁ = c. If we pick an arbitrary matrix such that a − b ≠ c, we have no solution. Hence, the vectors do not span M₂₃ and do not form a basis. (They are linearly independent, however.)
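A rank computation makes the same point in one line. The check below (not part of the original solution) takes the five matrices of Problem 56, each flattened to a vector in R⁶; rank 5 equals the number of vectors (independence) but falls short of dim M₂₃ = 6 (no spanning).

```python
import numpy as np

# The five 2x3 matrices of Problem 56, each flattened to a vector in R^6.
vecs = np.array([
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 0, 0, 0],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
    [0, 0, 0, 1, 1, 1],
])

# Rank 5 = number of vectors -> linearly independent,
# but rank 5 < dim M23 = 6 -> they do not span, hence no basis.
print(np.linalg.matrix_rank(vecs))  # 5
```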

Sizing Them Up
57. W = {[x₁, x₂, x₃] : x₁ + x₂ + x₃ = 0}
Letting x₂ = α, x₃ = β, we can write x₁ = −α − β. Any vector in W can be written as

[x₁, x₂, x₃] = α[−1, 1, 0] + β[−1, 0, 1]

where α and β are arbitrary real numbers. Hence, the dimension of W is 2; a basis is {[−1, 1, 0], [−1, 0, 1]}.

58. W = {[x₁, x₂, x₃, x₄] : x₁ + x₃ = 0, x₂ = x₄}
Letting x₃ = α, x₄ = β, we have

x₁ = −α
x₂ = β
x₃ = α
x₄ = β.

Any vector in W can be written as

[x₁, x₂, x₃, x₄] = α[−1, 0, 1, 0] + β[0, 1, 0, 1]

where α and β are arbitrary real numbers. Hence, the two vectors [−1, 0, 1, 0] and [0, 1, 0, 1] form a basis of W, which is only two-dimensional.
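The dimension counts here follow the rank–nullity theorem: dim W = n − rank(A), where A is the constraint matrix. A short check for Problem 58, assuming the constraints read x₁ + x₃ = 0 and x₂ = x₄ (some signs are hard to make out in the original):

```python
import numpy as np

# Constraint matrix for W = {x in R^4 : x1 + x3 = 0, x2 - x4 = 0}
A = np.array([[1, 0, 1, 0],
              [0, 1, 0, -1]])

# dim W = n - rank(A) by the rank-nullity theorem
print(4 - np.linalg.matrix_rank(A))  # 2

# The claimed basis vectors satisfy both constraints and are independent:
B = np.array([[-1, 0, 1, 0],
              [0, 1, 0, 1]])
print(A @ B.T)                   # zero matrix: both vectors lie in W
print(np.linalg.matrix_rank(B))  # 2
```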


Polynomial Dimensions
59. {t, 1 + t}. We write

at + b = c₁t + c₂(1 + t)

yielding the equations

t: c₁ + c₂ = a
1: c₂ = b.

We can represent any vector at + b as some linear combination of t and 1 + t. Hence, {t, 1 + t} spans a two-dimensional vector space.

60. {t, t + 1, t² + 1}. We write

at² + bt + c = c₁t + c₂(t + 1) + c₃(t² + 1)

yielding the equations

t²: c₃ = a
t: c₁ + c₂ = b
1: c₂ + c₃ = c.

Because we can solve this system for c₁, c₂, c₃ in terms of a, b, c, getting

c₁ = a + b − c
c₂ = c − a
c₃ = a,

the set spans the entire three-dimensional vector space P₂.

61. {t² + t, t + 1, t² − 1}. We can see that

t² + t = (t² − 1) + (t + 1),

so the dimension of the subspace is 2, and it is spanned by any two of the vectors in the set.
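For sets of polynomials like these, writing each polynomial as a coefficient vector turns the span question into a matrix-rank question. A sketch for Problem 61, under my reading of the set as {t² + t, t + 1, t² − 1} (the signs are hard to make out in the original):

```python
import numpy as np

# Each row holds a polynomial's (t^2, t, 1) coefficients.
P = np.array([[1, 1, 0],    # t^2 + t
              [0, 1, 1],    # t + 1
              [1, 0, -1]])  # t^2 - 1

# Rank 2 confirms the span is only two-dimensional; indeed p1 = p2 + p3.
print(np.linalg.matrix_rank(P))  # 2
print(np.array_equal(P[0], P[1] + P[2]))  # True
```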

Solution Basis
62. Letting z = α, we solve for x and y, obtaining x = 4α, y = 5α. An arbitrary solution of the system can be expressed as

[x, y, z] = α[4, 5, 1].

Hence, the vector [4, 5, 1] is a basis for the solutions.

Solution Spaces for Linear Algebraic Systems
63. The matrix of coefficients for the system in Problem 61, Section 3.5, has RREF

[1 0 3 0; 0 1 1 0; 0 0 0 1; 0 0 0 0],

so

x₁ + 3x₃ = 0
x₂ + x₃ = 0
x₄ = 0.

Let r = x₃; then

W = {r[−3, −1, 1, 0] : r ∈ R},

so a basis is {[−3, −1, 1, 0]}. Dim W = 1.

64. The matrix of coefficients for the system, by Problem 62, Section 3.5, has RREF

[1 0 2 1 3; 0 1 −2 −3 0; 0 0 0 0 0],

so

x₁ + 2x₃ + x₄ + 3x₅ = 0
x₂ − 2x₃ − 3x₄ = 0

or

x₁ = −2x₃ − x₄ − 3x₅
x₂ = 2x₃ + 3x₄.

Therefore a basis for W is

{[−2, 2, 1, 0, 0], [−1, 3, 0, 1, 0], [−3, 0, 0, 0, 1]}.

Dim W = 3.
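The free-variable construction used here is easy to verify numerically. The sketch below assumes the RREF rows read [1 0 2 1 3] and [0 1 −2 −3 0] (some signs are ambiguous in the original); setting each free variable to 1 in turn reproduces the basis of W.

```python
import numpy as np

# x1 and x2 solved in terms of the free variables x3, x4, x5:
def solution(x3, x4, x5):
    return np.array([-2*x3 - x4 - 3*x5, 2*x3 + 3*x4, x3, x4, x5])

basis = [solution(1, 0, 0), solution(0, 1, 0), solution(0, 0, 1)]

# Each basis vector must satisfy the RREF equations R x = 0:
R = np.array([[1, 0, 2, 1, 3],
              [0, 1, -2, -3, 0]])
print(np.array([R @ v for v in basis]))          # all zeros
print(np.linalg.matrix_rank(np.array(basis)))    # 3 -> dim W = 3
```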

DE Solution Spaces
65. dⁿy/dtⁿ = 0
a) By successive integration we obtain

y = cₙ₋₁tⁿ⁻¹ + cₙ₋₂tⁿ⁻² + ⋯ + c₁t + c₀ for cₙ₋₁, …, c₀ ∈ R,

which is a general description of all elements in Pₙ₋₁ = the solution space ⊂ Cⁿ(R).
b) A basis for Pₙ₋₁ is {1, t, …, tⁿ⁻¹}. Dim Pₙ₋₁ = n.

66. y′ − 2y = 0
This is a first-order linear DE with solution (by either method of Section 2.2) y = Ce^{2t}.
a) The solution space is S = {Ce^{2t} : C ∈ R} ⊂ Cⁿ(R).
b) A basis is B = {e^{2t}}; dim S = 1.

67. y′ − 2ty = 0
By the methods of Section 2.2, y = Ce^{t²}.
a) S = {Ce^{t²} : C ∈ R} ⊂ Cⁿ(R).
b) B = {e^{t²}}; dim S = 1.

68. y′ + (tan t)y = 0
a) By the methods of Section 2.2, y = C cos t, t ∈ (−π/2, π/2), so

S = {C cos t : C ∈ R, t ∈ (−π/2, π/2)} ⊂ C¹(−π/2, π/2).

b) A basis is B = {cos t} (on (−π/2, π/2)); dim S = 1.

69. y′ + y² = 0
y² is not a linear function, so y′ + y² = 0 is not a linear differential equation. By separation of variables, y′ = −y²:

dy/y² = −dt
−1/y = −(t + c)
1/y = t + c
y = 1/(t + c).

But these solutions do not form a vector space. Let k ∈ R, k ≠ 0, 1; then

k/(t + c)

is not a solution of the ODE. Hence

{1/(t + c) : c ∈ R}

is not a vector space.
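The closure failure can be checked numerically without any symbolic work, using a finite-difference derivative to evaluate y′ + y² for a scaled solution (a quick check, not part of the original solution):

```python
# y = 1/(t + c) solves y' + y^2 = 0, but k*y does not for k other than
# 0 or 1, so the solution set is not closed under scalar multiplication.
def residual(k, t, c=1.0, h=1e-6):
    """Central-difference value of y' + y^2 for y = k/(t + c)."""
    y = lambda s: k / (s + c)
    return (y(t + h) - y(t - h)) / (2 * h) + y(t) ** 2

print(abs(residual(1.0, 2.0)) < 1e-6)   # True: y itself is a solution
print(abs(residual(3.0, 2.0)) < 1e-6)   # False: 3*y is not a solution
```

Algebraically the residual for y = k/(t + c) is (k² − k)/(t + c)², which vanishes identically only when k = 0 or k = 1.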

70. y′ + (cos t)y = 0
By the method of Section 2.2, y = Ce^{−sin t}.
a) S = {Ce^{−sin t} : C ∈ R}.
b) B = {e^{−sin t}} is a basis for S; dim S = 1.

Basis for Subspaces of Rⁿ
71. W = {(a, 0, b, a − b + c) : a, b, c ∈ R}
= {a(1, 0, 0, 1) + b(0, 0, 1, −1) + c(0, 0, 0, 1) : a, b, c ∈ R},
so {(1, 0, 0, 1), (0, 0, 1, −1), (0, 0, 0, 1)} is a basis for W. Dim W = 3.

72. W = {(a, a − b, 2a + 3b) : a, b ∈ R}
= {a(1, 1, 2) + b(0, −1, 3) : a, b ∈ R},
so {(1, 1, 2), (0, −1, 3)} is a basis for W. Dim W = 2.

73. W = {(x + y + z, x + y, 4z, 0) : x, y, z ∈ R}
= {x(1, 1, 0, 0) + y(1, 1, 0, 0) + z(1, 0, 4, 0) : x, y, z ∈ R}
(note that x + y can be treated as a single element of R), so
{(1, 1, 0, 0), (1, 0, 4, 0)} is a basis for W. Dim W = 2.

Two-by-Two Basis
74. Setting

c₁[1 0; 0 0] + c₂[0 1; 1 0] + c₃[0 0; 1 1] = [0 0; 0 0]

gives c₁ = 0, c₂ = 0, and c₃ = 0. Hence, the given vectors are linearly independent. If we add the vector

[0 0; 1 0],

then the new vectors are still linearly independent (similar proof), and an arbitrary 2 × 2 matrix can be written as

[a b; c d] = c₁[1 0; 0 0] + c₂[0 1; 1 0] + c₃[0 0; 1 1] + c₄[0 0; 1 0]

because it reduces to

c₁ = a
c₂ = b
c₃ = d
c₂ + c₃ + c₄ = c.

This yields

c₁ = a
c₂ = b
c₃ = d
c₄ = c − b − d

in terms of a, b, c, and d. Hence the four vectors (matrices) form a basis for M₂₂, which is four-dimensional.

Basis for Zero Trace Matrices
75. Letting

c₁[1 0; 0 −1] + c₂[0 1; 0 0] + c₃[0 0; 1 0] = [a b; c d]

we find a = c₁, b = c₂, c = c₃, d = −c₁. Setting a = b = c = d = 0 implies c₁ = c₂ = c₃ = 0, which shows the vectors (matrices) are linearly independent. It also shows they span the set of 2 × 2 matrices with trace zero, because if a + d = 0 we can solve for c₁ = a = −d, c₂ = b, c₃ = c. In other words, we can write any zero-trace 2 × 2 matrix as a linear combination of the three given vectors (matrices):

[a b; c −a] = a[1 0; 0 −1] + b[0 1; 0 0] + c[0 0; 1 0].

Hence, the vectors (matrices) form a basis for the 2 × 2 zero-trace matrices.
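A small numeric check of this decomposition (not part of the original solution), using an arbitrary zero-trace example:

```python
import numpy as np

# Any zero-trace matrix [[a, b], [c, -a]] equals a*E1 + b*E2 + c*E3.
E1 = np.array([[1, 0], [0, -1]])
E2 = np.array([[0, 1], [0, 0]])
E3 = np.array([[0, 0], [1, 0]])

a, b, c = 4, -2, 7                            # arbitrary zero-trace example
M = np.array([[a, b], [c, -a]])
print(np.array_equal(M, a*E1 + b*E2 + c*E3))  # True
print(np.trace(M))                            # 0
```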

Hyperplane Basis
76. Solving the equation

x + 3y − 2z + 6w = 0

for x we get

x = −3y + 2z − 6w.

Letting y = α, z = β, and w = γ, we can write x = −3α + 2β − 6γ. Hence, an arbitrary vector (x, y, z, w) in the hyperplane can be written

(x, y, z, w) = α(−3, 1, 0, 0) + β(2, 0, 1, 0) + γ(−6, 0, 0, 1).

The set of four-dimensional vectors

{(−3, 1, 0, 0), (2, 0, 1, 0), (−6, 0, 0, 1)}

is a basis for the hyperplane.
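Assuming the hyperplane equation reads x + 3y − 2z + 6w = 0 (several signs are hard to make out in the original), the claimed basis can be verified in a few lines: each vector must satisfy the equation, and the three must be independent.

```python
import numpy as np

normal = np.array([1, 3, -2, 6])      # coefficients of the hyperplane equation
basis = np.array([[-3, 1, 0, 0],
                  [2, 0, 1, 0],
                  [-6, 0, 0, 1]])

print(basis @ normal)                 # [0 0 0]: all lie in the hyperplane
print(np.linalg.matrix_rank(basis))   # 3 = dim of a hyperplane in R^4
```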
77. Symmetric Matrices
W = {[a b; b c] : a, b, c ∈ R} is the subspace of all symmetric 2 × 2 matrices. A basis for W is

{[1 0; 0 0], [0 1; 1 0], [0 0; 0 1]}.

Dim W = 3.
Making New Basis From Old
78. B₁ = {i, j, k} (Many correct answers.)
A typical answer is B₂ = {i + j + k, i + j, i + k}.
To show linear independence, set

c₁(i + j + k) + c₂(i + j) + c₃(i + k) = 0,

which gives

c₁ + c₂ + c₃ = 0
c₁ + c₂ = 0
c₁ + c₃ = 0.

The coefficient determinant

|1 1 1; 1 1 0; 1 0 1| = −1 ≠ 0,

so the only solution is c₁ = c₂ = c₃ = 0. B₂ is a basis since dim R³ = 3.
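The determinant test generalizes to any candidate basis of R³. A quick check (not part of the original solution), taking B₂ = {i + j + k, i + j, i + k} written as columns:

```python
import numpy as np

# Columns: i+j+k, i+j, i+k expressed in the standard basis {i, j, k}.
B2 = np.array([[1, 1, 1],
               [1, 1, 0],
               [1, 0, 1]])

# A nonzero determinant means the three vectors are linearly independent,
# and three independent vectors in R^3 form a basis.
print(np.linalg.det(B2))   # close to -1
```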
79. B₁ = {[1 0; 0 0], [0 0; 0 1]} is a basis for D.
A typical answer: B₂ = {[1 0; 0 1], [1 0; 0 −1]}.
Both elements are diagonal and B₂ is linearly independent; dim D = 2.

80. B₁ = {sin t, cos t} is a basis for the solution space S, so dim S = 2.
A typical answer: B₂ = {sin t + cos t, sin t − cos t}.
Both elements are in S and B₂ is linearly independent.

Basis for P₂
81. We first show the vectors span P₂ by selecting an arbitrary vector from P₂ and showing it can be written as a linear combination of the three given vectors. We set

at² + bt + c = c₁(t² + t + 1) + c₂(t + 1) + c₃(1)

and try to solve for c₁, c₂, c₃ in terms of a, b, c. Setting the coefficients of t², t, and 1 equal to each other yields

t²: c₁ = a
t: c₁ + c₂ = b
1: c₁ + c₂ + c₃ = c,

giving the solution c₁ = a, c₂ = −a + b, c₃ = −b + c. Hence, the set spans P₂. We also know that the vectors

{t² + t + 1, t + 1, 1}

are independent, because setting

c₁(t² + t + 1) + c₂(t + 1) + c₃(1) = 0

we get

c₁ = 0
c₁ + c₂ = 0
c₁ + c₂ + c₃ = 0,

which has only the solution c₁ = c₂ = c₃ = 0. Hence, the vectors are a basis for P₂; for example,

3t² + 2t + 1 = 3(t² + t + 1) − (t + 1) − 1·1.
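Finding coordinates in the basis {t² + t + 1, t + 1, 1} is a triangular linear solve. A check of the worked example (not part of the original solution):

```python
import numpy as np

# Columns hold each basis polynomial's (t^2, t, 1) coefficients.
B = np.array([[1, 0, 0],    # t^2 coefficients of t^2+t+1, t+1, 1
              [1, 1, 0],    # t   coefficients
              [1, 1, 1]])   # 1   coefficients

a, b, c = 3, 2, 1           # the worked example 3t^2 + 2t + 1
coeffs = np.linalg.solve(B, np.array([a, b, c], float))
print(coeffs)               # c1 = 3, c2 = -1, c3 = -1
```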

82. True/False Questions
a) True.
b) False; dim W = 2.
c) False; the given set is made up of vectors in R², not R⁴. The basis for W is made up of vectors in R⁴.

83. Essay Question
Points to be covered in the essay:
1. Elements of W are linear combinations of v₁ = [1, 1, 0, 0] and v₂ = [0, 0, 1, 1], which span W, a subspace of the vector space R⁴.
The set {v₁, v₂} is linearly independent and, in consequence, it is a basis for W.

Convergent Sequence Space
84. V is a vector space, since the addition and scalar multiplication operations follow the rules for R, and the operations

{aₙ} + {bₙ} = {aₙ + bₙ} and c{aₙ} = {caₙ}

are the precise requirements for closure under vector addition and scalar multiplication.
The zero element is {0}, where aₙ = 0 for all n.
The additive inverse of {aₙ} is {−aₙ}.
Let W = {{2aₙ} : {aₙ} ∈ V}. Clearly {0} ∈ W, and

{2aₙ} + {2bₙ} = {2aₙ + 2bₙ} = 2{aₙ + bₙ}.

Also k{2aₙ} = {2kaₙ} for every k ∈ R, so W is a subspace.
dim W = ∞. A basis is {{1, 0, 0, 0, …}, {0, 1, 0, 0, …}, and so forth}.

Cosets in R³
85. W = {[x₁, x₂, x₃] : x₁ + x₂ + x₃ = 0}, v = [0, 0, 1]
We want to write W in parametric form, so we solve the equation

x₁ + x₂ + x₃ = 0

by letting x₂ = α, x₃ = β and solving for x₁ = −α − β. These solutions can be written as

{α[−1, 1, 0] + β[−1, 0, 1] : α, β ∈ R},

so the coset of [0, 0, 1] in W is the collection of vectors

{[0, 0, 1] + α[−1, 1, 0] + β[−1, 0, 1] : α, β ∈ R}.

Geometrically, this describes a plane passing through (0, 0, 1) and parallel to the plane x₁ + x₂ + x₃ = 0.
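A coset v + W is an affine plane, which a random sample can illustrate (a quick check, not part of the original solution): every point of [0, 0, 1] + span{[−1, 1, 0], [−1, 0, 1]} satisfies x₁ + x₂ + x₃ = 1, the shifted copy of the plane x₁ + x₂ + x₃ = 0.

```python
import numpy as np

rng = np.random.default_rng(0)
v = np.array([0, 0, 1])
w1, w2 = np.array([-1, 1, 0]), np.array([-1, 0, 1])

for _ in range(5):
    alpha, beta = rng.random(2)
    p = v + alpha*w1 + beta*w2
    print(round(p.sum(), 12))   # 1.0 every time
```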

86. W = {[x₁, x₂, x₃] : x₃ = 0}, v = [1, 1, 1]
Here the coset through the point (1, 1, 1) is given by the points

{[1, 1, 1] + α[1, 0, 0] + β[0, 1, 0]},

where α and β are arbitrary real numbers. This describes the plane through (1, 1, 1) parallel to the x₁x₂-plane (i.e., the subspace W).

More Cosets
87. The coset through the point (1, 2, 1) is given by the points

{(1, 2, 1) + t(1, 3, 2)},

where t is an arbitrary number. This describes a line through (1, 2, 1) parallel to the line t(1, 3, 2).

Line in Function Space
88. The general solution of y′ + 2y = e^{−2t} is

y(t) = ce^{−2t} + te^{−2t}.

We could say the solution is a line in the vector space of solutions, passing through te^{−2t} in the direction of e^{−2t}.

Mutual Orthogonality
Proof by Contradiction
89. Let {v₁, …, vₙ} be a set of mutually orthogonal nonzero vectors, and suppose they are not linearly independent. Then for some j, vⱼ can be written as a linear combination of the others:

vⱼ = c₁v₁ + ⋯ + cₙvₙ (excluding cⱼvⱼ).

Dotting both sides with vⱼ and using orthogonality (vᵢ · vⱼ = 0 for i ≠ j) gives

vⱼ · vⱼ = c₁(v₁ · vⱼ) + ⋯ + cₙ(vₙ · vⱼ) = 0.

But vⱼ · vⱼ = ‖vⱼ‖² cannot be zero because vⱼ ≠ 0, a contradiction. Hence the vectors are linearly independent.
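The dot-product argument can be illustrated numerically (an illustration, not part of the original proof): for mutually orthogonal rows, the Gram matrix VVᵀ is diagonal with nonzero diagonal entries, which forces full rank and hence independence.

```python
import numpy as np

# Rows are mutually orthogonal nonzero vectors in R^3.
V = np.array([[1, 1, 0],
              [1, -1, 0],
              [0, 0, 2]], dtype=float)

G = V @ V.T                      # Gram matrix: diagonal entries ||v_j||^2
print(G)                         # diag(2, 2, 4), zeros off the diagonal
print(np.linalg.matrix_rank(V))  # 3 -> the vectors are independent
```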

Suggested Journal Entry I
90. Student Project

Suggested Journal Entry II
91. Student Project

Suggested Journal Entry III
92. Student Project
