
Linear Algebra

Section Objectives

• At the end of the lesson, the student must be able to:

• Find a basis for Rn that consists of eigenvectors of an n × n matrix A;

and

• Find (if possible) a nonsingular matrix P for a matrix A such that P−1AP

is diagonal.

The Matrix Diagonalization Problem

This is an example of a similarity transformation.

A → P−1AP

This is important because it preserves many properties of the

matrix A. For example, if we let B = P−1AP, then A and B have the

same determinant since

det(B) = det(P−1AP) = det(P−1) det(A) det(P) = (1/det(P)) det(A) det(P) = det(A)


In general, any property that is preserved by a similarity

transformation is called a similarity invariant and is said to be

invariant under similarity. Table 1 lists the most important

similarity invariants.


Table 1 Similarity Invariants

Property | Description
Determinant | A and P−1AP have the same determinant.
Invertibility | A is invertible if and only if P−1AP is invertible.
Rank | A and P−1AP have the same rank.
Nullity | A and P−1AP have the same nullity.
Trace | A and P−1AP have the same trace.
Characteristic polynomial | A and P−1AP have the same characteristic polynomial.
Eigenvalues | A and P−1AP have the same eigenvalues.
Eigenspace dimension | If λ is an eigenvalue of A (and hence of P−1AP), then the eigenspace of A corresponding to λ and the eigenspace of P−1AP corresponding to λ have the same dimension.
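Several of these invariants can be checked numerically. The sketch below (Python with NumPy; the matrices A and P are illustrative choices, not prescribed by the text) verifies a few rows of Table 1 for one similar pair:

```python
import numpy as np

# Check a few similarity invariants for B = P^-1 A P.
A = np.array([[0., 0., -2.], [1., 2., 1.], [1., 0., 3.]])
P = np.array([[-1., 0., -2.], [0., 1., 1.], [1., 0., 1.]])  # invertible
B = np.linalg.inv(P) @ A @ P

assert np.isclose(np.linalg.det(A), np.linalg.det(B))        # same determinant
assert np.isclose(np.trace(A), np.trace(B))                  # same trace
assert np.linalg.matrix_rank(A) == np.linalg.matrix_rank(B)  # same rank
# same eigenvalues (compare the sorted spectra)
assert np.allclose(np.sort(np.linalg.eigvals(A)), np.sort(np.linalg.eigvals(B)))
```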

DEFINITION 1

If A and B are square matrices, then we say that B is similar to A if

there is an invertible matrix P such that B = P−1AP.

DEFINITION 2

A square matrix A is said to be diagonalizable if it is similar to

some diagonal matrix; that is, if there exists an invertible matrix P

such that P−1AP is diagonal. In this case the matrix P is said to

diagonalize A.

THEOREM 5.2.1

If A is an n × n matrix, the following statements are equivalent.

(a) A is diagonalizable.

(b) A has n linearly independent eigenvectors.

THEOREM 5.2.2

(a) If λ1, λ2, …, λk are distinct eigenvalues of a matrix A, and if v1,

v2, …, vk are corresponding eigenvectors, then {v1, v2, …, vk} is a

linearly independent set.

(b) An n × n matrix with n distinct eigenvalues is diagonalizable.

A Procedure for Diagonalizing an n × n Matrix

Step 1. Determine first whether the matrix is actually diagonalizable

by searching for n linearly independent eigenvectors. One way to do

this is to find a basis for each eigenspace and count the total number

of vectors obtained. If there is a total of n vectors, then the matrix is

diagonalizable, and if the total is less than n, then it is not.

Step 2. If you ascertained that the matrix is diagonalizable, then form

the matrix P = [p1 p2 … pn] whose column vectors are the n basis

vectors you obtained in Step 1.

Step 3. P−1AP will be a diagonal matrix whose successive diagonal

entries are the eigenvalues λ1, λ2, …, λn that correspond to the

successive columns of P.
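As a numerical sketch of this procedure, np.linalg.eig can supply the candidate eigenvectors of Step 1; the rank test below stands in for counting linearly independent eigenvectors (a hand computation would find eigenspace bases instead):

```python
import numpy as np

A = np.array([[0., 0., -2.], [1., 2., 1.], [1., 0., 3.]])
eigenvalues, P = np.linalg.eig(A)   # columns of P are eigenvectors (Step 2)

# Step 1: A is diagonalizable iff the n eigenvectors are linearly independent,
# i.e., the eigenvector matrix has full rank.
assert np.linalg.matrix_rank(P) == A.shape[0]

# Step 3: P^-1 A P is diagonal, with the eigenvalues on the diagonal.
D = np.linalg.inv(P) @ A @ P
assert np.allclose(D, np.diag(eigenvalues))
```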

Examples

1/305 Finding a Matrix P That Diagonalizes a Matrix A

Find a matrix P that diagonalizes

A = [0 0 −2; 1 2 1; 1 0 3]

Solution In Example 7 of the preceding section we found the

characteristic equation of A to be

(λ − 1)(λ − 2)^2 = 0


and we found the following bases for the eigenspaces:

λ = 2: p1 = (−1, 0, 1), p2 = (0, 1, 0); λ = 1: p3 = (−2, 1, 1)

There are three basis vectors in total, so the matrix

P = [−1 0 −2; 0 1 1; 1 0 1]

diagonalizes A.


As a check, you should verify that

P−1AP = [1 0 2; 1 1 1; −1 0 −1] [0 0 −2; 1 2 1; 1 0 3] [−1 0 −2; 0 1 1; 1 0 1] = [2 0 0; 0 2 0; 0 0 1]


In general, there is no preferred order for the columns of P. Since the ith diagonal entry of P−1AP is the eigenvalue for the ith column vector of P, changing the order of the columns of P just changes the order of the eigenvalues on the diagonal of P−1AP. Thus, had we written

P = [−1 −2 0; 0 1 1; 1 1 0]


in the preceding example, we would have obtained

P−1AP = [2 0 0; 0 1 0; 0 0 2]

Examples

2/306 A Matrix That Is Not Diagonalizable

Show that the following matrix is not diagonalizable:

A = [1 0 0; 1 2 0; −3 5 2]

det(λI − A) = det [λ−1 0 0; −1 λ−2 0; 3 −5 λ−2] = (λ − 1)(λ − 2)^2


so the characteristic equation is

(λ − 1)(λ − 2)^2 = 0

and the distinct eigenvalues of A are λ = 1 and λ = 2. We leave it for

you to show that the bases for the eigenspaces are

λ = 1: p1 = (1/8, −1/8, 1); λ = 2: p2 = (0, 0, 1)

Since A is a 3 × 3 matrix and there are only two basis vectors in total, A is not diagonalizable.


Alternative Solution Finding the dimensions of the eigenspaces

For this example, the eigenspace corresponding to λ = 1 is the

solution space of the system

[0 0 0; −1 −1 0; 3 −5 −1] [x1; x2; x3] = [0; 0; 0]

Since the coefficient matrix has rank 2 (verify), the nullity of this matrix is 1 by Theorem 4.8.2, and hence the eigenspace corresponding to λ = 1 is one-dimensional.


The eigenspace corresponding to λ = 2 is the solution space of the

system

[1 0 0; −1 0 0; 3 −5 0] [x1; x2; x3] = [0; 0; 0]

This coefficient matrix also has rank 2 and nullity 1 (verify), so the

eigenspace corresponding to λ = 2 is also one-dimensional. Since

the eigenspaces produce a total of two basis vectors, and since

three are needed, the matrix A is not diagonalizable.
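The rank/nullity computation in this alternative solution can be sketched numerically (matrix_rank plays the role of the rank count done by hand):

```python
import numpy as np

# Eigenspace dimensions of the matrix from Example 2 via rank/nullity.
A = np.array([[1., 0., 0.], [1., 2., 0.], [-3., 5., 2.]])
n = A.shape[0]

# geometric multiplicity of λ = nullity of (λI - A) = n - rank(λI - A)
def geometric_multiplicity(lmbda):
    return n - np.linalg.matrix_rank(lmbda * np.eye(n) - A)

assert geometric_multiplicity(1.0) == 1   # eigenspace for λ = 1 is 1-dimensional
assert geometric_multiplicity(2.0) == 1   # eigenspace for λ = 2 is 1-dimensional
# 1 + 1 < 3 basis vectors in total, so A is not diagonalizable.
```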

Examples

3/307 Recognizing Diagonalizability

We saw in Example 3 of the preceding section that

A = [0 1 0; 0 0 1; 4 −17 8]

has three distinct eigenvalues: λ = 4, λ = 2 + √3, and λ = 2 − √3.


Therefore, A is diagonalizable and

P−1AP = [4 0 0; 0 2+√3 0; 0 0 2−√3]

using the method shown in Example 1 of this section.

Examples

4/307 Diagonalizability of Triangular Matrices

From Theorem 5.1.2, the eigenvalues of a triangular matrix are

the entries on its main diagonal. Thus, a triangular matrix with

distinct entries on the main diagonal is diagonalizable. For

example,

A = [−1 2 4 0; 0 3 1 7; 0 0 5 8; 0 0 0 −2]

is a diagonalizable matrix with eigenvalues λ1 = −1, λ2 = 3, λ3 = 5,

λ4 = −2.

THEOREM 5.2.3

If k is a positive integer, λ is an eigenvalue of a matrix A, and x is a

corresponding eigenvector, then λk is an eigenvalue of Ak and x is

a corresponding eigenvector.

Examples

5/307 Eigenvalues and Eigenvectors of Matrix Powers

In Example 2 we found the eigenvalues and corresponding

eigenvectors of the matrix

A = [1 0 0; 1 2 0; −3 5 2]

Do the same for A^7.

Solution We know from Example 2 that the eigenvalues of A are λ = 1 and λ = 2, so the eigenvalues of A^7 are λ = 1^7 = 1 and λ = 2^7 = 128. The eigenvectors p1 and p2 obtained in Example 2 corresponding to the eigenvalues λ = 1 and λ = 2 of A are also the eigenvectors corresponding to the eigenvalues λ = 1 and λ = 128 of A^7.
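Theorem 5.2.3 and this example can be spot-checked numerically; the eigenvector x = (0, 0, 1) below is the basis vector p2 found for λ = 2 in Example 2:

```python
import numpy as np

A = np.array([[1., 0., 0.], [1., 2., 0.], [-3., 5., 2.]])
lmbda = 2.0
x = np.array([0., 0., 1.])   # eigenvector of A for λ = 2 (p2 from Example 2)

assert np.allclose(A @ x, lmbda * x)        # A x = λ x
A7 = np.linalg.matrix_power(A, 7)
assert np.allclose(A7 @ x, lmbda**7 * x)    # λ^7 = 128 is an eigenvalue of A^7
```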

Computing Powers of a Matrix

A^k = P D^k P−1 = P [λ1^k 0 ⋯ 0; 0 λ2^k ⋯ 0; ⋮ ⋮ ⋱ ⋮; 0 0 ⋯ λn^k] P−1 (3)

This shows that raising a diagonalizable matrix to a positive integer power has the effect of raising its eigenvalues to that power.

Examples

6/308 Powers of a Matrix

Use (3) to find A^13, where

A = [0 0 −2; 1 2 1; 1 0 3]

We showed in Example 1 that this matrix is diagonalized by

P = [−1 0 −2; 0 1 1; 1 0 1]


and that

D = P−1AP = [2 0 0; 0 2 0; 0 0 1]


Thus, it follows from (3) that

A^13 = PD^13P−1 = [−1 0 −2; 0 1 1; 1 0 1] [2^13 0 0; 0 2^13 0; 0 0 1^13] [1 0 2; 1 1 1; −1 0 −1] (4)

= [−8190 0 −16382; 8191 8192 8191; 8191 0 16383]


Remark With the method in the preceding example, most of the

work is in diagonalizing A. Once that work is done, it can be used

to compute any power of A. Thus, to compute A^1000 we need only change the exponents from 13 to 1000 in (4).
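Formula (3) and the result (4) can be verified numerically. The sketch below uses integer arrays so that the entries of A^13 come out exact:

```python
import numpy as np

# Verify A^13 = P D^13 P^-1 for the matrices of Example 6 (exact integer arithmetic).
A = np.array([[0, 0, -2], [1, 2, 1], [1, 0, 3]], dtype=np.int64)
P = np.array([[-1, 0, -2], [0, 1, 1], [1, 0, 1]], dtype=np.int64)
P_inv = np.array([[1, 0, 2], [1, 1, 1], [-1, 0, -1]], dtype=np.int64)
D = np.diag([2, 2, 1]).astype(np.int64)

A13 = P @ np.linalg.matrix_power(D, 13) @ P_inv
assert np.array_equal(A13, np.linalg.matrix_power(A, 13))  # agrees with direct powering
assert np.array_equal(A13, np.array([[-8190, 0, -16382],
                                     [8191, 8192, 8191],
                                     [8191, 0, 16383]]))   # the matrix in (4)
```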

Examples

7/309 The Converse of Theorem 5.2.2(b) Is False

Consider the matrices

I = [1 0 0; 0 1 0; 0 0 1] and J = [1 1 0; 0 1 1; 0 0 1]

It follows from Theorem 5.1.2 that both of these matrices have

only one distinct eigenvalue, namely λ = 1, and hence only one

eigenspace. We leave it as an exercise for you to solve the

characteristic equations

(λI − I)x = 0 and (λI − J)x = 0


with λ = 1 and show that for I the eigenspace is three-dimensional

(all of R3) and for J it is one-dimensional, consisting of all scalar

multiples of

x = (1, 0, 0)

This shows that the converse of Theorem 5.2.2(b) is false, since

we have produced two 3 × 3 matrices with fewer than three

distinct eigenvalues, one of which is diagonalizable and the other

of which is not.

Geometric and Algebraic Multiplicity

The dimension of the eigenspace corresponding to λ0 is called the geometric multiplicity of λ0, and the number of times that λ − λ0 appears as a factor in the characteristic polynomial of A is called the algebraic multiplicity of λ0.

THEOREM 5.2.4

Geometric and Algebraic Multiplicity

If A is a square matrix, then:

(a) For every eigenvalue of A, the geometric multiplicity is less

than or equal to the algebraic multiplicity.

(b) A is diagonalizable if and only if the geometric multiplicity of

every eigenvalue is equal to the algebraic multiplicity.

Examples

1/311 Show that A and B are not similar matrices.

A = [1 1; 3 2], B = [1 0; 3 −2]

Find a matrix P that diagonalizes A, and check your work by computing P−1AP (answer is not unique).

A = [1 0; 6 −1]

Examples

7/311 Find a matrix P that diagonalizes A, and check your work by

computing P−1AP (answer is not unique).

A = [2 0 −2; 0 3 0; 0 0 3]

Examples

9/311 Let

A = [4 0 1; 2 3 2; 1 0 4]

(a) Find the eigenvalues of A.

(b) For each eigenvalue λ, find the rank of the matrix λI − A.

(c) Is A diagonalizable? Justify your conclusion.

Examples

11/311 Find the geometric and algebraic multiplicity of each

eigenvalue of the matrix A, and determine whether A is

diagonalizable. If A is diagonalizable, then find a matrix P that

diagonalizes A, and find P−1AP.

A = [−1 4 −2; −3 4 0; −3 1 3]

Examples

13/311 Find the geometric and algebraic multiplicity of each

eigenvalue of the matrix A, and determine whether A is

diagonalizable. If A is diagonalizable, then find a matrix P that

diagonalizes A, and find P−1AP.

A = [0 0 0; 0 0 0; 3 0 1]

Examples

15/311 The characteristic equation of a matrix A is given. Find the

size of the matrix and the possible dimensions of its eigenspaces.

(a) (λ − 1)(λ + 3)(λ − 5) = 0

(b) λ^2(λ − 1)(λ − 2)^3 = 0

A = [0 3; 2 −1]

Examples

19/311 Let

A = [−1 7 −1; 0 1 0; 0 15 −2] and P = [1 1 1; 0 0 1; 1 0 5]

Examples

21/311 Find A^n if n is a positive integer and A = [3 −1 0; −1 2 −1; 0 −1 3].

Show that

A = [1 1 1; 1 1 1; 1 1 1] and B = [3 0 0; 0 0 0; 0 0 0]

are similar.

Examples

23/312 We know from Table 1 that similar matrices have the

same rank. Show that the converse is false by showing that the matrices

A = [1 0; 0 0] and B = [0 1; 0 0]

have the same rank but are not similar. [Suggestion: If they were

similar, then there would be an invertible 2 × 2 matrix P for which

AP = PB. Show that there is no such matrix.]

If A is similar to B and B is similar to C, do you think that A must be similar to C? Justify your answer.

Examples

27/312 Suppose that the characteristic polynomial of some matrix A is found to be p(λ) = (λ − 1)(λ − 3)^2(λ − 4)^3. In each part, answer the question and explain your reasoning.

(a) What can you say about the dimensions of the eigenspaces of

A?

(b) What can you say about the dimensions of the eigenspaces if

you know that A is diagonalizable?

(c) If {v1, v2, v3} is a linearly independent set of eigenvectors of A,

all of which correspond to the same eigenvalue of A, what can

you say about that eigenvalue?

General Linear

Transformations

Linear Algebra

Section Objectives

• At the end of the lesson, the student must be able to:

• Determine whether a function from one vector space to another is a

linear transformation;

• Find linear transformations from images of basis vectors; and

• Find the kernel, the range, and the bases for the kernel and range of a

linear transformation T, and determine the nullity and rank of T.

DEFINITION 1

If T: V → W is a mapping from a vector space V to a vector space

W, then T is called a linear transformation from V to W if the

following two properties hold for all vectors u and v in V and for

all scalars k:

(i) T(ku) = kT(u) [Homogeneity property]

(ii) T(u + v) = T(u) + T(v) [Additivity property]

In the special case where V = W, the linear transformation T is

called a linear operator on the vector space V.

DEFINITION 1

The homogeneity and additivity properties of a linear

transformation T: V → W can be used in combination to show

that if v1 and v2 are vectors in V and k1 and k2 are any scalars,

then

T(k1v1 + k2v2) = k1T(v1) + k2T(v2)

More generally, if v1, v2, …, vr are vectors in V and k1, k2, …, kr are

any scalars, then

T(k1v1 + k2v2 + … + krvr) = k1T(v1) + k2T(v2) + … + krT(vr) (1)

THEOREM 8.1.1

If T: V → W is a linear transformation, then:

(a) T(0) = 0.

(b) T(u − v) = T(u) − T(v) for all u and v in V.

Examples

1/448 Matrix Transformations

Because we have based the definition of a general linear

transformation on the homogeneity and additivity properties of

matrix transformations, it follows that every matrix

transformation TA: Rn → Rm is also a linear transformation in this

more general sense with V = Rn and W = Rm.
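A quick numerical check of the two linearity properties for a matrix transformation (the matrix A and the vectors u, v below are arbitrary illustrative choices):

```python
import numpy as np

# T_A(x) = A x satisfies homogeneity and additivity.
A = np.array([[1., 2.], [0., 1.], [3., -1.]])   # T_A : R^2 -> R^3
u = np.array([1., -2.])
v = np.array([4., 0.5])
k = 3.0

T = lambda x: A @ x
assert np.allclose(T(k * u), k * T(u))      # homogeneity: T(ku) = kT(u)
assert np.allclose(T(u + v), T(u) + T(v))   # additivity: T(u + v) = T(u) + T(v)
```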

Examples

2/448 The Zero Transformation

Let V and W be any two vector spaces. The mapping T: V → W

such that T(v) = 0 for every v in V is a linear transformation called

the zero transformation. To see that T is linear, observe that T(u + v) = 0, T(u) + T(v) = 0 + 0 = 0, T(ku) = 0, and kT(u) = k0 = 0. Therefore,

T(u + v) = T(u) + T(v) and T(ku) = kT(u)

Examples

3/448 The Identity Operator

Let V be any vector space. The mapping I: V → V defined by I(v) =

v is called the identity operator on V. We will leave it for you to

verify that I is linear.

Examples

4/449 Dilation and Contraction Operators

If V is a vector space and k is any scalar, then the mapping T: V →

V given by T(x) = kx is a linear operator on V, for if c is any scalar

and if u and v are any vectors in V, then

T(cu) = k(cu) = c(ku) = cT(u)

T(u + v) = k(u + v) = ku + kv = T(u) + T(v)

If 0 < k < 1, this operator is called the contraction of V with factor k, and if k > 1, it is called the dilation of V with factor k.

Examples

5/449 A Linear Transformation from Pn to Pn+1

Let p = p(x) = c0 + c1x + … + cnx^n be a polynomial in Pn, and define the transformation T: Pn → Pn+1 by

T(p) = T(p(x)) = xp(x) = c0x + c1x^2 + … + cnx^(n+1)

This transformation is linear because for any scalar k and

polynomials p1 and p2 in Pn we have

T(kp) = T(kp(x)) = x(kp(x)) = k(xp(x)) = kT(p)

and

T(p1 + p2) = T(p1(x) + p2(x)) = x(p1(x) + p2(x))

= xp1(x) + xp2(x) = T(p1) + T(p2)
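The same verification can be sketched in code by representing a polynomial as its list of coefficients (lowest degree first), so that T(p) = xp(x) simply prepends a zero coefficient; this representation is an assumption of the sketch, not part of the text:

```python
# T(p) = x p(x) shifts coefficients: (c0, c1, ..., cn) -> (0, c0, c1, ..., cn).
def T(coeffs):
    return [0] + list(coeffs)

p1 = [1, 2]     # the polynomial 1 + 2x
p2 = [3, -1]    # the polynomial 3 - x
k = 5

# additivity: T(p1 + p2) == T(p1) + T(p2) (coefficient-wise sums)
lhs = T([a + b for a, b in zip(p1, p2)])
rhs = [a + b for a, b in zip(T(p1), T(p2))]
assert lhs == rhs
# homogeneity: T(k p1) == k T(p1)
assert T([k * a for a in p1]) == [k * a for a in T(p1)]
```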

Examples

6/449 A Linear Transformation Using the Dot Product

Let v0 be any fixed vector in Rn, and let T: Rn → R be the

transformation

𝑇 𝐱 = 𝐱 ⋅ 𝐯0

that maps a vector x to its dot product with v0. This

transformation is linear, for if k is any scalar, and if u and v are any

vectors in Rn, then it follows from properties of the dot product in

Theorem 3.2.2 that

T(ku) = (ku) ∙ v0 = k(u ∙ v0) = kT(u)

T(u + v) = (u + v) ∙ v0 = (u ∙ v0) + (v ∙ v0) = T(u) + T(v)

Examples

7/449 Transformations on Matrix Spaces

Let Mnn be the vector space of n × n matrices. In each part

determine whether the transformation is linear.

(a) T1(A) = A^T

Solution (a) It follows from parts (b) and (d) of Theorem 1.4.8

that

T1(kA) = (kA)^T = kA^T = kT1(A)

T1(A + B) = (A + B)^T = A^T + B^T = T1(A) + T1(B)

so T1 is linear.


(b) T2(A) = det(A)

Solution (b) It follows from Formula (1) of Section 2.3 that

T2(kA) = det(kA) = k^n det(A) = k^n T2(A)

Thus, T2 is not homogeneous and hence not linear if n > 1. Note

that additivity also fails because we showed in Example 1 of

Section 2.3 that det(A + B) and det(A) + det(B) are not generally

equal.
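A numerical illustration of the failure of homogeneity for the determinant (here n = 2, so det(kA) = k^2 det(A)):

```python
import numpy as np

# det is not homogeneous for n > 1: det(kA) = k^n det(A), not k det(A).
A = np.array([[1., 2.], [3., 4.]])   # det(A) = -2
k = 3.0

assert np.isclose(np.linalg.det(k * A), k**2 * np.linalg.det(A))      # -18
assert not np.isclose(np.linalg.det(k * A), k * np.linalg.det(A))     # not -6
```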

Examples

8/450 Translation Is Not Linear

Part (a) of Theorem 8.1.1 states that a linear transformation maps

0 to 0. This property is useful for identifying transformations that

are not linear. For example, if x0 is a fixed nonzero vector in R2,

then the transformation

T(x) = x + x0

has the geometric effect of translating each point x in a direction

parallel to x0 through a distance of ‖x0‖ (Figure 8.1.1). This cannot

be a linear transformation since T(0) = x0, so T does not map 0 to

0.


THEOREM 8.1.2

Let T: V → W be a linear transformation, where V is finite-

dimensional. If S = {v1, v2, …, vn} is a basis for V, then the image of

any vector v in V can be expressed as

T(v) = c1T(v1) + c2T(v2) + … + cnT(vn) (3)

where c1, c2, …, cn are the coefficients required to express v as a

linear combination of the vectors in the basis S.

Examples

10/451 Computing with Images of Basis Vectors

Consider the basis S = {v1, v2, v3} for R3, where

v1 = (1, 1, 1), v2 = (1, 1, 0), v3 = (1, 0, 0)

Let T: R3 → R2 be the linear transformation for which

T(v1) = (1, 0), T(v2) = (2, −1), T(v3) = (4,3)

Find a formula for T(x1, x2, x3), and then use that formula to

compute T(2, −3, 5).


Solution We first need to express x = (x1, x2, x3) as a linear

combination of v1, v2, and v3. If we write

(x1, x2, x3) = c1(1, 1, 1) + c2(1, 1, 0) + c3(1, 0, 0)

then on equating corresponding components, we obtain

c1 + c2 + c3 = x1

c1 + c2 = x2

c1 = x3

which yields c1 = x3, c2 = x2 − x3, c3 = x1 − x2, so


(x1, x2, x3) = x3(1, 1, 1) + (x2 − x3)(1, 1, 0) + (x1 − x2)(1, 0, 0)

= x3v1 + (x2 − x3)v2 + (x1 − x2)v3

Thus,

T(x1, x2, x3) = x3T(v1) + (x2 − x3)T(v2) + (x1 − x2)T(v3)

= x3(1, 0) + (x2 − x3)(2, −1) + (x1 − x2)(4, 3)

= (4x1 − 2x2 − x3, 3x1 − 4x2 + x3)

From this formula we obtain

T(2, −3, 5) = (9, 23)
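The computation of Example 10 can be sketched numerically: solving V c = x recovers the coordinates c1, c2, c3 of x relative to S, and Formula (3) then gives T(x):

```python
import numpy as np

# Basis vectors as columns, and their images under T as columns.
V = np.column_stack([(1., 1., 1.), (1., 1., 0.), (1., 0., 0.)])  # v1, v2, v3
Tv = np.column_stack([(1., 0.), (2., -1.), (4., 3.)])            # T(v1), T(v2), T(v3)

def T(x):
    c = np.linalg.solve(V, np.asarray(x, dtype=float))  # coordinates of x in S
    return Tv @ c                                       # c1 T(v1) + c2 T(v2) + c3 T(v3)

assert np.allclose(T((2., -3., 5.)), (9., 23.))
# agrees with the closed-form formula derived in the example
x = (1.5, -2.0, 0.25)
assert np.allclose(T(x), (4*x[0] - 2*x[1] - x[2], 3*x[0] - 4*x[1] + x[2]))
```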

DEFINITION 2

If T: V → W is a linear transformation, then the set of vectors in V

that T maps into 0 is called the kernel of T and is denoted by

ker(T). The set of all vectors in W that are images under T of at least one vector in V is called the range of T and is denoted by R(T).

Examples

13/452 Kernel and Range of a Matrix Transformation

If TA: Rn → Rm is multiplication by the m × n matrix A, then, as

discussed above, the kernel of TA is the null space of A, and the

range of TA is the column space of A.
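A sketch of this connection in code: the SVD yields an orthonormal basis for the null space of A, i.e., for ker(TA) (the tolerance 1e-10 for deciding rank is an assumption of this sketch):

```python
import numpy as np

# ker(T_A) = null space of A; compute an orthonormal basis for it via the SVD.
A = np.array([[1., 2., 3.], [2., 4., 6.]])   # rank 1, so nullity 2

U, s, Vt = np.linalg.svd(A)
rank = int(np.sum(s > 1e-10))
null_basis = Vt[rank:].T          # columns span ker(T_A)

assert rank == 1
assert null_basis.shape == (3, 2)            # nullity = n - rank = 2
assert np.allclose(A @ null_basis, 0.0)      # every kernel vector maps to 0
```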

Examples

Kernel and Range of the Zero Transformation

Let T: V → W be the zero transformation. Since T maps every vector in V into 0, it follows that ker(T) = V. Moreover, since 0 is the only image under T of vectors in V, it follows that R(T) = {0}.

Examples

15/452 Kernel and Range of the Identity Operator

Let I: V → V be the identity operator. Since I(v) = v for all vectors

in V, every vector in V is the image of some vector (namely, itself);

thus R(I) = V. Since the only vector I maps into 0 is 0, it follows

that ker(I) = {0}.

Examples

16/452 Kernel and Range of an Orthogonal Projection

Let T: R3 → R3 be the orthogonal projection onto the xy-plane. As

illustrated in Figure 8.1.2a, the points that T maps into 0 = (0, 0,

0) are precisely those on the z-axis, so ker(T) is the set of points of

the form (0, 0, z). As illustrated in Figure 8.1.2b, T maps the points

in R3 to the xy-plane, where each point in that plane is the image

of each point on the vertical line above it. Thus, R(T) is the set of

points of the form (x, y, 0).


Examples

17/453 Kernel and Range of a Rotation

Let T: R2 → R2 be the linear operator that rotates each vector in

the xy-plane through the angle θ (Figure 8.1.3). Since every vector

in the xy-plane can be obtained by rotating some vector through

the angle θ, it follows that R(T) = R2. Moreover, the only vector

that rotates into 0 is 0, so ker(T) = {0}.


THEOREM 8.1.3

If T: V → W is a linear transformation, then:

(a) The kernel of T is a subspace of V.

(b) The range of T is a subspace of W.

DEFINITION 3

Let T: V → W be a linear transformation. If the range of T is finite-

dimensional, then its dimension is called the rank of T; and if the

kernel of T is finite-dimensional, then its dimension is called the

nullity of T. The rank of T is denoted by rank(T) and the nullity of

T by nullity(T).

THEOREM 8.1.4

Dimension Theorem for Linear Transformations

If T: V → W is a linear transformation from a finite-dimensional

vector space V to a vector space W, then the range of T is finite-

dimensional, and

rank(T) + nullity(T) = dim(V) (7)

THEOREM 8.1.4

In the special case where A is an m × n matrix and TA: Rn → Rm is

multiplication by A, the kernel of TA is the null space of A, and the

range of TA is the column space of A. Thus, it follows from

Theorem 8.1.4 that

rank(TA) + nullity(TA) = n

Assuming that V is n-dimensional,

dim(R(T)) + dim(ker(T)) = n
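A numerical check of the rank/nullity relationship for a matrix transformation (the 3 × 4 matrix A below is an illustrative choice with rank 2):

```python
import numpy as np

# rank(T_A) + nullity(T_A) = n, the dimension of the domain of T_A.
A = np.array([[1., 0., 2., -1.],
              [0., 1., 1., 1.],
              [1., 1., 3., 0.]])   # 3x4, so T_A : R^4 -> R^3 and n = 4
n = A.shape[1]
rank = np.linalg.matrix_rank(A)   # dim of the column space = dim(R(T_A))
nullity = n - rank                # dim of the null space = dim(ker(T_A))

assert rank == 2 and nullity == 2   # row 3 = row 1 + row 2, so rank is 2
assert rank + nullity == n          # Theorem 8.1.4
```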

Examples

1/456 Suppose that T is a mapping whose domain is the vector

space M22. In each part, determine whether T is a linear

transformation, and if so, find its kernel.

(a) T(A) = A^2 (b) T(A) = tr(A) (c) T(A) = A + A^T

Examples

Determine whether the mapping T is a linear transformation, and if so, find its kernel.

T: R3 → R, where T(u) = ‖u‖.

Examples

5/456 Determine whether the mapping T is a linear

transformation, and if so, find its kernel.

T: M22 → M23, where B is a fixed 2 × 3 matrix and T(A) = AB.

Examples

Determine whether the mapping T is a linear transformation, and if so, find its kernel.

T: P2 → P2, where

(a) T(a0 + a1x + a2x^2) = a0 + a1(x + 1) + a2(x + 1)^2

(b) T(a0 + a1x + a2x^2) = (a0 + 1) + (a1 + 1)x + (a2 + 1)x^2

Examples

9/456 Determine whether the mapping T is a linear

transformation, and if so, find its kernel.

T: R∞ → R∞, where

T(a0, a1, a2, …, an, …) = (0, a0, a1, a2, …, an, …)

Examples

10/456 Let T: P2 → P3 be the linear transformation defined by

T(p(x)) = xp(x). Which of the following are in ker(T)?

(a) x^2 (b) 0 (c) 1 + x (d) −x

Which of the following are in R(T)?

(a) x + x^2 (b) 1 + x (c) 3 − x^2 (d) −x

Examples

13/456 In each part, use the given information to find the nullity

of the linear transformation T.

(a) T: R5 → P5 has rank 3.

(b) T: P4 → P3 has rank 1.

(a) Find T([1 2; −4 3]). (b) Find the rank and nullity of T.

Examples

17/456 Let T: P2 → R3 be the evaluation transformation at the

sequence of points −1, 0, 1. Find

(a) T(x^2) (b) ker(T) (c) R(T)

19/456 Consider the basis S = {v1, v2} for R2, where v1 = (1, 1) and

v2 = (1, 0), and let T: R2 → R2 be the linear operator for which

T(v1) = (1, −2) and T(v2) = (−4, 1)

Find a formula for T(x1, x2), and use that formula to find T(5, −3).

Examples

21/456 Consider the basis S = {v1, v2, v3} for R3, where v1 = (1, 1,

1), v2 = (1, 1, 0), and v3 = (1, 0, 0), and let T: R3 → R3 be the linear

operator for which

T(v1) = (2, −1, 4), T(v2) = (3, 0, 1), T(v3) = (−1, 5, 1)

Find a formula for T(x1, x2, x3), and use that formula to find T(2, 4,

−1).

Examples

23/457 Let T: P3 → P2 be the mapping defined by

T(a0 + a1x + a2x^2 + a3x^3) = 5a0 + a3x^2

(a) Show that T is linear.

(b) Find a basis for the kernel of T.

(c) Find a basis for the range of T.

Examples

29/457 (a) Let T: V → R3 be a linear transformation from a vector

space V to R3. Geometrically, what are the possibilities for the

range of T?

(b) Let T: R3 → W be a linear transformation from R3 to a vector space W. Geometrically, what are the possibilities for the kernel of T?

Examples

31/457 Let v1, v2, and v3 be vectors in a space V, and let T: V → R3

be a linear transformation for which

T(v1) = (1, −1, 2), T(v2) = (0, 3, 2), T(v3) = (−3, 1, 2)

Find T(2v1 − 3v2 + 4v3).

33/457 Let {v1, v2, …, vn} be a basis for a vector space V, and let T:

V → V be a linear operator. Prove that if

T(v1) = v1, T(v2) = v2, …, T(vn) = vn

then T is the identity transformation on V.

References

• Anton and Rorres. Elementary Linear Algebra, 11th Ed. © 2014

• Larson and Falvo. Elementary Linear Algebra, 6th Ed. © 2009