
Differential Equations (part 3): Systems of First-Order Differential Equations

(by Evan Dummit, 2012, v. 1.00)

Contents

1 Systems of First-Order Linear Differential Equations
  1.1 General Theory of (First-Order) Linear Systems
  1.2 Eigenvalue Method (for diagonalizable coefficient matrices)
  1.3 Eigenvalue Method (for general matrices)
  1.4 [Outline] Matrix Exponentials (for general coefficient matrices)

1 Systems of First-Order Linear Differential Equations

In many (perhaps most) applications of differential equations, we have not one but several quantities which change over time and interact with one another.

Example: The populations of the various species in an ecosystem.

Example: The concentrations of molecules involved in a chemical reaction.

Example: The production of goods, availability of labor, prices of supplies, and many other quantities over time in economic processes.

We would like to develop methods for solving systems of differential equations.

Alas, as is typical with differential equations, we cannot solve arbitrary systems in full generality: in fact, it is very difficult even to solve individual nonlinear differential equations, let alone a system of nonlinear equations.

The most we will be able to do in general is to solve systems of linear equations with constant coefficients, and give an existence-uniqueness theorem for general systems of linear equations.

1.1 General Theory of (First-Order) Linear Systems

Before we start our discussion of systems of linear differential equations, we first observe that we can reduce any system of linear differential equations to a system of first-order linear differential equations (in more variables): if we define new variables equal to the higher-order derivatives of our old variables, then we can rewrite the old system as a system of first-order equations (in more variables).

Example: Consider the single 3rd-order equation $y''' + y' = 0$. If we define new variables $z = y'$ and $w = y'' = z'$, then the original equation tells us that $w' = y''' = -y' = -z$. Thus, this single 3rd-order equation is equivalent to the first-order system
$$y' = z, \qquad z' = w, \qquad w' = -z.$$

Example: Consider the system $y_1'' + y_1 - y_2 = 0$ and $y_2'' + y_1' + y_2'\sin(x) = e^x$. If we define new variables $z_1 = y_1'$ and $z_2 = y_2'$, then $z_1' = y_1'' = -y_1 + y_2$ and $z_2' = y_2'' = e^x - z_1 - z_2\sin(x)$. So this system is equivalent to the first-order system
$$y_1' = z_1, \qquad y_2' = z_2, \qquad z_1' = -y_1 + y_2, \qquad z_2' = e^x - z_1 - z_2\sin(x).$$

Thus, whatever we can show about solutions of systems of first-order linear equations will carry over to arbitrary systems of linear differential equations. So we will talk only about systems of first-order linear differential equations from now on.
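To make the reduction concrete, here is a minimal numerical sketch (not part of the original notes; it assumes Python with numpy and scipy available) that solves the first example's equation $y''' + y' = 0$ by integrating the equivalent first-order system:

```python
# A minimal sketch (assumes numpy/scipy): solve y''' + y' = 0 by integrating
# the equivalent first-order system y' = z, z' = w, w' = -z.
import numpy as np
from scipy.integrate import solve_ivp

def system(x, u):
    y, z, w = u           # u = (y, y', y'')
    return [z, w, -z]     # y' = z, z' = w, w' = -z

# Initial conditions y(0) = 0, y'(0) = 1, y''(0) = 0 give the exact solution
# y(x) = sin(x), so the value at x = pi/2 should be close to 1.
sol = solve_ivp(system, (0, np.pi), [0.0, 1.0, 0.0], dense_output=True)
print(sol.sol(np.pi / 2)[0])  # approximately 1.0
```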

A system of first-order linear differential equations (with unknown functions $y_1, \dots, y_n$) has the general form
$$\begin{aligned}
y_1' &= a_{1,1}(x)\,y_1 + a_{1,2}(x)\,y_2 + \cdots + a_{1,n}(x)\,y_n + p_1(x)\\
y_2' &= a_{2,1}(x)\,y_1 + a_{2,2}(x)\,y_2 + \cdots + a_{2,n}(x)\,y_n + p_2(x)\\
&\ \vdots\\
y_n' &= a_{n,1}(x)\,y_1 + a_{n,2}(x)\,y_2 + \cdots + a_{n,n}(x)\,y_n + p_n(x)
\end{aligned}$$
for some functions $a_{i,j}(x)$ and $p_j(x)$, where $1 \le i, j \le n$.

Most of the time we will be dealing with systems with constant coefficients, in which all of the $a_{i,j}(x)$ are constant functions.

We say a first-order system is homogeneous if each of $p_1(x), p_2(x), \dots, p_n(x)$ is zero.

An initial condition for this system consists of $n$ pieces of information: $y_1(x_0) = b_1,\ y_2(x_0) = b_2,\ \dots,\ y_n(x_0) = b_n$, where $x_0$ is the starting value for $x$ and the $b_i$ are constants.

Many of the theorems about general systems of first-order linear equations are very similar to the theorems about $n$th-order linear equations.

Theorem (Homogeneous Systems): If the coefficient functions $a_{i,j}(x)$ are continuous, then the set of solutions $(y_1, y_2, \dots, y_n)$ to the homogeneous system
$$\begin{aligned}
y_1' &= a_{1,1}(x)\,y_1 + a_{1,2}(x)\,y_2 + \cdots + a_{1,n}(x)\,y_n\\
y_2' &= a_{2,1}(x)\,y_1 + a_{2,2}(x)\,y_2 + \cdots + a_{2,n}(x)\,y_n\\
&\ \vdots\\
y_n' &= a_{n,1}(x)\,y_1 + a_{n,2}(x)\,y_2 + \cdots + a_{n,n}(x)\,y_n
\end{aligned}$$
is an $n$-dimensional vector space.

The fact that the set of solutions forms a vector space is not so hard to show using the subspace criteria. The real result of this theorem, which follows from the existence-uniqueness theorem below, is that the set of solutions is $n$-dimensional.

Theorem (Existence-Uniqueness): For a system of first-order linear differential equations, if the coefficient functions $a_{i,j}(x)$ and nonhomogeneous terms $p_j(x)$ are each continuous in an interval around $x = x_0$, then the system
$$\begin{aligned}
y_1' &= a_{1,1}(x)\,y_1 + a_{1,2}(x)\,y_2 + \cdots + a_{1,n}(x)\,y_n + p_1(x)\\
y_2' &= a_{2,1}(x)\,y_1 + a_{2,2}(x)\,y_2 + \cdots + a_{2,n}(x)\,y_n + p_2(x)\\
&\ \vdots\\
y_n' &= a_{n,1}(x)\,y_1 + a_{n,2}(x)\,y_2 + \cdots + a_{n,n}(x)\,y_n + p_n(x)
\end{aligned}$$
with initial conditions $y_1(x_0) = b_1, \dots, y_n(x_0) = b_n$ has a unique solution $(y_1, y_2, \dots, y_n)$ in some (possibly smaller) interval around $x = x_0$.

Example: The system $y' = e^x\,y + \sin(x)\,y$, $z' = 3x^2\,y$ has a unique solution for every initial condition $y(a) = b_1$, $z(a) = b_2$.

Definition: Given $n$ vectors $s_1 = (y_{1,1}, y_{1,2}, \dots, y_{1,n})$, $s_2 = (y_{2,1}, y_{2,2}, \dots, y_{2,n})$, $\dots$, $s_n = (y_{n,1}, y_{n,2}, \dots, y_{n,n})$ with functions as entries, their Wronskian is defined as the determinant
$$W = \begin{vmatrix} y_{1,1} & y_{1,2} & \cdots & y_{1,n}\\ y_{2,1} & y_{2,2} & \cdots & y_{2,n}\\ \vdots & \vdots & \ddots & \vdots\\ y_{n,1} & y_{n,2} & \cdots & y_{n,n} \end{vmatrix}.$$
We say the vectors $s_1, \dots, s_n$ are linearly independent if their Wronskian is nonzero.
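As a quick illustration (a sketch not in the original notes, assuming sympy is available), the Wronskian of two hypothetical vectors of functions can be computed by placing the vectors as the rows of a matrix and taking the determinant:

```python
# A minimal sketch (assumes sympy): the Wronskian of two vectors of functions,
# placed as the rows of a matrix, as in the definition above.
import sympy as sp

x = sp.symbols('x')
s1 = [sp.exp(2*x), sp.exp(2*x)]      # hypothetical example vectors
s2 = [2*sp.exp(3*x), 3*sp.exp(3*x)]
W = sp.Matrix([s1, s2]).det()
print(sp.simplify(W))  # exp(5*x): never zero, so s1 and s2 are independent
```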

1.2 Eigenvalue Method (for diagonalizable coefficient matrices)

We now restrict our discussion to homogeneous first-order systems with constant coefficients: those of the form
$$\begin{aligned}
y_1' &= a_{1,1}\,y_1 + a_{1,2}\,y_2 + \cdots + a_{1,n}\,y_n\\
y_2' &= a_{2,1}\,y_1 + a_{2,2}\,y_2 + \cdots + a_{2,n}\,y_n\\
&\ \vdots\\
y_n' &= a_{n,1}\,y_1 + a_{n,2}\,y_2 + \cdots + a_{n,n}\,y_n.
\end{aligned}$$

We can rewrite this system in matrix form as $\vec{y}\,' = A\,\vec{y}$, where $\vec{y} = \begin{pmatrix} y_1\\ y_2\\ \vdots\\ y_n \end{pmatrix}$ and $A = \begin{pmatrix} a_{1,1} & a_{1,2} & \cdots & a_{1,n}\\ a_{2,1} & a_{2,2} & \cdots & a_{2,n}\\ \vdots & \vdots & \ddots & \vdots\\ a_{n,1} & a_{n,2} & \cdots & a_{n,n} \end{pmatrix}$.

The idea behind the so-called Eigenvalue Method is the following observation:

Observation: If $\vec{v} = \begin{pmatrix} c_1\\ c_2\\ \vdots\\ c_n \end{pmatrix}$ is an eigenvector of $A$ with eigenvalue $\lambda$, then $\vec{y} = e^{\lambda x}\,\vec{v}$ is a solution to the matrix differential system $\vec{y}\,' = A\,\vec{y}$.

Proof: Differentiating $\vec{y} = e^{\lambda x}\,\vec{v}$ with respect to $x$ gives $\vec{y}\,' = \lambda e^{\lambda x}\,\vec{v} = \lambda\,\vec{y} = A\,\vec{y}$.

Theorem: If $A$ has $n$ linearly independent eigenvectors $\vec{v}_1, \dots, \vec{v}_n$ with eigenvalues $\lambda_1, \dots, \lambda_n$, then the solutions to $\vec{y}\,' = A\,\vec{y}$ are given by $\vec{y} = c_1 e^{\lambda_1 x}\,\vec{v}_1 + c_2 e^{\lambda_2 x}\,\vec{v}_2 + \cdots + c_n e^{\lambda_n x}\,\vec{v}_n$, where $c_1, \dots, c_n$ are arbitrary constants.

Important Remark: The statement that $A$ has $n$ linearly independent eigenvectors $\vec{v}_1, \dots, \vec{v}_n$ with eigenvalues $\lambda_1, \dots, \lambda_n$ is equivalent to the statement that $A$ is diagonalizable, with $A = PDP^{-1}$ where the diagonal elements of $D$ are $\lambda_1, \dots, \lambda_n$ and the columns of $P$ are the vectors $\vec{v}_1, \dots, \vec{v}_n$.

Proof (of the Theorem): By the observation above, each of $e^{\lambda_1 x}\,\vec{v}_1,\ e^{\lambda_2 x}\,\vec{v}_2,\ \dots,\ e^{\lambda_n x}\,\vec{v}_n$ is a solution to $\vec{y}\,' = A\,\vec{y}$. We claim that they are a basis for the solution space.

We can compute the Wronskian of these solutions; after factoring out the exponentials from each column, we obtain $W = e^{(\lambda_1 + \cdots + \lambda_n)x}\begin{vmatrix} | & & |\\ \vec{v}_1 & \cdots & \vec{v}_n\\ | & & | \end{vmatrix}$. This product is nonzero because the exponential is nonzero and the vectors $\vec{v}_1, \dots, \vec{v}_n$ are linearly independent. Hence $e^{\lambda_1 x}\,\vec{v}_1,\ e^{\lambda_2 x}\,\vec{v}_2,\ \dots,\ e^{\lambda_n x}\,\vec{v}_n$ are linearly independent.

We also know by the existence-uniqueness theorem that the set of solutions to the system $\vec{y}\,' = A\,\vec{y}$ is $n$-dimensional. So since we have $n$ linearly independent elements $e^{\lambda_1 x}\,\vec{v}_1, \dots, e^{\lambda_n x}\,\vec{v}_n$ in an $n$-dimensional vector space, they are a basis.

Finally, since these solutions are a basis, all solutions are of the form $\vec{y} = c_1 e^{\lambda_1 x}\,\vec{v}_1 + c_2 e^{\lambda_2 x}\,\vec{v}_2 + \cdots + c_n e^{\lambda_n x}\,\vec{v}_n$, where $c_1, \dots, c_n$ are arbitrary constants.

By the remark, the theorem allows us to solve all homogeneous systems of linear differential equations whose coefficient matrix $A$ is diagonalizable. To do this, follow these steps:

Step 1: Write the system in the form $\vec{y}\,' = A\,\vec{y}$ for an $n \times 1$ column matrix $\vec{y}$ and an $n \times n$ matrix $A$ (if the system is not already in this form). If the system has equations which are not first order, introduce new variables to make the system first-order.

Step 2: Find the eigenvalues and eigenvectors of $A$, and check that $A$ is diagonalizable. If $A$ is diagonalizable, generate the list of $n$ linearly independent eigenvectors $\vec{v}_1, \dots, \vec{v}_n$ with corresponding eigenvalues $\lambda_1, \dots, \lambda_n$.

If there are complex-conjugate eigenvalues $\lambda$ and $\bar{\lambda}$, then the eigenvectors for $\bar{\lambda}$ are the complex conjugates of those for $\lambda$.

Step 3: Write down the general solution to the system: $\vec{y} = c_1 e^{\lambda_1 x}\,\vec{v}_1 + c_2 e^{\lambda_2 x}\,\vec{v}_2 + \cdots + c_n e^{\lambda_n x}\,\vec{v}_n$, where $c_1, \dots, c_n$ are arbitrary constants.

Note: If there are complex-conjugate eigenvalues then we generally want to write the solutions as real-valued functions. To do this, we take a linear combination: if $\lambda = a + bi$ has an eigenvector $\vec{v} = \vec{w}_1 + i\,\vec{w}_2$, then $\bar{\lambda} = a - bi$ has an eigenvector $\bar{\vec{v}} = \vec{w}_1 - i\,\vec{w}_2$ (the conjugate of $\vec{v}$). Then to obtain real-valued solutions to the system, replace the two complex-valued solutions $e^{\lambda x}\,\vec{v}$ and $e^{\bar{\lambda} x}\,\bar{\vec{v}}$ with the two real-valued solutions $e^{ax}(\vec{w}_1\cos(bx) - \vec{w}_2\sin(bx))$ and $e^{ax}(\vec{w}_1\sin(bx) + \vec{w}_2\cos(bx))$.

Step 4 (if necessary): Plug in any initial conditions and solve for $c_1, \dots, c_n$.

Example: Find all functions $y_1$ and $y_2$ such that $y_1' = y_1 - 3y_2$ and $y_2' = y_1 + 5y_2$.

Step 1: The system is $\vec{y}\,' = A\,\vec{y}$, with $\vec{y} = \begin{pmatrix} y_1\\ y_2 \end{pmatrix}$ and $A = \begin{pmatrix} 1 & -3\\ 1 & 5 \end{pmatrix}$.

Step 2: The characteristic polynomial of $A$ is $\det(tI - A) = \begin{vmatrix} t-1 & 3\\ -1 & t-5 \end{vmatrix} = (t-1)(t-5) + 3 = t^2 - 6t + 8$, so the eigenvalues are $\lambda = 2, 4$.

For $\lambda = 2$ we want $\begin{pmatrix} 1 & -3\\ 1 & 5 \end{pmatrix}\begin{pmatrix} a\\ b \end{pmatrix} = \begin{pmatrix} 2a\\ 2b \end{pmatrix}$, so that $\begin{pmatrix} a - 3b\\ a + 5b \end{pmatrix} = \begin{pmatrix} 2a\\ 2b \end{pmatrix}$. This yields $a = -3b$, so $\begin{pmatrix} -3\\ 1 \end{pmatrix}$ is an eigenvector.

For $\lambda = 4$ we want $\begin{pmatrix} 1 & -3\\ 1 & 5 \end{pmatrix}\begin{pmatrix} a\\ b \end{pmatrix} = \begin{pmatrix} 4a\\ 4b \end{pmatrix}$, so that $\begin{pmatrix} a - 3b\\ a + 5b \end{pmatrix} = \begin{pmatrix} 4a\\ 4b \end{pmatrix}$. This yields $a = -b$, so $\begin{pmatrix} -1\\ 1 \end{pmatrix}$ is an eigenvector.

Step 3: The general solution is $\begin{pmatrix} y_1\\ y_2 \end{pmatrix} = c_1\begin{pmatrix} -3\\ 1 \end{pmatrix} e^{2x} + c_2\begin{pmatrix} -1\\ 1 \end{pmatrix} e^{4x} = \begin{pmatrix} -3c_1 e^{2x} - c_2 e^{4x}\\ c_1 e^{2x} + c_2 e^{4x} \end{pmatrix}$.
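For readers who want to check such computations numerically, here is a short sketch (not part of the notes; it assumes numpy) carrying out Step 2 for the matrix in this example:

```python
# A minimal sketch (assumes numpy): find the eigenvalues/eigenvectors of A
# and verify the building blocks of the general solution.
import numpy as np

A = np.array([[1.0, -3.0],
              [1.0,  5.0]])
eigvals, eigvecs = np.linalg.eig(A)  # columns of eigvecs are eigenvectors
print(eigvals)                       # eigenvalues 2 and 4 (order may vary)

# Each building block e^{lambda x} v of the general solution satisfies
# y' = Ay precisely because A v = lambda v:
for lam, v in zip(eigvals, eigvecs.T):
    assert np.allclose(A @ v, lam * v)
```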

Example: Find all real-valued functions $y_1$ and $y_2$ such that $y_1' = y_2$ and $y_2' = -y_1$.

Step 1: The system is $\vec{y}\,' = A\,\vec{y}$, with $\vec{y} = \begin{pmatrix} y_1\\ y_2 \end{pmatrix}$ and $A = \begin{pmatrix} 0 & 1\\ -1 & 0 \end{pmatrix}$.

Step 2: The characteristic polynomial of $A$ is $\det(tI - A) = \begin{vmatrix} t & -1\\ 1 & t \end{vmatrix} = t^2 + 1$, so the eigenvalues are $\lambda = \pm i$.

For $\lambda = i$ we want $\begin{pmatrix} 0 & 1\\ -1 & 0 \end{pmatrix}\begin{pmatrix} a\\ b \end{pmatrix} = \begin{pmatrix} ia\\ ib \end{pmatrix}$, so $b = ia$ and thus $\begin{pmatrix} 1\\ i \end{pmatrix}$ is an eigenvector.

For $\lambda = -i$ we can take the complex conjugate of the eigenvector for $\lambda = i$ to see that $\begin{pmatrix} 1\\ -i \end{pmatrix}$ is an eigenvector.

Step 3: The general solution is $\begin{pmatrix} y_1\\ y_2 \end{pmatrix} = c_1\begin{pmatrix} 1\\ i \end{pmatrix} e^{ix} + c_2\begin{pmatrix} 1\\ -i \end{pmatrix} e^{-ix}$.

But we want real-valued solutions, so we need to replace the complex-valued solutions $\begin{pmatrix} 1\\ i \end{pmatrix} e^{ix}$ and $\begin{pmatrix} 1\\ -i \end{pmatrix} e^{-ix}$ with real-valued ones.

We have $\lambda = i$ (so $a = 0$ and $b = 1$) and $\vec{v} = \begin{pmatrix} 1\\ 0 \end{pmatrix} + \begin{pmatrix} 0\\ 1 \end{pmatrix} i$, so that $\vec{w}_1 = \begin{pmatrix} 1\\ 0 \end{pmatrix}$ and $\vec{w}_2 = \begin{pmatrix} 0\\ 1 \end{pmatrix}$.

Plugging into the formula in the note gives us the equivalent real-valued solutions $\begin{pmatrix} 1\\ 0 \end{pmatrix}\cos(x) - \begin{pmatrix} 0\\ 1 \end{pmatrix}\sin(x) = \begin{pmatrix} \cos(x)\\ -\sin(x) \end{pmatrix}$ and $\begin{pmatrix} 1\\ 0 \end{pmatrix}\sin(x) + \begin{pmatrix} 0\\ 1 \end{pmatrix}\cos(x) = \begin{pmatrix} \sin(x)\\ \cos(x) \end{pmatrix}$.

This gives the solution to the system as $\begin{pmatrix} y_1\\ y_2 \end{pmatrix} = c_1\begin{pmatrix} \cos(x)\\ -\sin(x) \end{pmatrix} + c_2\begin{pmatrix} \sin(x)\\ \cos(x) \end{pmatrix} = \begin{pmatrix} c_1\cos(x) + c_2\sin(x)\\ -c_1\sin(x) + c_2\cos(x) \end{pmatrix}$.
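The passage from the complex solution to the two real-valued solutions is just taking real and imaginary parts; a small sketch (assuming numpy; not from the notes) makes this explicit for the example above:

```python
# A minimal sketch (assumes numpy): the real and imaginary parts of the
# complex solution e^{ix} (1, i)^T are the two real-valued solutions above.
import numpy as np

v = np.array([1.0, 1.0j])                  # eigenvector for lambda = i
x = np.linspace(0.0, np.pi, 5)
y_complex = np.exp(1j * x)[:, None] * v    # row k is the solution at x[k]

assert np.allclose(y_complex.real, np.column_stack([np.cos(x), -np.sin(x)]))
assert np.allclose(y_complex.imag, np.column_stack([np.sin(x),  np.cos(x)]))
```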

1.3 Eigenvalue Method (for general matrices)

We would like to be able to solve systems $\vec{y}\,' = A\,\vec{y}$ where $A$ is an $n \times n$ matrix which does not have $n$ linearly independent eigenvectors (equivalently: for which $A$ is not diagonalizable).

Recall that if $\vec{v}$ is an eigenvector with eigenvalue $\lambda$, then $\vec{y} = \vec{v}\,e^{\lambda t}$ is a solution to the differential equation.

By the existence-uniqueness theorem we know that the system $\vec{y}\,' = A\,\vec{y}$ has an $n$-dimensional solution space.

So if $A$ has $n$ linearly independent eigenvectors, then we can write down the general solution to the system directly.

If $A$ has fewer than $n$ linearly independent eigenvectors, we are still missing some of the solutions, and need to construct the missing ones.

We do this via generalized eigenvectors: vectors which are not eigenvectors, but are close enough that we can use them to write down more solutions to the system $\vec{y}\,' = A\,\vec{y}$.

If $\lambda$ is a root of the characteristic equation $k$ times, we say that $\lambda$ has multiplicity $k$. If the eigenspace for $\lambda$ has dimension less than $k$, we say that $\lambda$ is defective.

Here is the procedure to follow, to solve a system $\vec{y}\,' = A\,\vec{y}$ which may be defective:

Step 1: Find the eigenvalues of the $n \times n$ matrix $A$.

Step 2: For each eigenvalue $\lambda$ (appearing as a root of the characteristic polynomial with multiplicity $k$), find a basis $\vec{v}_1, \dots, \vec{v}_j$ for the $\lambda$-eigenspace.

Step 2a: If the eigenspace is not defective (in other words, if $j = k$), then the solutions to the system coming from this eigenspace are $e^{\lambda t}\,\vec{v}_1, \dots, e^{\lambda t}\,\vec{v}_j$.

Step 2b: If the eigenspace is defective (in other words, if $j < k$), then above each eigenvector $\vec{w}$ in the basis $\vec{v}_1, \dots, \vec{v}_j$, look for a chain of generalized eigenvectors satisfying
$$\begin{aligned}
(A - \lambda I)\,\vec{w}_2 &= \vec{w}\\
(A - \lambda I)\,\vec{w}_3 &= \vec{w}_2\\
&\ \vdots\\
(A - \lambda I)\,\vec{w}_l &= \vec{w}_{l-1}.
\end{aligned}$$
Then the solutions to the system coming from this chain are
$$e^{\lambda t}\,[\vec{w}],\quad e^{\lambda t}\,[t\vec{w} + \vec{w}_2],\quad e^{\lambda t}\left[\tfrac{t^2}{2}\vec{w} + t\vec{w}_2 + \vec{w}_3\right],\quad \dots,\quad e^{\lambda t}\left[\tfrac{t^{l-1}}{(l-1)!}\vec{w} + \tfrac{t^{l-2}}{(l-2)!}\vec{w}_2 + \cdots + t\vec{w}_{l-1} + \vec{w}_l\right].$$

Note: If the $\lambda$-eigenspace is 1-dimensional, then there is only one chain, and it will always be of length $k$, where $k$ is the multiplicity of $\lambda$. If the $\lambda$-eigenspace has dimension greater than 1, then the chains may have different lengths, and it may be necessary to toss out some elements of some chains, as they may lead to linearly dependent solutions.

Step 3: If $\vec{y}_1, \dots, \vec{y}_n$ are the $n$ solution functions obtained in step 2, then the general solution to the system $\vec{y}\,' = A\,\vec{y}$ is $\vec{y} = c_1\,\vec{y}_1 + c_2\,\vec{y}_2 + \cdots + c_n\,\vec{y}_n$.

If there are complex-conjugate eigenvalues, then we generally want to write the solutions as real-valued functions. To obtain real-valued solutions to the system from a pair of complex-conjugate solutions $y$ and $\bar{y}$, replace $y$ and $\bar{y}$ with $\operatorname{Re}(y)$ and $\operatorname{Im}(y)$, the real and imaginary parts of $y$.

Example: Find the general solution to the system $y_1' = 5y_1 - 9y_2$, $y_2' = 4y_1 - 7y_2$.

Step 1: In matrix form this is $\vec{y}\,' = A\,\vec{y}$, where $A = \begin{pmatrix} 5 & -9\\ 4 & -7 \end{pmatrix}$.

We have $A - tI = \begin{pmatrix} 5-t & -9\\ 4 & -7-t \end{pmatrix}$, so $\det(A - tI) = (5-t)(-7-t) - (4)(-9) = 1 + 2t + t^2 = (t+1)^2$. Thus there is a double eigenvalue $\lambda = -1$.

To compute the eigenvectors for $\lambda = -1$, we want $\begin{pmatrix} 6 & -9\\ 4 & -6 \end{pmatrix}\begin{pmatrix} a\\ b \end{pmatrix} = \begin{pmatrix} 0\\ 0 \end{pmatrix}$, so that $2a - 3b = 0$. So the eigenvectors are of the form $\begin{pmatrix} a\\ \frac{2}{3}a \end{pmatrix}$, and the eigenspace is 1-dimensional with a basis given by $\begin{pmatrix} 3\\ 2 \end{pmatrix}$.

Step 2: There is only one independent eigenvector (for the double eigenvalue $\lambda = -1$), so we need to compute a chain of generalized eigenvectors to find the remaining solution to the system.

We start with $\vec{w} = \begin{pmatrix} 3\\ 2 \end{pmatrix}$, and also have $A - \lambda I = A + I = \begin{pmatrix} 6 & -9\\ 4 & -6 \end{pmatrix}$.

We want to find $\vec{w}_2 = \begin{pmatrix} a\\ b \end{pmatrix}$ with $\begin{pmatrix} 6 & -9\\ 4 & -6 \end{pmatrix}\begin{pmatrix} a\\ b \end{pmatrix} = \begin{pmatrix} 3\\ 2 \end{pmatrix}$.

Dividing the first row by 3 and the second row by 2 shows that both rows give $2a - 3b = 1$, so (for example) we can take $a = 2$ and $b = 1$. This gives our choice of $\vec{w}_2 = \begin{pmatrix} 2\\ 1 \end{pmatrix}$.

Now we have the chain of the proper length (namely 2), so we can write down the two solutions for this eigenspace: they are $\begin{pmatrix} 3\\ 2 \end{pmatrix} e^{-t}$ and $\begin{pmatrix} 3\\ 2 \end{pmatrix} te^{-t} + \begin{pmatrix} 2\\ 1 \end{pmatrix} e^{-t}$.

Step 3: We thus obtain the general solution
$$\begin{pmatrix} y_1\\ y_2 \end{pmatrix} = c_1\begin{pmatrix} 3\\ 2 \end{pmatrix} e^{-t} + c_2\left[\begin{pmatrix} 3\\ 2 \end{pmatrix} te^{-t} + \begin{pmatrix} 2\\ 1 \end{pmatrix} e^{-t}\right].$$

Slightly more explicitly, this is
$$y_1 = (3c_1 + 2c_2 + 3c_2 t)\,e^{-t}, \qquad y_2 = (2c_1 + c_2 + 2c_2 t)\,e^{-t}.$$
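As a cross-check (a sketch assuming sympy, which the notes do not use), the chain computed by hand above is exactly what the Jordan form packages into the columns of $P$:

```python
# A minimal sketch (assumes sympy): A = P J P^{-1}, where J is a single
# Jordan block for lambda = -1 and the columns of P form a chain w, w2
# with (A + I) w2 = w.
import sympy as sp

A = sp.Matrix([[5, -9],
               [4, -7]])
P, J = A.jordan_form()
print(J)  # Matrix([[-1, 1], [0, -1]])
print((A + sp.eye(2)) * P.col(1) - P.col(0))  # the zero vector
```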
Example: Find the general solution to the system $y_1' = 4y_1 - y_2 - 2y_3$, $y_2' = 2y_1 + y_2 - 2y_3$, $y_3' = 5y_1 - 3y_3$.

Step 1: In matrix form this is $\vec{y}\,' = A\,\vec{y}$, where $A = \begin{pmatrix} 4 & -1 & -2\\ 2 & 1 & -2\\ 5 & 0 & -3 \end{pmatrix}$.

Expanding along the bottom row gives $\det(A - tI) = 5\begin{vmatrix} -1 & -2\\ 1-t & -2 \end{vmatrix} + (-3-t)\begin{vmatrix} 4-t & -1\\ 2 & 1-t \end{vmatrix} = 2 - t + 2t^2 - t^3 = (2-t)(1+t^2)$. Thus the eigenvalues are $\lambda = 2, i, -i$.

$\lambda = 2$: For $\lambda = 2$, we want $\begin{pmatrix} 2 & -1 & -2\\ 2 & -1 & -2\\ 5 & 0 & -5 \end{pmatrix}\begin{pmatrix} a\\ b\\ c \end{pmatrix} = \begin{pmatrix} 0\\ 0\\ 0 \end{pmatrix}$, so that $2a - b - 2c = 0$ and $5a - 5c = 0$. Hence $c = a$ and $b = 2a - 2c = 0$, so the eigenvectors are of the form $\begin{pmatrix} a\\ 0\\ a \end{pmatrix}$. So the eigenspace is 1-dimensional, and has a basis given by $\begin{pmatrix} 1\\ 0\\ 1 \end{pmatrix}$.

$\lambda = i$: For $\lambda = i$, we want $\begin{pmatrix} 4-i & -1 & -2\\ 2 & 1-i & -2\\ 5 & 0 & -3-i \end{pmatrix}\begin{pmatrix} a\\ b\\ c \end{pmatrix} = \begin{pmatrix} 0\\ 0\\ 0 \end{pmatrix}$. Subtracting the first row from the second row, and then dividing the new second row by $-(2-i)$, yields $\begin{pmatrix} 4-i & -1 & -2\\ 1 & -1 & 0\\ 5 & 0 & -3-i \end{pmatrix}\begin{pmatrix} a\\ b\\ c \end{pmatrix} = \begin{pmatrix} 0\\ 0\\ 0 \end{pmatrix}$.

Hence $a - b = 0$, so $b = a$; then the third row gives $5a + (-3-i)c = 0$, so $c = \dfrac{5}{3+i}\,a = \dfrac{5(3-i)}{10}\,a = \dfrac{3-i}{2}\,a$. So the eigenvectors are of the form $\begin{pmatrix} a\\ a\\ \frac{3-i}{2}a \end{pmatrix}$, the eigenspace is 1-dimensional, and a basis is given by $\begin{pmatrix} 2\\ 2\\ 3-i \end{pmatrix}$.

$\lambda = -i$: For $\lambda = -i$ we can just take the conjugate of the eigenvectors for $\lambda = i$, so a basis is given by $\begin{pmatrix} 2\\ 2\\ 3+i \end{pmatrix}$.

Step 2: The eigenspaces are all the proper sizes, so we do not need to compute any generalized eigenvectors.

Step 3: The general solution is $\begin{pmatrix} y_1\\ y_2\\ y_3 \end{pmatrix} = c_1\begin{pmatrix} 1\\ 0\\ 1 \end{pmatrix} e^{2t} + c_2\begin{pmatrix} 2\\ 2\\ 3-i \end{pmatrix} e^{it} + c_3\begin{pmatrix} 2\\ 2\\ 3+i \end{pmatrix} e^{-it}$.

With real-valued functions: $\begin{pmatrix} y_1\\ y_2\\ y_3 \end{pmatrix} = c_1\begin{pmatrix} 1\\ 0\\ 1 \end{pmatrix} e^{2t} + c_2\begin{pmatrix} 2\cos(t)\\ 2\cos(t)\\ 3\cos(t)+\sin(t) \end{pmatrix} + c_3\begin{pmatrix} 2\sin(t)\\ 2\sin(t)\\ -\cos(t)+3\sin(t) \end{pmatrix}$.

Slightly more explicitly, this is
$$\begin{aligned}
y_1 &= c_1 e^{2t} + 2c_2\cos(t) + 2c_3\sin(t)\\
y_2 &= 2c_2\cos(t) + 2c_3\sin(t)\\
y_3 &= c_1 e^{2t} + c_2(3\cos(t) + \sin(t)) + c_3(-\cos(t) + 3\sin(t)).
\end{aligned}$$

Example: Find the general solution to the system $y_1' = 4y_1 - y_3$, $y_2' = 2y_1 + 2y_2 - y_3$, $y_3' = 3y_1 + y_2$.

Step 1: In matrix form this is $\vec{y}\,' = A\,\vec{y}$, where $A = \begin{pmatrix} 4 & 0 & -1\\ 2 & 2 & -1\\ 3 & 1 & 0 \end{pmatrix}$.

Expanding along the top row gives $\det(A - tI) = (4-t)\begin{vmatrix} 2-t & -1\\ 1 & -t \end{vmatrix} + (-1)\begin{vmatrix} 2 & 2-t\\ 3 & 1 \end{vmatrix} = 8 - 12t + 6t^2 - t^3 = (2-t)^3$. Thus there is a triple eigenvalue $\lambda = 2$.

To compute the eigenvectors for $\lambda = 2$, we want $\begin{pmatrix} 2 & 0 & -1\\ 2 & 0 & -1\\ 3 & 1 & -2 \end{pmatrix}\begin{pmatrix} a\\ b\\ c \end{pmatrix} = \begin{pmatrix} 0\\ 0\\ 0 \end{pmatrix}$, so that $2a - c = 0$ and $3a + b - 2c = 0$. Hence $c = 2a$ and $b = 2c - 3a = a$, so the eigenvectors are of the form $\begin{pmatrix} a\\ a\\ 2a \end{pmatrix}$. So the eigenspace is 1-dimensional, and has a basis given by $\begin{pmatrix} 1\\ 1\\ 2 \end{pmatrix}$.

Step 2: There is only one independent eigenvector (for the triple eigenvalue $\lambda = 2$), so we need to compute a chain of generalized eigenvectors to find the remaining 2 solutions to the system.

We start with $\vec{w} = \begin{pmatrix} 1\\ 1\\ 2 \end{pmatrix}$, and also have $A - \lambda I = \begin{pmatrix} 2 & 0 & -1\\ 2 & 0 & -1\\ 3 & 1 & -2 \end{pmatrix}$.

First we want to find $\vec{w}_2 = \begin{pmatrix} a\\ b\\ c \end{pmatrix}$ with $(A - \lambda I)\,\vec{w}_2 = \vec{w}$; row-reducing the corresponding augmented matrix $\left(\begin{array}{ccc|c} 2 & 0 & -1 & 1\\ 2 & 0 & -1 & 1\\ 3 & 1 & -2 & 2 \end{array}\right)$ shows the system is equivalent to $2a - c = 1$ and $-a + b = 0$, so one possibility is $\vec{w}_2 = \begin{pmatrix} 1\\ 1\\ 1 \end{pmatrix}$.

Now we want to find $\vec{w}_3 = \begin{pmatrix} d\\ e\\ f \end{pmatrix}$ with $(A - \lambda I)\,\vec{w}_3 = \vec{w}_2$; the same row-reduction applied to the augmented matrix $\left(\begin{array}{ccc|c} 2 & 0 & -1 & 1\\ 2 & 0 & -1 & 1\\ 3 & 1 & -2 & 1 \end{array}\right)$ shows this system is equivalent to $2d - f = 1$ and $-d + e = -1$, so one possibility is $\vec{w}_3 = \begin{pmatrix} 1\\ 0\\ 1 \end{pmatrix}$.

Now we have the chain of the proper length (namely 3), so we can write down the three solutions for this eigenspace: they are
$$\begin{pmatrix} 1\\ 1\\ 2 \end{pmatrix} e^{2t},\qquad \begin{pmatrix} 1\\ 1\\ 2 \end{pmatrix} te^{2t} + \begin{pmatrix} 1\\ 1\\ 1 \end{pmatrix} e^{2t},\qquad \begin{pmatrix} 1\\ 1\\ 2 \end{pmatrix} \frac{t^2}{2}e^{2t} + \begin{pmatrix} 1\\ 1\\ 1 \end{pmatrix} te^{2t} + \begin{pmatrix} 1\\ 0\\ 1 \end{pmatrix} e^{2t}.$$

Step 3: We thus obtain the general solution as the (rather unwieldy and complicated) expression
$$\begin{pmatrix} y_1\\ y_2\\ y_3 \end{pmatrix} = c_1\begin{pmatrix} 1\\ 1\\ 2 \end{pmatrix} e^{2t} + c_2\left[\begin{pmatrix} 1\\ 1\\ 2 \end{pmatrix} te^{2t} + \begin{pmatrix} 1\\ 1\\ 1 \end{pmatrix} e^{2t}\right] + c_3\left[\begin{pmatrix} 1\\ 1\\ 2 \end{pmatrix}\frac{t^2}{2}e^{2t} + \begin{pmatrix} 1\\ 1\\ 1 \end{pmatrix} te^{2t} + \begin{pmatrix} 1\\ 0\\ 1 \end{pmatrix} e^{2t}\right].$$

Slightly more explicitly, this is
$$\begin{aligned}
y_1 &= (c_1 + c_2 + c_3 + c_2 t + c_3 t + \tfrac{1}{2}c_3 t^2)\,e^{2t}\\
y_2 &= (c_1 + c_2 + c_2 t + c_3 t + \tfrac{1}{2}c_3 t^2)\,e^{2t}\\
y_3 &= (2c_1 + c_2 + c_3 + 2c_2 t + c_3 t + c_3 t^2)\,e^{2t}.
\end{aligned}$$

1.4 [Outline] Matrix Exponentials (for general coefficient matrices)

If the coefficient matrix is not diagonalizable, life is more difficult, as we cannot generate a basis for the solution space using eigenvectors alone.

We can still solve the system using chains of generalized eigenvectors. However, there are some slightly cumbersome technical problems that can occur when a defective eigenspace has more than one independent eigenvector: in order to generate enough solutions, in general one must construct a chain above each element of a basis for a defective eigenspace. But some of the generalized eigenvectors obtained from these chains may yield linearly dependent solution functions.

Thus, we would like to know if there is another method.

If we consider the $1 \times 1$ system $y' = ky$ with the initial condition $y(0) = C$, we know that the general solution is $y(x) = e^{kx}\,C$. We would like to find some way to extend this result to $n \times n$ systems.

This leads us to define the exponential of a matrix.

Definition: If $A$ is an $n \times n$ matrix, then we define the exponential of $A$, denoted $e^A$, to be the infinite sum $e^A = \displaystyle\sum_{n=0}^{\infty} \frac{A^n}{n!}$.

The definition is motivated by the Taylor series for the exponential of a real or complex number $z$; namely, $e^z = \displaystyle\sum_{n=0}^{\infty} \frac{z^n}{n!}$.

Remark: In order for this definition to make sense, we need to know that the infinite sum actually converges, which is not entirely trivial to prove.

Theorem: The infinite sum $e^A = \displaystyle\sum_{n=0}^{\infty} \frac{A^n}{n!}$ converges for every matrix $A$.

Theorem: If $A$ is an $n \times n$ matrix, then the unique solution to the initial value problem $\vec{y}\,' = A\,\vec{y}$ with $\vec{y}(0) = \vec{y}_0$ is given by $\vec{y}(x) = e^{Ax}\,\vec{y}_0$.

The proof of this result follows from showing that the derivative $\frac{d}{dx}[e^{Ax}]$ of the matrix exponential $e^{Ax}$ is $A\,e^{Ax}$, which can be done by differentiating the power series defining the matrix exponential.

Therefore, we see that $\vec{y}(x) = e^{Ax}\,\vec{y}_0$ is a solution to the initial value problem (since it satisfies the differential equation and the initial condition). The uniqueness part of the existence-uniqueness theorem guarantees it is the only solution.

So we see that the matrix exponential allows us to solve any homogeneous first-order linear system. All that remains is actually to compute the exponential of a matrix. In general, this is not so straightforward: the general computation requires the Jordan Canonical Form. In the special case where $A$ is diagonalizable, the computation is simpler.

Proposition: For any invertible matrix $P$, $e^{P^{-1}AP} = P^{-1}\,e^A\,P$.

Proof: $e^{P^{-1}AP} = \displaystyle\sum_{n=0}^{\infty} \frac{(P^{-1}AP)^n}{n!} = P^{-1}\left(\sum_{n=0}^{\infty} \frac{A^n}{n!}\right)P = P^{-1}\,e^A\,P$, where the middle step uses the fact that $(P^{-1}AP)^n = P^{-1}(A^n)P$.

Proposition: If $D$ is a diagonal matrix with diagonal entries $\lambda_1, \dots, \lambda_n$, then $e^D$ is the diagonal matrix with diagonal entries $e^{\lambda_1}, \dots, e^{\lambda_n}$.

Proof: This is a computation from the definition of the matrix exponential.

Putting these two results together shows that if $A$ is diagonalizable, say $A = PDP^{-1}$ where $D$ is the diagonal matrix with diagonal entries $\lambda_1, \dots, \lambda_n$, then $e^{Ax} = P\begin{pmatrix} e^{\lambda_1 x} & & \\ & \ddots & \\ & & e^{\lambda_n x} \end{pmatrix} P^{-1}$.

Example: Find $e^{Ax}$, if $A = \begin{pmatrix} 0 & 2\\ -3 & 5 \end{pmatrix}$.

First we (attempt to) diagonalize the matrix $A$. We calculate $\det(tI - A) = t(t-5) + 6 = (t-2)(t-3)$, so the eigenvalues are $\lambda = 2, 3$.

For $\lambda = 2$ we need to solve $\begin{pmatrix} 0 & 2\\ -3 & 5 \end{pmatrix}\begin{pmatrix} a\\ b \end{pmatrix} = 2\begin{pmatrix} a\\ b \end{pmatrix}$, so $\begin{pmatrix} 2b\\ -3a+5b \end{pmatrix} = \begin{pmatrix} 2a\\ 2b \end{pmatrix}$ and thus $a = b$. The eigenvectors are of the form $\begin{pmatrix} b\\ b \end{pmatrix}$, so a basis for the $\lambda = 2$ eigenspace is $\begin{pmatrix} 1\\ 1 \end{pmatrix}$.

For $\lambda = 3$ we need to solve $\begin{pmatrix} 0 & 2\\ -3 & 5 \end{pmatrix}\begin{pmatrix} a\\ b \end{pmatrix} = 3\begin{pmatrix} a\\ b \end{pmatrix}$, so $\begin{pmatrix} 2b\\ -3a+5b \end{pmatrix} = \begin{pmatrix} 3a\\ 3b \end{pmatrix}$ and thus $a = \frac{2}{3}b$. The eigenvectors are of the form $\begin{pmatrix} \frac{2}{3}b\\ b \end{pmatrix}$, so a basis for the $\lambda = 3$ eigenspace is $\begin{pmatrix} 2\\ 3 \end{pmatrix}$.

Since the eigenvalues are distinct we know that $A$ is diagonalizable: we can write $A = PDP^{-1}$ for $D = \begin{pmatrix} 2 & 0\\ 0 & 3 \end{pmatrix}$ and $P = \begin{pmatrix} 1 & 2\\ 1 & 3 \end{pmatrix}$. We also compute $P^{-1} = \begin{pmatrix} 3 & -2\\ -1 & 1 \end{pmatrix}$.

Now we compute $e^{Dx} = \begin{pmatrix} e^{2x} & 0\\ 0 & e^{3x} \end{pmatrix}$ from the formula for exponentiating diagonal matrices.

Finally we have $e^{Ax} = P\,e^{Dx}\,P^{-1} = \begin{pmatrix} 1 & 2\\ 1 & 3 \end{pmatrix}\begin{pmatrix} e^{2x} & 0\\ 0 & e^{3x} \end{pmatrix}\begin{pmatrix} 3 & -2\\ -1 & 1 \end{pmatrix} = \begin{pmatrix} 3e^{2x} - 2e^{3x} & -2e^{2x} + 2e^{3x}\\ 3e^{2x} - 3e^{3x} & -2e^{2x} + 3e^{3x} \end{pmatrix}$.
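For comparison, scipy computes matrix exponentials directly; the following sketch (assuming scipy/numpy; not part of the notes) checks the formula just derived and uses it to solve an initial value problem:

```python
# A minimal sketch (assumes numpy/scipy): compare scipy's expm(A x) with the
# diagonalization formula computed above, then solve y' = Ay, y(0) = y0.
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 2.0],
              [-3.0, 5.0]])
x = 0.7  # a hypothetical sample point

lhs = expm(A * x)
rhs = np.array([[3*np.exp(2*x) - 2*np.exp(3*x), -2*np.exp(2*x) + 2*np.exp(3*x)],
                [3*np.exp(2*x) - 3*np.exp(3*x), -2*np.exp(2*x) + 3*np.exp(3*x)]])
print(np.allclose(lhs, rhs))  # True

y0 = np.array([1.0, -1.0])
print(lhs @ y0)  # y(0.7) for the initial value problem y' = Ay, y(0) = y0
```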

Advanced Remark: More generally, there is a formula for the exponential $e^{Jx}$ for any Jordan block matrix $J = \begin{pmatrix} \lambda & 1 & & \\ & \lambda & \ddots & \\ & & \ddots & 1\\ & & & \lambda \end{pmatrix}$. Using this formula one can compute $e^{Ax}$ for any matrix $A$, given the Jordan Canonical Form of $A$.

The idea is to write $J = \lambda I + N$, where $N$ is the matrix with 1s just above the diagonal and 0s elsewhere. One can check by directly multiplying out that $N^n$ is the zero matrix.

Therefore, the Binomial Theorem says $(Jx)^k = x^k(\lambda I + N)^k = x^k\left[\lambda^k I + \binom{k}{1}\lambda^{k-1}N^1 + \cdots + \binom{k}{n}\lambda^{k-n}N^n + \cdots\right]$, but since $N^n$ is the zero matrix, only the terms up through $N^{n-1}$ are nonzero. Then one can plug these expressions into the infinite sum defining $e^{Jx}$ and actually evaluate the infinite sum explicitly.

Eventually, one ends up with the answer $e^{Jx} = \begin{pmatrix} e^{\lambda x} & xe^{\lambda x} & \frac{x^2}{2}e^{\lambda x} & \cdots & \frac{x^{n-1}}{(n-1)!}e^{\lambda x}\\ & e^{\lambda x} & xe^{\lambda x} & \ddots & \vdots\\ & & \ddots & \ddots & \frac{x^2}{2}e^{\lambda x}\\ & & & e^{\lambda x} & xe^{\lambda x}\\ & & & & e^{\lambda x} \end{pmatrix}$, if $J$ is $n \times n$.
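A quick symbolic check of this pattern (a sketch assuming sympy, and that its symbolic matrix exponential handles this block; none of this is claimed in the notes) for a 3x3 Jordan block with $\lambda = 2$:

```python
# A minimal sketch (assumes sympy): exponentiate the 3x3 Jordan block with
# lambda = 2 and compare with the pattern above.
import sympy as sp

x = sp.symbols('x')
J = sp.Matrix([[2, 1, 0],
               [0, 2, 1],
               [0, 0, 2]])
sp.pprint(sp.simplify((J * x).exp()))
# expected: e^{2x} * [[1, x, x**2/2], [0, 1, x], [0, 0, 1]]
```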

Well, you're at the end of my handout. Hope it was helpful.


Copyright notice: This material is copyright Evan Dummit, 2012. You may not reproduce or distribute this material
without my express permission.
