

Math 130A HW3 Selected Solutions


C T

3 Chapter 3
3.1 Notes
In this chapter we completely solve and classify all autonomous 2-dimensional linear systems of ODEs,
namely equations of the form
$$X' = \begin{pmatrix} x \\ y \end{pmatrix}' = A \begin{pmatrix} x \\ y \end{pmatrix} = AX.$$
How do we do this? First, we reduce to some simple cases, called canonical cases, in which the
solution is more or less apparent. Then we reduce the more general cases to these by a coordinate
transformation. It turns out that all real 2 × 2 matrices can be reduced to one of exactly 3 canonical
forms (and if we consider complex-valued solutions, they really can be reduced to one of exactly 2).
The solution to the canonical form equation enables us to see the phase portrait of our solution.
The phase portrait of a differential equation is simply the graph of several different solution
curves in the plane (a solution curve for the differential equation, recall, is given simply by considering
a solution $\begin{pmatrix} x(t) \\ y(t) \end{pmatrix}$ as a parametric curve in the plane). The reason why we care is that the solution
curves all behave in a certain way which is immediately apparent from looking at the phase portrait.
This also allows us to classify the stability of the equilibrium points. For 2D linear systems, the point
(0, 0) is always an equilibrium point, and there are more equilibrium points precisely when det A = 0.
This is simply because the set of equilibrium points is the solution set to AX = 0, which is a linear
equation. This means the set of all equilibrium points is either only (0, 0), or a whole line of solutions,
or the whole plane (but the only time this last case occurs is if A is the zero matrix, which makes for
a rather boring differential equation). This is where all that linear algebra stuff comes in.
When (0, 0) is the only equilibrium point, all the interesting stuff happens in a neighborhood of it.
This is what exploring phase portraits is all about. Like in 1D equations, solutions may go into (0, 0)
(a sink), or out of (0, 0) (a source). But solutions can do much more interesting things near (0, 0) in 2
dimensions, because there’s a lot more room to move around in.
It turns out that the nature of the equilibrium solutions and the phase portraits are completely de-
termined by the nature of the eigenvalues of the 2 × 2 matrix. This is because the canonical forms of
the matrices are completely determined by the eigenvalues. By applying the appropriate coordinate
transformation, the phase portrait of a general 2D linear system is simply some combination of rescal-
ing, rotation, and shearing of the phase portrait of its corresponding canonical solution, which does
not change any features we are interested in.
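
To see this concretely on a machine, here is a minimal sketch (in Python with numpy; the function name `classify` and its tolerance handling are mine, not from the book) that reads the type of the equilibrium at the origin straight off the eigenvalues:

```python
import numpy as np

def classify(A, tol=1e-12):
    """Classify the equilibrium of X' = AX at the origin by eigenvalues.

    A rough sketch of the chapter's classification; borderline and
    degenerate cases are only partially handled.
    """
    l1, l2 = np.linalg.eigvals(np.asarray(A, dtype=float))
    if abs(l1.imag) > tol:                 # complex conjugate pair
        alpha = l1.real
        if abs(alpha) < tol:
            return "center"
        return "spiral source" if alpha > 0 else "spiral sink"
    l1, l2 = sorted((l1.real, l2.real))    # real eigenvalues
    if l1 < -tol and l2 > tol:
        return "saddle"
    if l1 > tol:
        return "source"
    if l2 < -tol:
        return "sink"
    return "degenerate (a zero eigenvalue)"

for A in ([[0, 1], [1, 0]], [[1, 1], [-1, 0]], [[1, 1], [-1, 3]]):
    print(A, "->", classify(A))
```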

3.2 Recap of the Three Canonical Forms


Let’s recap the three (and only three) real canonical forms of real matrices.

3.2.1 Real and Distinct Eigenvalues


If A is a 2×2 matrix with real, distinct eigenvalues (or a scalar matrix $\begin{pmatrix} \lambda & 0 \\ 0 & \lambda \end{pmatrix}$, the only time eigenvalues
can be repeated without being the bad case below), then the canonical form associated to this is the
diagonal matrix
$$D = \begin{pmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{pmatrix}.$$

Actually, this canonical form is not unique! For example, $\begin{pmatrix} \lambda_2 & 0 \\ 0 & \lambda_1 \end{pmatrix}$ is also a canonical form. The
transformation T which changes A into the diagonal matrix is the matrix given by sticking the corre-
sponding eigenvectors side by side: the first entry on the diagonal should be the eigenvalue associated
with the eigenvector in the first column of T and the second should be associated with the eigenvector in
the second column: if $A\xi_1 = \lambda_1 \xi_1$ and $A\xi_2 = \lambda_2 \xi_2$ then $T = [\xi_1\ \xi_2]$ transforms A into D, that is,
$D = T^{-1} A T$. If we decided to use the other canonical form, the corresponding transformation would
be $[\xi_2\ \xi_1]$ instead. But the upshot is, however you find D, it will always be true that $A = T D T^{-1}$, and
the choice of ordering is simply choosing whether you want the x-axis or the y-axis to correspond to
the axis defined by the first eigenvector. However, the choice does affect how you draw the phase por-
trait for the canonical solution. The canonical solution (i.e. the general solution to the canonical equation
$Y' = DY$) is
$$Y(t) = c_1 e^{\lambda_1 t} \begin{pmatrix} 1 \\ 0 \end{pmatrix} + c_2 e^{\lambda_2 t} \begin{pmatrix} 0 \\ 1 \end{pmatrix}$$
which means, as we know from the last homework, that the general solution to the original problem
$X' = AX$ is
$$X(t) = T Y(t) = c_1 e^{\lambda_1 t} \xi_1 + c_2 e^{\lambda_2 t} \xi_2.$$
The reason is simple: T (1, 0) is the first column of T , which, as we just explicitly constructed, is the
first eigenvector ξ1 . The rest of that c1 , c2 and e stuff are just scalars. T (0, 1) is the second column of
T , and hence the second eigenvector ξ2 .
The matrix T is not unique, either, and in fact, much more non-unique than D. Since any scalar
multiple of an eigenvector is an eigenvector, if you rescale the columns of T by (possibly different)
nonzero factors, D still remains the same. Rescaling the columns of T is equivalent to multiplying
T on the right by a diagonal matrix K. Then $(TK)^{-1} A (TK) = K^{-1} T^{-1} A T K = K^{-1} D K = D$, since
diagonal matrices always commute (because they just multiply componentwise).
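
All of this is easy to check numerically. A small sketch (assuming numpy; note that `np.linalg.eig` already hands you the eigenvectors as the columns of T):

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [1.0, 0.0]])               # real, distinct eigenvalues 1 and -1

eigvals, T = np.linalg.eig(A)            # columns of T are eigenvectors
D = np.linalg.inv(T) @ A @ T
print("eigenvalues:", eigvals)
print("D = T^{-1} A T =\n", np.round(D, 12))

# Rescaling the columns of T (right-multiplying by a diagonal K)
# leaves D unchanged, exactly as claimed above.
K = np.diag([3.0, -0.5])
print(np.allclose(np.linalg.inv(T @ K) @ A @ (T @ K), D))   # True
```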

3.2.2 Complex Eigenvalues

The second canonical form is for the case of complex eigenvalues. For A a 2×2 matrix with real entries
(the only ones we consider in this class), if the eigenvalues are complex, they must occur in conjugate
pairs. Namely, if λ1 = α + iβ then λ2 = α − iβ. In some sense, this means that one eigenvalue already
contains all the “information,” or in essence, the two-dimensionality of the system has shifted from the
two entries of a vector into a single complex number. If ξ is the complex eigenvector corresponding
to α + iβ then we can say, in essence, that ξ also contains all the information of the system. Thus
we take its real and imaginary parts, η1 = Re(ξ) and η2 = Im(ξ). These two vectors, which are not
eigenvectors—a point which bears repeating:

η1 and η2 are not eigenvectors of A!!!


—form a transformation $T = [\eta_1\ \eta_2]$ which takes A into the canonical form
$$C = T^{-1} A T = \begin{pmatrix} \alpha & \beta \\ -\beta & \alpha \end{pmatrix}$$

(note that this matrix is not diagonal!). To be honest, sometimes they do call $\eta_1$ and $\eta_2$ generalized
eigenvectors, but I give the big warning because applying A to one of these vectors is not going to be $\lambda$
times the vector. If we did want the diagonal matrix, we would in fact use both complex eigenvectors,
and the diagonal matrix would consist of the two complex eigenvalues $\alpha + i\beta$ and $\alpha - i\beta$. However, the
problem is, the fully complex solution is, by its nature, 4-dimensional: two for the individual complex
entries, and two for the components of the vector. It is reasonable to want a real solution to a real
system of differential equations, so this less-simple canonical form appropriately “extracts” the real
solutions we want.¹ Here’s another way of thinking about it: the diagonal matrices form a 2-dimensional
subspace (a plane inside the 4-space) of all 2 × 2 matrices, and so do matrices which look like C; but
matrices that look like C are like a “skewed” or “rotated” version of this plane.

¹It turns out that matrices that look like C, namely the same real number on the main diagonal and opposite real numbers
along the other diagonal, behave very much like complex numbers. That these are associated with complex solutions is not
a coincidence.
The canonical solution is
$$Y(t) = c_1 e^{\alpha t} \begin{pmatrix} \cos(\beta t) \\ -\sin(\beta t) \end{pmatrix} + c_2 e^{\alpha t} \begin{pmatrix} \sin(\beta t) \\ \cos(\beta t) \end{pmatrix} = \begin{pmatrix} \cos(\beta t) & \sin(\beta t) \\ -\sin(\beta t) & \cos(\beta t) \end{pmatrix} \left( c_1 e^{\alpha t} \begin{pmatrix} 1 \\ 0 \end{pmatrix} + c_2 e^{\alpha t} \begin{pmatrix} 0 \\ 1 \end{pmatrix} \right).$$
We will see why more clearly when we talk about the exponential of a matrix (that’s a couple of home-
works away from now). But for now, you can simply check for yourself that it works by calculating
$Y'$ and seeing that it really does equal $CY$. The geometric interpretation of the matrix of sines and
cosines is a clockwise rotation by βt. In other words, the solution rotates about the origin clockwise
at an angular velocity of β (if β < 0 then it is counterclockwise at angular velocity |β|). The eαt terms
look exactly like the solutions for real and distinct eigenvalues, except the “eigenvalues” aren’t really
distinct—they are the single value α, as in the scalar matrix case. This controls how the solution grows
and shrinks as it rotates around the origin.
The solution to the original problem is then
$$X(t) = T Y(t) = T \begin{pmatrix} \cos(\beta t) & \sin(\beta t) \\ -\sin(\beta t) & \cos(\beta t) \end{pmatrix} \left( c_1 e^{\alpha t} \begin{pmatrix} 1 \\ 0 \end{pmatrix} + c_2 e^{\alpha t} \begin{pmatrix} 0 \\ 1 \end{pmatrix} \right) = T e^{\alpha t} \begin{pmatrix} \cos(\beta t) & \sin(\beta t) \\ -\sin(\beta t) & \cos(\beta t) \end{pmatrix} \begin{pmatrix} c_1 \\ c_2 \end{pmatrix}.$$
This does look pretty complicated, so let’s break it down: there are only 3 pieces of real data; first is
the transformation matrix (change of coordinates), second is the time-dependent part of the solution—
one can think of $e^{\alpha t} \begin{pmatrix} \cos(\beta t) & \sin(\beta t) \\ -\sin(\beta t) & \cos(\beta t) \end{pmatrix}$ as being ONE thing—and finally the constants $c_1, c_2$ from which
we form our general solution (the initial condition is $X(0) = T(c_1, c_2)$).
Again, the transformation matrix T is not unique; one may rescale it by any nonzero real number
(however the individual columns cannot be scaled by different factors, unlike the case for real and
distinct eigenvalues. Why?)
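
Here is the recipe in numpy (my own illustration, not the book's): take the eigenvector for α + iβ, split it into real and imaginary parts, and confirm that T = [η₁ η₂] really produces C:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [-1.0, 0.0]])              # eigenvalues (1 ± i*sqrt(3))/2

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.imag)              # pick the eigenvalue with beta > 0
lam, xi = eigvals[k], eigvecs[:, k]
alpha, beta = lam.real, lam.imag

T = np.column_stack([xi.real, xi.imag])  # T = [Re(xi)  Im(xi)]
C = np.linalg.inv(T) @ A @ T
print(np.round(C, 12))                   # [[alpha, beta], [-beta, alpha]]
print(np.allclose(C, [[alpha, beta], [-beta, alpha]]))   # True
```

Note that numpy normalizes the eigenvector and may multiply it by an arbitrary complex phase, but any nonzero complex multiple of ξ works here, since the construction only uses Aξ = λξ.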

3.2.3 Real Repeated Eigenvalues

Finally we come to the exceptional case, when A has a repeated real eigenvalue λ. Again, unless we
started off with a diagonal matrix consisting of the same number on the diagonal, the corresponding
canonical form must be
$$C = \begin{pmatrix} \lambda & 1 \\ 0 & \lambda \end{pmatrix}.$$
The transformation matrix is a little tricky to find. The first column should be an eigenvector ξ cor-
responding to λ. In this exceptional case, you will only have a one-dimensional space of eigenvectors
to choose from. The second column is going to be a vector η which will make {ξ, η} linearly inde-
pendent. This vector could actually be anything that is not a multiple of ξ. But unlike with true eigenvectors,
you cannot arbitrarily choose any scale you want. This is because, remember, the 1 in the upper right
corner of C contributes a shear to A as a linear mapping; finding this scalar, then, corresponds
to compensating for this shear. The shear is a very distinct kind of linear transformation in the plane,
as compared to the more geometrically obvious rescaling (analogous to what real eigenvalues do) and
rotation (analogous to what purely imaginary eigenvalues do). So what we do is sort of a guess and
check (pull a rabbit out of a hat). We pick some independent vector w (which in practice should always
be either (1, 0) or (0, 1), since those are the easiest to work with) and we see what happens when we
apply A. What we want is for
Aη = ξ + λη,
because that’s mimicking the behavior of the action of C on standard basis vectors (1, 0) and (0, 1). In
other words, instead of η being merely stretched, which would make it an eigenvector, we want η to be
appropriately sheared. The procedure is: first choose an arbitrary w. Then we find $Aw - \lambda w = (A - \lambda I)w$.
This is going to be a scalar multiple of ξ, say by µ. Then η = w/µ works. Note again that η is also not
an eigenvector, and I trust you to keep this in mind even though I don’t have it in big font. Finally,
T = [ξ η] is the matrix such that C = T −1 AT . Let’s summarize step-by-step.
Step 1. Let λ be the single eigenvalue, and ξ be the single eigenvector. This is your first column of T .

Step 2. Choose w to be linearly independent of ξ. You can almost always pick w = (1, 0). The only
time you cannot is when ξ is a multiple of (1, 0)—in that case, pick (0, 1) instead.

Step 3. Compute (A − λI)w. If w = (1, 0) this is just the first column of A minus λw.

Step 4. This must be a multiple of ξ. We “divide by ξ” to get this scalar, call it µ.

Step 5. η = w/µ is the second column of T . So T = [ξ η] = [ξ w/µ] is it.
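
These five steps translate directly into code. A sketch in Python/numpy (the helper name `repeated_canonical` is mine; it assumes A has a repeated eigenvalue with a one-dimensional eigenspace, i.e. is not a scalar matrix):

```python
import numpy as np

def repeated_canonical(A, tol=1e-12):
    A = np.asarray(A, dtype=float)
    lam = np.trace(A) / 2.0               # the repeated eigenvalue
    B = A - lam * np.eye(2)               # (A - lambda*I); rank one here

    # Step 1: an eigenvector xi spanning the kernel of B.
    row = B[0] if np.abs(B[0]).max() > tol else B[1]
    xi = np.array([row[1], -row[0]])      # (q, -p) kills the row (p, q)

    # Step 2: w = (1,0), falling back to (0,1) if xi is a multiple of (1,0).
    w = np.array([1.0, 0.0]) if abs(xi[1]) > tol else np.array([0.0, 1.0])

    # Steps 3-4: (A - lambda*I)w is mu times xi; read off mu.
    u = B @ w
    i = np.argmax(np.abs(xi))
    mu = u[i] / xi[i]

    # Step 5: eta = w/mu is the second column of T.
    T = np.column_stack([xi, w / mu])
    return T, np.linalg.inv(T) @ A @ T

T, C = repeated_canonical([[1.0, 1.0], [-1.0, 3.0]])
print(T)                                  # [[1, -1], [1, 0]]
print(np.round(C, 12))                    # [[2, 1], [0, 2]]
```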


The canonical solution Y, satisfying, as before, $Y' = CY$ is
$$Y(t) = c_1 e^{\lambda t} \begin{pmatrix} 1 \\ 0 \end{pmatrix} + c_2 e^{\lambda t} \begin{pmatrix} t \\ 1 \end{pmatrix} = c_1 e^{\lambda t} \begin{pmatrix} 1 \\ 0 \end{pmatrix} + c_2 e^{\lambda t} \left( \begin{pmatrix} 0 \\ 1 \end{pmatrix} + t \begin{pmatrix} 1 \\ 0 \end{pmatrix} \right)$$

and the general solution X satisfying $X' = AX$ is
$$X(t) = T Y(t) = c_1 e^{\lambda t} \xi + c_2 e^{\lambda t} (t\xi + \eta).$$

The transformation T is very non-unique; for simplicity and definiteness, we always pick the choice
w = (1, 0), but in reality, any vector outside the span of ξ will work.
Here is, as promised, the rabbit pulled out of the hat:

Figure 1: The rabbit pulled out of the hat.



3.3 Drawing Phase Portraits and Classifying Solutions


The book does not make clear how exactly to draw phase portraits. It is true that when studying
qualitative properties of solutions, the exact way one draws the graphs is not really that important,
so long as you capture some essential, general features, like general direction of travel, straight-line
solutions, which axis the solutions hug, etc. But it is helpful to get some exact formulas which you
can graph for yourself, first. Once we have the phase portrait, we can then classify the nature of the
solution (i.e. classify the equilibrium point at (0, 0)).
What we could do is draw the parametric curves directly. This is easy to do in various mathemat-
ical programs, but hard to do by hand. To do it by hand, what we need to do is “eliminate the time
parameter” and draw the y curves as functions of x, or x as a function of y. First let’s do the real and
distinct eigenvalues case. The solution to the canonical form equation is
$$Y(t) = c_1 e^{\lambda_1 t} \begin{pmatrix} 1 \\ 0 \end{pmatrix} + c_2 e^{\lambda_2 t} \begin{pmatrix} 0 \\ 1 \end{pmatrix}$$

or $x(t) = c_1 e^{\lambda_1 t}$ and $y(t) = c_2 e^{\lambda_2 t}$. However, we can “force” or “shoehorn” $e^{\lambda_1 t}$ into looking like $e^{\lambda_2 t}$ by
raising it to the $\lambda_2/\lambda_1$ power (provided $\lambda_1 \neq 0$). In other words, $e^{\lambda_2 t} = \left( \frac{x}{c_1} \right)^{\lambda_2/\lambda_1}$. So, we have, when
$\lambda_1 \neq 0$,
$$y = c_2 \left( \frac{x}{c_1} \right)^{\lambda_2/\lambda_1} = B |x|^{\lambda_2/\lambda_1},$$
where B is just a real constant. In other words, the curves look like scaled power and root functions
and their reciprocals. If $\lambda_2 > \lambda_1 > 0$ or $\lambda_2 < \lambda_1 < 0$, then $y = B|x|^a$ for some $a > 1$, which
means they are all “parabola”-like things that open upward or downward (they are genuine parabolas
if $a = 2$, more flattened or squarish for $a > 2$, and very pointy for $a$ close to 1). In this case, the
curves “hug” the x-axis, since all the parabola-thingys are tangent to the x-axis at 0. If $\lambda_1 > \lambda_2 > 0$
or $\lambda_1 < \lambda_2 < 0$, then they are root curves: $y = B|x|^b$ for some $b < 1$, which means $x = C|y|^{\lambda_1/\lambda_2}$, i.e.
the previous case except turned on its side. For a scalar matrix, $\lambda_1 = \lambda_2$ so $y = B|x|$, or a bunch of V’s.
But in all cases, if they are both positive, (0, 0) is a source, and solutions always travel away from
(0, 0); and both the eigenvalues negative means (0, 0) is a sink: solutions travel toward it. You should
draw arrows on the curves accordingly. Also there are two straight-line solutions, corresponding to
the “degenerate” cases.
For eigenvalues of differing signs, we instead have hyperbola-like things (graphs of y = 1/x ex-
cept possibly more flattened). This corresponds to (0, 0) being a saddle, because all these hyperbola-
thingys miss the origin. To see which way to draw the arrows, we find the two straight-line solutions;
anything traveling near the axis which corresponds to the negative eigenvalue points in the direc-
tion of the origin, and rounds a corner and starts traveling away from the origin, following the axis
corresponding to the positive eigenvalue.
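
For actually drawing these, the eliminated-time form is all you need: just graph the family directly. A short sketch (mine, assuming matplotlib) for a canonical source with $0 < \lambda_1 < \lambda_2$, where the orbits are $y = B|x|^{\lambda_2/\lambda_1}$:

```python
import numpy as np
import matplotlib.pyplot as plt

l1, l2 = 1.0, 2.0                          # 0 < l1 < l2: parabola-like orbits
x = np.linspace(-1.5, 1.5, 401)
for B in (-2, -1, -0.5, 0.5, 1, 2):
    plt.plot(x, B * np.abs(x) ** (l2 / l1), lw=0.8)
plt.axhline(0, color="k", lw=0.5)          # the two straight-line solutions
plt.axvline(0, color="k", lw=0.5)
plt.xlim(-1.5, 1.5); plt.ylim(-1.5, 1.5)
plt.title("Canonical source: curves tangent to the x-axis")
plt.show()
```

(Here the arrows would point away from the origin, since both eigenvalues are positive.)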
For complex eigenvalues, things are actually easier. The part of the solution which looks like a
real solution is the $e^{\alpha t}$ term, which corresponds to the V’s described before, or the rays from the origin.
What the matrix does is apply an appropriate twist, corresponding to the rotation that all the solutions
undergo with time, turning a bunch of rays into, instead, a bunch of (exponentially growing) spirals.
The equilibrium point is a (spiral) sink if α < 0 and a source if α > 0; if α = 0 the point is instead a
center, which, along with a saddle, is a genuinely new kind of equilibrium point that occurs.
Finally, the repeated eigenvalues yield very unique and interesting solutions. One can think of an
analogy as follows. Recall that integration of a power function $x^a$ almost always leads to $\frac{x^{a+1}}{a+1}$. This
is true for all real $a$ . . . except $a = -1$. Its integral is not something times $x^0$, which is constant, but

rather something that actually grows. What is that function? The logarithm, of course. So we’ve seen
all these power-like and root-like curves, but when we hit an exceptional case, guess who joins the
party? You guessed it, log. The canonical solution is, recall,
! !
λt 1 λt t
Y(t) = c1 e + c2 e .
0 1
This says $x(t) = c_1 e^{\lambda t} + c_2 t e^{\lambda t}$, while $y(t) = c_2 e^{\lambda t}$. This says $t = \lambda^{-1} \ln(y/c_2)$, or
$$x = \frac{c_1}{c_2} y + \frac{y \ln(y/c_2)}{\lambda} = y \left( \frac{c_1}{c_2} + \frac{1}{\lambda} \ln \frac{y}{c_2} \right) = y \left( \left( \frac{c_1}{c_2} - \frac{\ln c_2}{\lambda} \right) + \frac{1}{\lambda} \ln y \right) = y \left( C + \frac{1}{\lambda} \ln y \right).$$
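
A quick numerical sanity check of this formula (my own, assuming numpy): generate the parametric solution and confirm it lies on the curve $x = y(C + \frac{1}{\lambda}\ln y)$, which is valid while $y > 0$:

```python
import numpy as np

lam, c1, c2 = -1.0, 1.0, 2.0              # arbitrary choices with c2 > 0
t = np.linspace(0.0, 3.0, 7)
x = c1 * np.exp(lam * t) + c2 * t * np.exp(lam * t)
y = c2 * np.exp(lam * t)                  # stays positive here

C = c1 / c2 - np.log(c2) / lam            # the constant from the display above
print(np.allclose(x, y * (C + np.log(y) / lam)))   # True
```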
If you like, they look “sort of” like spirals, as if one took a whole bunch of parabola-like things
tangent to the x-axis and twisted them just enough to make them look like Figure 2.

Figure 2: Graphs of the curves occurring in the critical case.

They are not actually spirals, because the curves never actually wind around the origin (they would
fail the horizontal line test for x as a function of y if they did). It may not be immediately
apparent from looking, but all of those curves are also tangent to the x-axis. If we tweak things a little
to make the solutions complex (we will go over this in the next solution on the Trace-Determinant
plane), the solutions actually are spirals, and they are no longer tangent to the x-axis. They can
emerge in any radial direction you choose. The upshot is, this is how one might conceive of this
situation as going from real decaying solutions (“overdamped systems”) to real oscillating systems
(“underdamped systems”) through this critical situation.
Finally, the equilibrium point (0, 0) is a sink if λ < 0 and a source if λ > 0; in other words, this
critical case does not lead to any new kinds of equilibrium points.
Nothing has been said about solutions with zero eigenvalues. This is another special case, because
one has a whole line of equilibrium points. The phase portrait is, up to a coordinate transformation
(skewing and so forth), the same as a bifurcation diagram for a 1-dimensional linear ODE. See the
solution to problem 10.

3.4 Solutions
Ok, enough talk. Time for some action. I will provide solutions to all problems involving “exploring
a parameter space” in HW4—it will be the theme of HW4.

Problem 3.1. Match the following to their phase portraits.

Solution. Obviously, all of them must have complex eigenvalues, because there is some kind of rota-
tional motion in all of them. Curves that close up on one another (i.e. forming ellipses as in plots 2
and 3) must have pure imaginary eigenvalues. Inward spiraling indicates a sink and outward spiraling
indicates a source. □

Problem 3.2. For each of the following systems $X' = AX$ do the following:

(a) Find the eigenvalues and eigenvectors of A.

(b) Find the matrix T that puts A into canonical form.

(c) Find the general solution of both $X' = AX$ and $Y' = T^{-1}AT\,Y$.

(d) Sketch the phase portraits of both systems.

Solution.

i. $A = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$. The characteristic polynomial is $\lambda^2 - 1 = 0$, or eigenvalues $\lambda_1 = 1$ and $\lambda_2 = -1$.
(1, 1) is an eigenvector corresponding to 1, and (1, −1) is an eigenvector for −1. This is part
(a). So writing them in order we find:
$$T = \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}$$

and the canonical form matrix (you don’t actually have to carry this calculation out, because the
theory shows that you will get the diagonal matrix consisting of the eigenvalues. You could do it
as a check, though. . . )
$$D = T^{-1} A T = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}$$
(this is part (b)). This enables us to write down the general solutions to $Y' = DY$ and $X' = AX$:
$$Y(t) = c_1 e^{t} \begin{pmatrix} 1 \\ 0 \end{pmatrix} + c_2 e^{-t} \begin{pmatrix} 0 \\ 1 \end{pmatrix}$$
$$X(t) = T Y(t) = c_1 e^{t} \begin{pmatrix} 1 \\ 1 \end{pmatrix} + c_2 e^{-t} \begin{pmatrix} 1 \\ -1 \end{pmatrix}.$$

And that is part (c). Consulting our guide above, we see that the solution curves for Y give us the
family $y = C|x|^{-1/1} = C|x|^{-1}$, which gives hyperbolas which open along $y = x$, and the equilibrium
point (0, 0) is a saddle. Since the canonical x-solution is the one with the positive eigenvalue,
canonical solutions flow outward to infinity along the x-axis. For the original problem, they flow
outward along the x-eigenvector, (1, 1).
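
If you want reassurance, here is a sketch (mine, assuming scipy) that integrates $X' = AX$ numerically and compares it against the closed form just found, for arbitrarily chosen constants:

```python
import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[0.0, 1.0], [1.0, 0.0]])
c1, c2 = 0.3, -0.7
X0 = c1 * np.array([1.0, 1.0]) + c2 * np.array([1.0, -1.0])   # X(0)

sol = solve_ivp(lambda t, X: A @ X, (0.0, 2.0), X0,
                t_eval=np.linspace(0.0, 2.0, 21), rtol=1e-10, atol=1e-12)
exact = np.array([c1 * np.exp(sol.t) + c2 * np.exp(-sol.t),
                  c1 * np.exp(sol.t) - c2 * np.exp(-sol.t)])
print(np.allclose(sol.y, exact, atol=1e-6))                    # True
```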

Figure 3: Part (d), phase portraits for matrix (i): (a) phase portrait for the canonical solution; (b) actual phase portrait.

ii. $A = \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}$. The characteristic polynomial is $\lambda^2 - \lambda - 1$, leading to roots $\lambda = \frac{1 \pm \sqrt{5}}{2}$. We take $\varphi$
to be the solution with the + sign. $\varphi = \lambda_1$ is the golden ratio, for anyone curious. The second
eigenvalue we’ll call $\psi$. This leads to another saddle, since the eigenvalues are real, distinct, and
with opposite signs. The eigenvectors are then $(1, \frac{1}{2}(-1+\sqrt{5}))$ and $(1, -\frac{1}{2}(1+\sqrt{5}))$. This gives
us T and D:
$$T = \begin{pmatrix} 1 & 1 \\ \frac{-1+\sqrt{5}}{2} & \frac{-1-\sqrt{5}}{2} \end{pmatrix}, \qquad D = \begin{pmatrix} \frac{1+\sqrt{5}}{2} & 0 \\ 0 & \frac{1-\sqrt{5}}{2} \end{pmatrix}.$$
So therefore
$$Y(t) = c_1 e^{\varphi t} \begin{pmatrix} 1 \\ 0 \end{pmatrix} + c_2 e^{\psi t} \begin{pmatrix} 0 \\ 1 \end{pmatrix}$$
$$X(t) = T Y(t) = c_1 e^{\varphi t} \begin{pmatrix} 1 \\ -\psi \end{pmatrix} + c_2 e^{\psi t} \begin{pmatrix} 1 \\ -\varphi \end{pmatrix}.$$

We note that $y = C|x|^{\psi/\varphi}$, where the exponent $\psi/\varphi$ is negative and of magnitude less than 1. So the decay is slower
as $x \to \pm\infty$, and thus the saddle “hugs” the y-axis, or the axis corresponding to the eigenvector
$(1, -\varphi)$.

iii. $A = \begin{pmatrix} 1 & 1 \\ -1 & 0 \end{pmatrix}$. The char. poly. is now $\lambda^2 - \lambda + 1$, so that $\lambda = \frac{1 \pm i\sqrt{3}}{2}$, complex roots. We let $\alpha = \frac{1}{2}$
and $\beta = \frac{\sqrt{3}}{2}$ so that $\lambda = \alpha + i\beta$ is the eigenvalue with a + sign. λ is a sixth root of unity. The
eigenvector is then $\xi = (1, \lambda - 1) = (1, \frac{1}{2}(-1 + i\sqrt{3}))$ (the second component is a cube root of
unity). So this gives us the transformation vectors $\eta_1 = \mathrm{Re}(\xi) = (1, -\frac{1}{2})$ and $\eta_2 = \mathrm{Im}(\xi) = (0, \frac{\sqrt{3}}{2})$,
or
$$T = \begin{pmatrix} 1 & 0 \\ -\frac{1}{2} & \frac{\sqrt{3}}{2} \end{pmatrix}, \qquad C = \begin{pmatrix} \frac{1}{2} & \frac{\sqrt{3}}{2} \\ -\frac{\sqrt{3}}{2} & \frac{1}{2} \end{pmatrix}.$$
This leads to the canonical solution
$$Y(t) = c_1 e^{t/2} \begin{pmatrix} \cos(\frac{\sqrt{3}}{2} t) \\ -\sin(\frac{\sqrt{3}}{2} t) \end{pmatrix} + c_2 e^{t/2} \begin{pmatrix} \sin(\frac{\sqrt{3}}{2} t) \\ \cos(\frac{\sqrt{3}}{2} t) \end{pmatrix}.$$

Figure 4: Part (d), phase portraits for matrix (ii): (a) phase portrait for the canonical solution; (b) actual phase portrait.

By our choice of β, and the fact that α > 0, it is a spiral source, and so spirals outward clockwise.
The solution to the original problem is
$$X(t) = c_1 e^{t/2} \begin{pmatrix} \cos(\frac{\sqrt{3}}{2} t) \\ -\frac{1}{2}\cos(\frac{\sqrt{3}}{2} t) - \frac{\sqrt{3}}{2}\sin(\frac{\sqrt{3}}{2} t) \end{pmatrix} + c_2 e^{t/2} \begin{pmatrix} \sin(\frac{\sqrt{3}}{2} t) \\ -\frac{1}{2}\sin(\frac{\sqrt{3}}{2} t) + \frac{\sqrt{3}}{2}\cos(\frac{\sqrt{3}}{2} t) \end{pmatrix}.$$
By our nifty shortcut in problem 9, we can determine the direction of the spiralling for the solution
by looking at the sign of the upper-right entry of A. It is 1 > 0, so the transformation T preserves
orientation, and the solution spirals outward clockwise as well.
Figure 5: Part (d), phase portraits for matrix (iii): (a) phase portrait for the canonical solution; (b) actual phase portrait.



iv. $A = \begin{pmatrix} 1 & 1 \\ -1 & 3 \end{pmatrix}$. The characteristic polynomial is $\lambda^2 - 4\lambda + 4$, which leads to the repeated root
$\lambda = 2$. Since the matrix is not already diagonal, we know its canonical form must be $C = \begin{pmatrix} 2 & 1 \\ 0 & 2 \end{pmatrix}$,
that is, it must have a 1 in the upper right corner. Our one eigenvector is $\xi = (1, 2 - 1) = (1, 1)$.
Now let’s take $w = (1, 0)$. Then $Aw = (1, -1)$ and $2w = (2, 0)$. So subtracting the two, we get
$(-1, -1)$. This is $-\xi$, and so $\mu = -1$ in the above step-by-step guide. Therefore we should take
$\eta = w/(-1) = (-1, 0)$. All in all, we have
$$T = \begin{pmatrix} 1 & -1 \\ 1 & 0 \end{pmatrix}, \qquad C = \begin{pmatrix} 2 & 1 \\ 0 & 2 \end{pmatrix}$$

The canonical solution Y, satisfying $Y' = CY$, is
$$Y(t) = c_1 e^{2t} \begin{pmatrix} 1 \\ 0 \end{pmatrix} + c_2 e^{2t} \begin{pmatrix} t \\ 1 \end{pmatrix}$$

Now the solution to the original problem is:
$$X(t) = T Y(t) = c_1 e^{2t} \begin{pmatrix} 1 \\ 1 \end{pmatrix} + c_2 e^{2t} \begin{pmatrix} t - 1 \\ t \end{pmatrix}$$

Since λ > 0, (0, 0) is a source and solutions tend away from the origin.
Figure 6: Part (d), phase portraits for matrix (iv): (a) phase portrait for the canonical solution; (b) actual phase portrait.

v. $A = \begin{pmatrix} 1 & 1 \\ -1 & -3 \end{pmatrix}$. The characteristic polynomial is $\lambda^2 + 2\lambda - 2$, which gives $\lambda = \frac{-2 \pm \sqrt{4+8}}{2} = -1 \pm \sqrt{3}$.
This will give a saddle (darn it. I was hoping to get two real eigenvalues of the same sign).

vi. $A = \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}$. The characteristic polynomial is $\lambda^2 - 2 = 0$, or $\lambda = \pm\sqrt{2}$. Great. Another saddle.
Come to office hours if you really want to see these.

Problem 3.3. Solve the following harmonic oscillators. See solution to problem 4, which will be in
the HW 4 solution.

Problem 3.4. Consider the harmonic oscillator, $x'' + bx' + kx = 0$, which, as a system, looks like:
$$X' = \begin{pmatrix} x \\ y \end{pmatrix}' = \begin{pmatrix} 0 & 1 \\ -k & -b \end{pmatrix} X$$

(This is a tried and true problem, and you’ve seen it many times already. A complete solution will be
given in HW4 solution, because I want you to see it in the context of exploring parameter spaces)

Problem 3.5. Sketch the phase portrait for a certain family of DiffEq’s (I will solve this problem in
the HW4 solution).

Problem 3.7. Show that the canonical system with repeated eigenvalues, $Y' = \begin{pmatrix} \lambda & 1 \\ 0 & \lambda \end{pmatrix} Y$, has solutions
that are tangent to (1, 0) (i.e. tangent to the x-axis) at the origin.

Solution. This is a very good problem, though it may seem intimidating. First, one should note that no
solution to this ODE ever actually reaches (0, 0) (unless we start at (0, 0), and thus stay there forever). So
it is not a matter of plugging in actual values. Instead, the solution either approaches (0, 0) as $t \to \infty$
(if λ < 0) or as $t \to -\infty$ (if λ > 0). Now “tangent to the x-axis” means simply that $\frac{dy}{dx} = 0$ at (0, 0).
What is $\frac{dy}{dx}$? Well, if you remember the old trick of regarding derivatives as fractions, it is $\frac{dy/dt}{dx/dt} = y'/x'$.
But, setting (x(t), y(t)) to our canonical solution Y(t), we find that $x'(t) = \lambda c_1 e^{\lambda t} + \lambda c_2 t e^{\lambda t} + c_2 e^{\lambda t}$ while
$y'(t) = \lambda c_2 e^{\lambda t}$. Computing $y'/x'$ we see that the $e^{\lambda t}$’s cancel out, so we have
$$\frac{dy}{dx} = \frac{\lambda c_2}{\lambda c_1 + \lambda t c_2 + c_2},$$
which quite plainly goes to 0 when taking $t \to \pm\infty$ (precisely which limit to take, $+\infty$ or $-\infty$, is still
important even though the limits happen to be the same, because it affects what point $dy/dx$ is being
taken at; we want it to be taken at the origin, so we must choose the limit at which the solution actually
goes to the origin). □

Problem 3.9. Determine a computable condition for when a solution with complex eigenvalues spirals
counterclockwise or clockwise.

Solution. This is actually a very interesting problem. A very, very interesting problem. Let A be our
matrix, with eigenvalues $\lambda = \alpha \pm i\beta$ and canonical form $C = \begin{pmatrix} \alpha & \beta \\ -\beta & \alpha \end{pmatrix}$. We may as well assume that
β > 0, because we can simply reorder our eigenvalues if not. In practice, β occurs as the term with
the square root in the solution to the quadratic, so taking the eigenvalue to be the positive imaginary
square root (i.e. taking the plus sign in the quadratic formula), this will give us a positive β anyway.
By pure inspection, or whatever, the canonical solution, with positive β, always rotates clockwise.
This is because the rotation matrix that occurs in the canonical solution denotes clockwise rotation at
a rate of β radians per unit time. But how do we get the original solution from the canonical solution?
The transformation matrix T , of course. If T is orientation-preserving, that is, det T > 0, then the
spirals will go the same way the canonical solution goes (which depends on the sign of β). On the

other hand, if the transformation is orientation-reversing, i.e. det T < 0, then the behavior is opposite
from that of the canonical solution. How do we determine the nature of T from A?
Remember our Totally Awesome Shortcut. If $(a\ b)$ is the first row of A, then an eigenvector
associated to it is $(b, \lambda - a) = (b, (\alpha - a) + i\beta)$. Separating into real and imaginary parts, we get
$\xi = (b, \alpha - a)$ and $\eta = (0, \beta)$. So we have
$$T = \begin{pmatrix} b & 0 \\ \alpha - a & \beta \end{pmatrix}.$$

So $\det T = b\beta$ which, since β > 0, is positive exactly when b > 0 and negative when b < 0.
Therefore, if the upper-right entry is positive, the solutions rotate clockwise, and if it is negative, they
rotate counterclockwise. This is the so-called “computable condition”: look at the sign of the
upper-right entry!
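
A numerical version of the same check (my own sketch, assuming numpy): for $X' = AX$ the instantaneous angular velocity has the sign of $x y' - y x'$, and when the eigenvalues are complex this quantity never vanishes away from the origin, so sampling it at random points reveals the direction:

```python
import numpy as np

def direction(A, samples=1000, seed=0):
    rng = np.random.default_rng(seed)
    pts = rng.normal(size=(samples, 2))
    vel = pts @ np.asarray(A, dtype=float).T    # velocity X' = AX at each point
    cross = pts[:, 0] * vel[:, 1] - pts[:, 1] * vel[:, 0]
    if cross.max() < 0:
        return "clockwise"
    if cross.min() > 0:
        return "counterclockwise"
    return "no rotation (real eigenvectors present)"

print(direction([[1, 1], [-1, 0]]))    # upper-right entry 1 > 0: clockwise
print(direction([[0, -1], [1, 0]]))    # upper-right entry -1 < 0: counterclockwise
```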


Problem 3.10. Consider the system X 0 = AX where the trace of A is not zero, but det A = 0. Find the
general solution of this system and sketch the phase portrait.

Solution. The characteristic polynomial is $\lambda^2 - (\mathrm{tr}\, A)\lambda = 0$, or $\lambda_1 = 0$ and $\lambda_2 = \mathrm{tr}\, A$. The eigenvalues
are therefore always real and distinct. Let $A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$. Then the 0-eigenvector is $(-b, a)$, and the one corre-
sponding to $a + d$ is $(-b, a - a - d) = (-b, -d)$. Thus the transformation matrix and canonical form
are
$$T = \begin{pmatrix} -b & -b \\ a & -d \end{pmatrix}, \qquad D = \begin{pmatrix} 0 & 0 \\ 0 & a + d \end{pmatrix}.$$
The canonical solution is $Y(t) = c_1 \begin{pmatrix} 1 \\ 0 \end{pmatrix} + c_2 e^{(a+d)t} \begin{pmatrix} 0 \\ 1 \end{pmatrix} = (c_1, c_2 e^{(a+d)t})$. The phase portrait is a bunch of
straight, vertical lines; the solutions move away from the x-axis (the line of equilibrium points) if the
trace is positive, and toward it if the trace is negative. The solution to the original problem is
$$X(t) = c_1 \begin{pmatrix} -b \\ a \end{pmatrix} + c_2 e^{(a+d)t} \begin{pmatrix} b \\ d \end{pmatrix}$$

Let’s actually ask ourselves what the canonical system means. Let $\tau = a + d$. Then we have
that the differential equation is $x' = 0$ and $y' = \tau y$. This says x is constant and $y = c_0 e^{\tau t}$. In a very real
sense, the coordinate x is redundant. It is FAPP² a 1D equation. One could regard x as a parameter,
but the solution to this canonical form doesn’t even change with the parameter x. The phase portrait
is in fact the “bifurcation diagram,” where the line y = 0 is the line of equilibrium points. They are all
sources if τ > 0 and sinks if τ < 0.
To really drive the point home, consider the family of ODEs $y' = cy + a$ where a is now the
parameter. Then setting $x = a$ we have
$$\begin{pmatrix} x \\ y \end{pmatrix}' = \begin{pmatrix} 0 & 0 \\ 1 & c \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix}$$

Then our canonical transformation is
$$T = \begin{pmatrix} -c & 0 \\ 1 & c \end{pmatrix}.$$

²For all practical purposes.

and we have
$$X(t) = c_1 \begin{pmatrix} -c \\ 1 \end{pmatrix} + c_2 e^{ct} \begin{pmatrix} 0 \\ c \end{pmatrix}.$$
The phase portrait is still a collection of vertical lines; the equilibrium points lie on the line $x + cy = 0$,
and they are sources or sinks depending on whether c > 0 or c < 0. Consider the initial condition
$X(0) = (x, y_0)$ (x is always constant). This says that $T(c_1, c_2) = (x, y_0)$, or
$(c_1, c_2) = T^{-1}(x, y_0) = (-cx, x + cy_0)/c^2 = (-x/c,\ x/c^2 + y_0/c)$.
The family of solutions is then $y(t) = -\frac{a}{c} + (\frac{a}{c} + y_0)e^{ct}$, and the phase portrait for the 2D system is
indeed the bifurcation diagram of this family. □
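
Here is a short numerical confirmation (mine, assuming numpy) that the second component of the 2D solution really reproduces the 1D family:

```python
import numpy as np

c, a, y0 = -0.8, 1.5, 0.2
t = np.linspace(0.0, 4.0, 9)

c1, c2 = -a / c, a / c**2 + y0 / c        # (c1, c2) = T^{-1}(a, y0) from above
x = np.full_like(t, -c * c1)              # first component: constant, equal to a
y = c1 + c2 * c * np.exp(c * t)           # second component of X(t)

print(np.allclose(x, a))                                       # True
print(np.allclose(y, -a / c + (a / c + y0) * np.exp(c * t)))   # True
```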

Problem 3.14. Let A be a 2 × 2 matrix. Show that A satisfies its own characteristic equation: $A^2 -
(\mathrm{tr}\, A)A + (\det A)I = 0$. Use this to show that if A has real, repeated eigenvalues, then for any $v \in \mathbb{R}^2$, v
is either an eigenvector for A or $(A - \lambda I)v$ is an eigenvector for A.
Solution. The fact that A satisfies its own characteristic polynomial is known as the Cayley–Hamilton
Theorem. We have that if $A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$, then $A^2 = \begin{pmatrix} a^2 + bc & b(a+d) \\ c(a+d) & d^2 + bc \end{pmatrix}$. Now
$$A^2 - (a+d)A = \begin{pmatrix} a^2 + bc - a(a+d) & 0 \\ 0 & d^2 + bc - d(a+d) \end{pmatrix} = \begin{pmatrix} bc - ad & 0 \\ 0 & bc - da \end{pmatrix},$$
which clearly is $-(\det A)I$. Now if A has repeated eigenvalues, the
characteristic polynomial must be $p(x) = (x - \lambda)^2$. Therefore $p(A) = (A - \lambda I)^2 = 0$. If v is not an
eigenvector for A, then $(A - \lambda I)v \neq 0$. This means
$$0 = (A - \lambda I)^2 v = A(A - \lambda I)v - \lambda(A - \lambda I)v.$$
Therefore
$$A[(A - \lambda I)v] = \lambda[(A - \lambda I)v],$$
which, along with the fact that $(A - \lambda I)v \neq 0$, shows that $(A - \lambda I)v$ is an eigenvector for A. □
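
A one-screen numerical check of Cayley–Hamilton (mine, assuming numpy):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(2, 2))               # a random real 2x2 matrix
p_of_A = A @ A - np.trace(A) * A + np.linalg.det(A) * np.eye(2)
print(np.allclose(p_of_A, 0))             # True: A satisfies its own char. poly.
```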

Problem 3.16. Consider the nonlinear system
$$x' = |y|$$
$$y' = -x.$$
Sketch its phase portrait.


Solution. First off, let us not forget a very general principle. Differential equations are local in nature.
In other words, at a particular point in the plane, the solutions depend only on what is happening in
a small neighborhood about that point. Therefore, in particular, at any point $(x_0, y_0)$ away from the
x-axis (i.e. y = 0), we have that the differential equation, in a small enough neighborhood of the point
$(x_0, y_0)$, is either $x' = y$, $y' = -x$ (if $y_0 > 0$) or $x' = -y$, $y' = -x$ (if $y_0 < 0$). What does this mean? It
means that we simply solve one differential equation in the upper half-plane, namely
$$X' = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} X$$
and in the lower half-plane we solve
$$X' = \begin{pmatrix} 0 & -1 \\ -1 & 0 \end{pmatrix} X,$$

and literally just stick the results together at the x-axis. The former system is a system with purely
imaginary eigenvalues and therefore has a phase portrait that looks like a bunch of semicircles in the
upper half-plane (they would be full circles if we were solving the equation in the whole plane). On
the other hand, the latter system has real and distinct eigenvalues, ±1. It is a bunch of hyperbolas.
The eigenvectors are (1, 1) and (1, −1), and the transformation it describes is a rotation by π/4, a
reflection, and a scaling by a factor of $\sqrt{2}$. □
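
A sketch of the gluing (mine, assuming scipy and matplotlib): integrate the nonlinear system directly and watch the semicircles in the upper half-plane hand off to hyperbolas in the lower half-plane:

```python
import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

def f(t, X):
    x, y = X
    return [abs(y), -x]                   # x' = |y|, y' = -x

fig, ax = plt.subplots()
for r in (0.5, 1.0, 1.5, 2.0):
    sol = solve_ivp(f, (0.0, 4.5), [-r, 0.0], max_step=0.01)
    ax.plot(sol.y[0], sol.y[1], lw=0.8)   # semicircle of radius r, then hyperbola
ax.axhline(0, color="k", lw=0.5)
ax.set_xlim(-3, 3); ax.set_ylim(-3, 3)
ax.set_aspect("equal")
plt.show()
```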
Figure 7: Phase portrait for this system.
