
WMS

MA133
Differential Equations

Revision Guide

Written by David McCormick


Contents
0 Introduction and Notation

1 First-Order ODEs

2 Second-Order Linear ODEs

3 Introduction to Difference Equations

4 Systems of First-Order ODEs

Introduction
This revision guide for MA133 Differential Equations has been designed as an aid to revision, not a
substitute for it. Differential Equations is very much an applied course, in which the emphasis is on
problem-solving rather than justifying every step. So, the best way to revise is to use this revision guide
as a quick reference for the theory, and to just keep trying example sheets and past exam questions.
Hopefully this guide should give you some confidence by showing you that there isn’t too much to the
module: at least not as much material as it appears from the wad of lecture notes and assignments you
have amassed.

Disclaimer: Use at your own risk. No guarantee is made that this revision guide is accurate or
complete, or that it will improve your exam performance. Use of this guide will increase entropy,
contributing to the heat death of the universe. Contains no GM ingredients. Your mileage may vary.
All your base are belong to us.

Authors
Written by D. S. McCormick (d.s.mccormick@warwick.ac.uk).
Based upon lectures given by Dave Wood at the University of Warwick, 2005 and 2006. Further additions
were made in 2012 by Matthew Lee (matthew.lee@warwick.ac.uk) following advice from Jess Lewton.
Many thanks to The Ghost Readers for their efforts to ensure these notes were as accurate as possible.
Any corrections or improvements should be entered into our feedback form at http://tinyurl.com/WMSGuides
(alternatively email revision.guides@warwickmaths.org).

0 Introduction and Notation


First, a few key definitions and notations:

• Variables measure things – the dependent variables depend on the independent variables; for example, for a moving particle, as time (the independent variable) passes, the position (the dependent variable) changes, which we can express as x = x(t). We are interested in rates of change, i.e. the dependent variable(s) differentiated with respect to the independent variable(s); for our example we might consider dx/dt.

• A differential equation is an equation relating variables and their derivatives, e.g. dy/dx = f(x, y).

• If the equation has only one independent variable, it is called an ordinary differential equation or ODE. When it’s obvious what this independent variable is, we use the notation y′ for dy/dx; if the independent variable is time, we use the notation ẋ for dx/dt.

• If it has more than one independent variable, for example u(x, y), this has two partial derivatives, ∂u/∂x and ∂u/∂y; a differential equation involving this kind of function is called a partial differential equation or PDE.¹

• The order of a differential equation is the order of the highest derivative in the equation; for example d²y/dx² = −y is second-order.

• A linear ODE of order n depends linearly on the dependent variable and its derivatives, i.e. it can be written in the form

    a_n(t) dⁿy/dtⁿ + · · · + a_1(t) dy/dt + a_0(t)y = f(t),

and does not involve nonlinear functions of the dependent variable such as y² or (dy/dx)² or sin y.

1 First-Order ODEs
dy
The simplest ODEs are those of the form dx = f (x). We can solve these using the Fundamental Theorem
of Calculus (FTC):
Rx dg
Theorem 1.1 (FTC). For f : [a, b] → R, define g(x) = a f (x̃) dx̃. Then dx (x) = f (x). Furthermore,
Rb 0
a
f (x) dx = F (b) − F (a) for any F with F (x) = f (x) (i.e. any antiderivative of f ).
This means that the general solution to dy/dx = f(x) is y(x) = ∫ f(x) dx = F(x) + c, where F is any antiderivative of f and c is a constant of integration. We specify a particular solution by saying what y(x) is at a particular value of x; this kind of problem is known as an initial value problem.
Probably the most general initial value problem for first-order ODEs is

    dx/dt(t) = f(x, t),   x(t_0) = x_0.   (1)
A solution of this ODE is a function x(t) with x(t0 ) = x0 and ẋ(t) = f (x, t).
In general, the initial value problem in equation (1) may not have a solution, or it may have more than one. We will assume that f is sufficiently “nice”² to prevent such nasty things happening, in which case the IVP will have exactly one solution.
Even if we are guaranteed that the solution exists, we cannot always find an explicit solution to a given ODE. For example, solving dx/dt = e^(−t²) using the FTC yields x(t) = x_0 + ∫_0^t e^(−s²) ds, where x(0) = x_0; this integral cannot be evaluated explicitly. Here, and elsewhere, qualitative solution methods become as important as quantitative ones.
¹ We are not interested in PDEs here. These will be studied in detail in MA250 Introduction to PDEs.
² The technical requirements are that f(x, t) and ∂f/∂x(x, t) be continuous in some rectangle containing (x_0, t_0). Don’t worry if this is all a bit confusing; this is an applied course, so we don’t worry too much about technical details. The proof will come in MA244 Analysis III.

1.1 Linear First-Order ODEs


Linear ODEs are those which do not involve powers of the dependent variable and its derivatives; in
general linear ODEs are among the easiest to solve, and in many cases we can find explicit solutions.
Here we consider linear first-order ODEs.
We first consider the homogeneous case, when dx/dt + r(t)x = 0. In this case, the general solution is x(t) = Ae^(−∫ r(t) dt), which we can check by differentiating:

    dx/dt = −A (d/dt ∫ r(t) dt) e^(−∫ r(t) dt) = −r(t)x(t).
In the inhomogeneous case, we consider equations of the form

    dx/dt + r(t)x = g(t).   (2)

When g(t) ≡ 0 the equation reduces to the homogeneous case, and when r(t) ≡ 0 the equation can be solved using the FTC. In the general case, however, we seek to reduce the left-hand side to something we can integrate easily. To do so, we multiply both sides of (2) by a so-called integrating factor, e^(∫ r(t) dt), to give

    e^(∫ r(t) dt) dx/dt + r(t)e^(∫ r(t) dt) x = e^(∫ r(t) dt) g(t).
We now note that the left-hand side is what you get when you differentiate a product; so the equation becomes

    d/dt (e^(∫ r(t) dt) x) = e^(∫ r(t) dt) g(t).

Integrating both sides and dividing through by e^(∫ r(t) dt) (since e^(∫ r(t) dt) ≠ 0) yields

    x(t) = e^(−∫ r(t) dt) ∫ e^(∫ r(t) dt) g(t) dt + Ae^(−∫ r(t) dt),

in which the first term is x_p(t) and the second is x_h(t),

where A is a constant of integration. There are two parts to the solution: x_p(t) is called the particular solution, since it is particular to g(t), and x_h(t) is called the complementary solution, since it is the solution to the homogeneous case dx/dt + r(t)x = 0.

As long as we can perform the integration ∫ r(t) dt, we can always find an expression for the solution; and if we can also perform the integration ∫ e^(∫ r(t) dt) g(t) dt, that solution is explicit.
Example 1.2. As with most solution methods for ODEs, it is much better to be able to apply the solution method than to memorise the solution formula by rote. So we now consider the ODE

    (x² + 1) dy/dx + 4xy = 12x

as an example. We first divide through by (x² + 1) to get rid of the coefficient in front of the dy/dx term, getting

    dy/dx + (4x/(x² + 1))y = 12x/(x² + 1),
which is linear. The integrating factor is

    exp(∫ 4x/(x² + 1) dx) = exp(2 log(x² + 1)) = (x² + 1)².

So multiplying both sides by (x² + 1)² gives

    (x² + 1)² dy/dx + 4x(x² + 1)y = d/dx((x² + 1)² y) = 12x(x² + 1).

Integrating both sides gives (x² + 1)² y = 3(x² + 1)² + c, so the general solution is

    y(x) = 3 + c/(x² + 1)²,
where c is a constant of integration.
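A quick numerical sanity check of an answer like this is easy to script. The following Python sketch (not part of the original guide) substitutes the general solution back into the ODE, approximating y′ by a central finite difference:

```python
# Check that y(x) = 3 + c/(x^2 + 1)^2 satisfies (x^2 + 1) y' + 4xy = 12x.
def y(x, c):
    return 3 + c / (x**2 + 1) ** 2

def residual(x, c, h=1e-6):
    dy = (y(x + h, c) - y(x - h, c)) / (2 * h)  # central-difference y'(x)
    return (x**2 + 1) * dy + 4 * x * y(x, c) - 12 * x

# The residual is (numerically) zero for any x and any constant c.
for c in (-2.0, 0.0, 5.0):
    for x in (-1.5, 0.3, 2.0):
        assert abs(residual(x, c)) < 1e-5
print("general solution verified")
```

The residual vanishes for every value of c, which is exactly what makes it the general solution.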

1.2 Separable Equations


Another general class of first-order ODEs we can (often) solve explicitly are the so-called separable equations, of the form

    dx/dt = f(x)g(t).   (3)

They are called separable since we can separate the dependent and independent variables: the idea is that you “divide both sides by f(x), multiply both sides by dt”, and then integrate both sides to give

    ∫ 1/f(x) dx = ∫ g(t) dt.

If we want to take into account an initial condition x(t_0) = x_0, we can integrate with limits:

    ∫_{x_0}^{x(t)} 1/f(x̃) dx̃ = ∫_{t_0}^t g(t̃) dt̃.

This assumes, however, that f(x) ≠ 0 for all x. If f(x_0) = 0 for some particular x_0, then the constant function x(t) = x_0 is also a solution of (3), since then dx/dt = 0 = f(x_0)g(t).

Example 1.3. Solve the initial value problem dy/dx = 2/sin y with the condition y(0) = 0.

Noting that 2/sin y ≠ 0 for any y, we see there are no constant solutions. So separating variables gives

    ∫_0^y sin ỹ dỹ = ∫_0^x 2 dx̃.

Integrating gives 1 − cos y = 2x, and hence y = arccos(1 − 2x).


Sometimes we cannot find an explicit solution, but only an implicit relationship between the independent and dependent variables:

Example 1.4. Find the general solution of the ODE dy/dx = y(5x − 2)/(x(1 − 3y)) for x, y > 0.

We first note that y/(1 − 3y) = 0 if y = 0, but we are assuming that x, y > 0; hence there are no constant solutions. Looking for non-constant solutions, we separate the variables and integrate to get

    ∫ (1/y − 3) dy = ∫ (5 − 2/x) dx.

Since x and y are positive we can ignore the modulus signs arising from the integration of 1/y and 2/x, so

    log y − 3y = 5x − 2 log x + c.

We can do no better than this implicit solution relating x and y.

1.3 Autonomous First-Order ODEs


When we cannot find an explicit solution to an ODE, we can turn to qualitative solution methods. For equations of the form dx/dt = f(x), i.e. where the derivative dx/dt does not depend on t, we can find the qualitative behaviour of the equation simply by considering f(x).

Let x(t) denote the position of a particle on the x-axis at time t, in which case dx/dt denotes its velocity at that instant. Finding the points x∗ such that f(x∗) = 0 gives us the fixed points at which dx/dt = 0, i.e. a particle at x∗ does not move.

If f′(x∗) < 0, then x∗ is a stable fixed point, since a particle starting near x∗ gets pulled towards it as t → ∞. This is because for x just below x∗ we have dx/dt > 0, so the particle moves to the right, i.e. towards x∗; similarly, for x just above x∗ we have dx/dt < 0, so the particle moves to the left. Alternatively, if f′(x∗) > 0, then x∗ is an unstable fixed point, since when x ≠ x∗ the particle moves away from x∗ as t → ∞.

The easiest way to determine the stability is to sketch a graph of y = f(x); the points where y = 0 give the fixed points, and the slope of the graph at those points gives the stability (negative slope for stable fixed points, positive slope for unstable fixed points; zero slope indicates structural instability).
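This graphical recipe is easy to mimic in code. The sketch below (not from the guide; the choice f(x) = x(1 − x) is just an illustrative example) classifies fixed points by the sign of f′:

```python
# Classify fixed points of dx/dt = f(x) by the sign of f'(x*),
# illustrated with f(x) = x(1 - x), which has fixed points 0 and 1.
def f(x):
    return x * (1 - x)

def fprime(x, h=1e-6):
    return (f(x + h) - f(x - h)) / (2 * h)  # central-difference f'(x)

def classify(x_star):
    slope = fprime(x_star)
    if slope < 0:
        return "stable"
    if slope > 0:
        return "unstable"
    return "structurally unstable"

print(0, classify(0.0))  # f'(0) = 1 > 0: unstable
print(1, classify(1.0))  # f'(1) = -1 < 0: stable
```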

2 Second-Order Linear ODEs


We now turn our attention to linear second-order ODEs, whose most general form is

    a(t) d²x/dt² + b(t) dx/dt + c(t)x = f(t).   (4)

Linearity means that if x_1(t) and x_2(t) are two solutions of (4) in the homogeneous case f(t) ≡ 0, then so is αx_1(t) + βx_2(t) for any choice of α, β ∈ R. Furthermore, if the two solutions are linearly independent, i.e. α_1 x_1(t) + α_2 x_2(t) = 0 for all t =⇒ α_1 = α_2 = 0, then every solution of the homogeneous equation is a linear combination of them. In addition to two linearly independent solutions, we need not one but two initial conditions. The IVP

    a(t) d²x/dt² + b(t) dx/dt + c(t)x = f(t),   x(t_0) = x_0,   ẋ(t_0) = v_0,   (5)

has a unique solution if the functions a, b, c, f are “nice”; we will assume that they are. A general solution to (4) will contain two arbitrary constants.

2.1 Homogeneous Linear Second-Order ODEs with Constant Coefficients


When a(t), b(t), c(t) in (5) are constants, we can often solve (5) explicitly, depending on the form of f(t). We first consider the homogeneous case, i.e. when f(t) ≡ 0, and look for solutions of the equation

    a d²x/dt² + b dx/dt + cx = 0.   (6)

Trying a solution of the form x = e^(kt) gives ak²e^(kt) + bke^(kt) + ce^(kt) = 0. Noting that e^(kt) ≠ 0 yields the auxiliary equation ak² + bk + c = 0. For e^(kt) to solve the equation, k must solve the auxiliary equation, and the values of a, b and c determine how many roots the auxiliary equation has:

Case 1 – Two real roots k_1, k_2   When the auxiliary equation ak² + bk + c = 0 has two distinct real roots k_1, k_2, we have x_1(t) = Ae^(k_1 t) and x_2(t) = Be^(k_2 t) as two linearly independent solutions. Hence the general solution of (6) in this case is x(t) = Ae^(k_1 t) + Be^(k_2 t). As always, an example is worth a thousand theorems:

Example 2.1. Solve y″ − 6y′ + 8y = 0 subject to y(0) = 1, y′(0) = 0.

Here the auxiliary equation is k² − 6k + 8 = 0, which has roots k = 2, 4. Hence the general solution is y(x) = Ae^(2x) + Be^(4x). To find the particular solution we substitute in y(0) = A + B = 1 and y′(0) = 2A + 4B = 0; solving these gives A = 2, B = −1, so y(x) = 2e^(2x) − e^(4x) is the particular solution.
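It is always worth checking such answers by differentiating; the Python sketch below (not from the guide) does this with the exact derivatives of y(x) = 2e^(2x) − e^(4x) computed by hand:

```python
# Check that y(x) = 2e^{2x} - e^{4x} satisfies y'' - 6y' + 8y = 0
# together with the initial conditions y(0) = 1, y'(0) = 0.
import math

def y(x):   return 2 * math.exp(2 * x) - math.exp(4 * x)
def dy(x):  return 4 * math.exp(2 * x) - 4 * math.exp(4 * x)   # y'
def d2y(x): return 8 * math.exp(2 * x) - 16 * math.exp(4 * x)  # y''

assert abs(y(0) - 1) < 1e-12 and abs(dy(0)) < 1e-12
for x in (-1.0, 0.0, 0.5, 1.0):
    assert abs(d2y(x) - 6 * dy(x) + 8 * y(x)) < 1e-8
print("solution verified")
```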

Case 2 – One repeated root k   When the auxiliary equation has one repeated real root k, we only get one solution x_1(t) = Ae^(kt). Using a method known as reduction of order yields the second solution x_2(t) = Bte^(kt), giving the general solution as x(t) = (A + Bt)e^(kt).

Example 2.2. Solve y″ − 8y′ + 16y = 0 subject to y(0) = 3, y′(0) = 10.

In this case the auxiliary equation is k² − 8k + 16 = 0 with repeated root k = 4. Hence the general solution is y(x) = (A + Bx)e^(4x). To find the particular solution we substitute in y(0) = A = 3 and y′(0) = 4A + B = 10, yielding B = −2; hence y(x) = (3 − 2x)e^(4x) is the particular solution.

Case 3 – Two complex roots k = p ± iq   As in the two real roots case, in the complex case the general solution is x(t) = αe^((p+iq)t) + βe^((p−iq)t). By using Euler’s formula we can write this in the form x(t) = e^(pt)(A cos qt + B sin qt), where A and B are real; using trig identities we can write this as x(t) = Ce^(pt) cos(qt − φ), where φ = arctan(B/A) and C = √(A² + B²).

Example 2.3. Solve y″ − 6y′ + 25y = 0 subject to y(0) = 1, y′(0) = −1.

In this case the auxiliary equation is k² − 6k + 25 = 0, with roots k = 3 ± 4i. Hence the general solution is y(x) = e^(3x)(A cos 4x + B sin 4x). To find the particular solution we substitute in y(0) = A = 1 and y′(0) = 3A + 4B = −1, so B = −1. Hence y(x) = e^(3x)(cos 4x − sin 4x).

The equation mẍ + cẋ + kx = 0 models a mass–spring system: the mass is m, the spring constant is k, and the friction damping is measured by c. This has four cases: if c = 0 there is no damping, while if c ≠ 0 the number of roots of the auxiliary equation determines the kind of motion; for c² − 4mk < 0 we call the system underdamped, for c² − 4mk = 0 critically damped, and for c² − 4mk > 0 overdamped.

2.2 Inhomogeneous Linear Second-Order ODEs with Constant Coefficients


In the inhomogeneous case, where f(t) is non-zero, i.e.

    a d²x/dt² + b dx/dt + cx = f(t),   (7)

we note that by linearity, if x_p(t) is any solution of (7) and x_h(t) is the general solution of the homogeneous case (6), then the general solution of (7) is given by x(t) = x_h(t) + x_p(t). So to solve (7) we solve the homogeneous case to find the complementary function x_h(t), find a particular integral x_p(t), and add the two solutions together. We find x_p(t) by the method of “inspired guesswork”: we try a general form, differentiate it and substitute in. Functions to “guess” are as follows:

    f(t)                               Try solution x_p(t) =

    ae^(kt) (k not a root)             Ae^(kt)
    ae^(kt) (k a root)                 Ate^(kt)
    ae^(kt) (k a repeated root)        At²e^(kt)
    a sin(ωt) or a cos(ωt)             A sin(ωt) + B cos(ωt)
    atⁿ where n ∈ N                    P(t)
    atⁿe^(kt)                          P(t)e^(kt)
    tⁿ(a sin(ωt) + b cos(ωt))          P_1(t) sin(ωt) + P_2(t) cos(ωt)
    e^(kt)(a sin(ωt) + b cos(ωt))      e^(kt)(A sin(ωt) + B cos(ωt))

Here P(t), P_1(t), P_2(t) are general polynomials of degree n, i.e. a_0 + a_1 t + · · · + a_n tⁿ. In general, try a particular integral of the same general form as f(t); if that solves the homogeneous case then multiply by t until it doesn’t.

Example 2.4. Find the general solution of y″ − 6y′ + 8y = e^(2x).

We know from Example 2.1 that y_h(x) = Ae^(2x) + Be^(4x) solves the homogeneous case. As e^(2x) thus solves the homogeneous case, we try a particular integral of the form y_p(x) = Cxe^(2x). Differentiating and substituting in gives e^(2x)[(4Cx + 4C) − 6(2Cx + C) + 8Cx] = e^(2x), and hence C = −1/2, so the general solution is y(x) = Ae^(2x) + Be^(4x) − (1/2)xe^(2x).
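The constant in a trial particular integral can also be found numerically: substitute the trial with C = 1 and see what multiple of the right-hand side comes out. A Python sketch (not part of the guide) for y″ − 6y′ + 8y = e^(2x):

```python
# Determine C in the trial y_p = C x e^{2x} for y'' - 6y' + 8y = e^{2x}:
# substitute the trial with C = 1 and read off the coefficient of e^{2x}.
import math

def lhs(yfunc, x, h=1e-5):
    # y'' - 6y' + 8y via central finite differences
    d1 = (yfunc(x + h) - yfunc(x - h)) / (2 * h)
    d2 = (yfunc(x + h) - 2 * yfunc(x) + yfunc(x - h)) / h**2
    return d2 - 6 * d1 + 8 * yfunc(x)

trial = lambda x: x * math.exp(2 * x)      # the trial with C = 1
x0 = 0.7
coeff = lhs(trial, x0) / math.exp(2 * x0)  # lhs = coeff * e^{2x}
C = 1 / coeff                              # rescale so lhs equals e^{2x}
print(round(C, 4))  # → -0.5
```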

3 Introduction to Difference Equations


So far we have been mainly interested in finding explicit solutions to ODEs. However, the circumstances in which we can find such explicit solutions are limited, and we therefore often turn to methods of numerical approximation. The basic principle is that we increase the time t in small increments and generate an approximation to the solution, thus considering time not as a continuous variable but rather in discrete steps. The idea of discrete solutions turns out to be quite profitable, since often we only want a model relating one day/hour/year to the next. In general, an equation relating one value to the previous values, such as

    x_{n+1} = f(x_n, x_{n−1}, . . . ),

is called a difference equation; the values (x_n)_{n=0}^∞ form a sequence.

Example 3.1 (Euler’s method). Consider an initial value problem

    dx/dt = f(t, x),   x(0) = x_0.   (8)

Euler’s method is a method of obtaining approximate solution values at a discrete set of points. For this method, we choose a small time step h and make the assumption that over an interval of length h the derivative dx/dt is constant; then, by the Taylor expansion³,

    x(t + h) ≈ x(t) + hẋ(t) = x(t) + hf(t, x(t)).

(We can ignore subsequent terms in the Taylor expansion since we are assuming that dx/dt is constant over the small timestep h, and thus d²x/dt² = 0.) Writing x_n for the approximation to x(nh) and starting with x_0 = x(0), we can substitute in to get

    x_{n+1} = x_n + hf(nh, x_n).
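As a concrete illustration (not from the original guide), here is Euler’s method in a few lines of Python, tested on dx/dt = x with x(0) = 1, whose exact solution is x(t) = e^t:

```python
import math

def euler(f, x0, n, h, t0=0.0):
    """Take n Euler steps x_{k+1} = x_k + h f(t_k, x_k)."""
    t, x = t0, x0
    for _ in range(n):
        x = x + h * f(t, x)
        t = t + h
    return x

# 1000 steps of size 0.001 integrate up to t = 1.
approx = euler(lambda t, x: x, 1.0, 1000, 0.001)
print(round(approx, 3), round(math.exp(1), 3))  # → 2.717 2.718
```

The approximation improves as h shrinks, at the cost of more steps.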

Definition 3.2. The order of a difference equation is the difference between the highest index of x and the lowest, e.g. x_{n+5} − x_{n+3} = e_{n+1} has order 2.

Under certain circumstances, we can find explicit solutions to difference equations, much as we could with ODEs.

Example 3.3. If we have a simple first-order linear homogeneous difference equation x_{n+1} = kx_n, by iterating this we get the solution

    x_n = kx_{n−1} = k²x_{n−2} = · · · = k^{n−1}x_1 = kⁿx_0.

3.1 Second-Order Linear Difference Equations with Constant Coefficients


We now turn our attention to difference equations of the form

    x_{n+2} + ax_{n+1} + bx_n = f(n).   (9)

In the homogeneous case, i.e. when f(n) ≡ 0, by analogy with first-order difference equations we try a solution of the form x_n = Ckⁿ for some k, giving us Ck^{n+2} + aCk^{n+1} + bCkⁿ = Ckⁿ(k² + ak + b) = 0. One trivial solution of this is k = 0, i.e. x_n = 0 for all n. Alternatively, k is a root of the auxiliary equation k² + ak + b = 0. By analogy with second-order differential equations, we have three cases depending on the roots of the auxiliary equation.

Distinct real roots k_1, k_2   When k_1 and k_2 are both roots of the auxiliary equation, the general solution is simply x_n = Ak_1ⁿ + Bk_2ⁿ.

Repeated real root k = −a/2   Just like in second-order differential equations, the general solution when the auxiliary equation has a repeated root is x_n = (A + Bn)kⁿ.

Complex roots k = p ± iq   Writing r = √(p² + q²) and θ = arctan(q/p), the roots become k = re^(±iθ), in which case the general solution is x_n = rⁿ(A cos nθ + B sin nθ).
Example 3.4 (Fibonacci numbers). The difference equation x_{n+2} = x_{n+1} + x_n, or x_{n+2} − x_{n+1} − x_n = 0, with the condition x_0 = x_1 = 1 describes the Fibonacci numbers 1, 1, 2, 3, 5, 8, . . . . The auxiliary equation in this case is k² − k − 1 = 0, with roots k = (1 ± √5)/2. Plugging these roots into our general solution gives

    x_n = A((1 + √5)/2)ⁿ + B((1 − √5)/2)ⁿ.

The condition x_0 = x_1 = 1 gives us the particular solution

    x_n = (1/√5)((1 + √5)/2)^(n+1) − (1/√5)((1 − √5)/2)^(n+1).
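The closed form can be checked against the recurrence directly; a Python sketch (not from the guide):

```python
import math

def fib_closed(n):
    # x_n = (1/sqrt5)(((1+sqrt5)/2)^{n+1} - ((1-sqrt5)/2)^{n+1})
    phi = (1 + math.sqrt(5)) / 2
    psi = (1 - math.sqrt(5)) / 2
    return round((phi ** (n + 1) - psi ** (n + 1)) / math.sqrt(5))

xs = [1, 1]                     # x_0 = x_1 = 1
for _ in range(18):
    xs.append(xs[-1] + xs[-2])  # x_{n+2} = x_{n+1} + x_n

assert all(fib_closed(n) == xs[n] for n in range(20))
print(xs[:8])  # → [1, 1, 2, 3, 5, 8, 13, 21]
```

The rounding is safe here because the (1 − √5)/2 term shrinks geometrically.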

In the inhomogeneous case, i.e. when f(n) ≢ 0, we start by finding the solution to the homogeneous case, the complementary function, e.g. x_n = Ak_1ⁿ + Bk_2ⁿ. To this we then add a particular solution that gives f(n) on the right-hand side, which, again, we find by “inspired guesswork”. Given a form of f(n), we guess a general form of the solution, substitute it in and equate coefficients.

³ For a function f : R → R, the Taylor expansion around the point t is

    f(t + h) = f(t) + hf′(t) + (h²/2!)f″(t) + · · ·

Solution forms to guess are as follows:

    f(n)                           Try solution x_n =

    cnᵐ where m ∈ N                P(n)
    ckⁿ (k not a root)             Ckⁿ
    ckⁿ (k a root)                 Cnkⁿ
    ckⁿ (k a repeated root)        Cn²kⁿ

Here P(n) is a general polynomial of degree m.

Example 3.5. Find the general solution of x_{n+1} − 2x_n + x_{n−1} = 8.

We first solve the homogeneous equation y_{n+1} − 2y_n + y_{n−1} = 0; this has auxiliary equation k² − 2k + 1 = 0, and hence repeated root k = 1, so the complementary function is y_n = A + Bn. Since both x_n = C and x_n = Cn solve the homogeneous case, we try x_n = Cn²; this yields C(n + 1)² − 2Cn² + C(n − 1)² = 8; expanding and cancelling yields C = 4. Hence the general solution is x_n = A + Bn + 4n².
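Again this is easy to verify mechanically; the Python sketch below (not from the guide) checks the general solution for several choices of the arbitrary constants:

```python
# Check that x_n = A + Bn + 4n^2 satisfies x_{n+1} - 2x_n + x_{n-1} = 8
# for any choice of the arbitrary constants A and B.
def x(n, A, B):
    return A + B * n + 4 * n**2

for A, B in [(0, 0), (3, -2), (-1, 7)]:
    for n in range(1, 10):
        assert x(n + 1, A, B) - 2 * x(n, A, B) + x(n - 1, A, B) == 8
print("general solution verified")
```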

3.2 Nonlinear First-Order Difference Equations


So far we have only looked at linear difference equations. Solving nonlinear difference equations, even first-order ones, can be extremely difficult. We consider a general autonomous first-order difference equation, x_{n+1} = f(x_n) (autonomous meaning that f doesn’t depend on n). Picking an initial value x_0, we see that x_n = fⁿ(x_0), where f²(x) := f(f(x)) and fⁿ(x) := f(f^(n−1)(x)). To study the behaviour of such equations, we consider their fixed points:

Definition 3.6. A fixed point of x_{n+1} = f(x_n) is a point x∗ such that f(x∗) = x∗.

Note the difference between difference equations and differential equations: for differential equations, fixed points are found by solving dx/dt = f(x) = 0, while for difference equations we want f(x) = x.

Example 3.7. The linear example x_{n+1} = kx_n has solution x_n = x_0 kⁿ. This has a fixed point at x∗ = 0; when |k| < 1, x_n → 0, so we call 0 a stable fixed point; when |k| > 1, 0 is an unstable fixed point.

In general, using the Taylor expansion it can be shown that if |f′(x∗)| < 1, we have a stable fixed point, while for |f′(x∗)| > 1 we have an unstable fixed point. For f′(x∗) = ±1 the system is structurally unstable, since a small change to f could tip it to being either stable or unstable.

Example 3.8. The logistic equation is x_{n+1} = λx_n(1 − x_n), where x_n ∈ [0, 1] for all n. To find its fixed points we solve x∗ = λx∗(1 − x∗), yielding x∗ = 0 and x∗ = (λ − 1)/λ as the two fixed points. Now f′(x) = λ(1 − 2x), so f′(0) = λ and f′((λ − 1)/λ) = 2 − λ. Hence for 0 < λ < 1, 0 is a stable fixed point (and as (λ − 1)/λ < 0 it is not a fixed point in [0, 1]), while for 1 < λ < 3, 0 is an unstable fixed point and (λ − 1)/λ is a stable fixed point. (When 3 < λ ≤ 4, both fixed points are unstable, and much more complicated behaviour occurs.)
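Iterating the map makes the stability visible. A Python sketch (not from the guide), using λ = 2 so that the fixed point (λ − 1)/λ = 1/2 should be stable:

```python
# Iterate the logistic map x_{n+1} = lam * x_n * (1 - x_n) with lam = 2;
# the orbit should converge to the stable fixed point (lam - 1)/lam = 0.5.
lam = 2.0
x = 0.1
for _ in range(50):
    x = lam * x * (1 - x)
print(round(x, 6))  # → 0.5
```

Trying λ = 3.9 instead produces the much more complicated behaviour mentioned above.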

4 Systems of First-Order ODEs


Many physical systems require the solution not of a single differential equation but rather the solution of several “coupled” equations. An n × n system of first-order differential equations looks like

    ẋ_1 = f_1(x_1, x_2, . . . , x_n)
    ẋ_2 = f_2(x_1, x_2, . . . , x_n)
        ⋮
    ẋ_n = f_n(x_1, x_2, . . . , x_n),

where each x_i is a function of t. We can write this in vector form as ẋ = f(x), where x = (x_1, . . . , x_n), ẋ = (ẋ_1, . . . , ẋ_n) and f(x) = (f_1(x_1, . . . , x_n), . . . , f_n(x_1, . . . , x_n)). Recall that the partial derivative of g with respect to x_i, denoted by ∂g/∂x_i, is the result of differentiating g with respect to x_i while treating all other x_j as constants. For a vector field f : Rⁿ → Rⁿ, we form the Jacobian matrix:
 
    Df = [ ∂f_1/∂x_1   ∂f_1/∂x_2   . . .   ∂f_1/∂x_n ]
         [ ∂f_2/∂x_1   ∂f_2/∂x_2   . . .   ∂f_2/∂x_n ]
         [     ⋮           ⋮                   ⋮     ]
         [ ∂f_n/∂x_1   ∂f_n/∂x_2   . . .   ∂f_n/∂x_n ]

If each of the partial derivatives in the Jacobian matrix is sufficiently “nice”, then there exists a unique solution to the initial value problem ẋ(t) = f(x), x(t_0) = x_0, on some interval a < t < b.

4.1 Homogeneous Linear 2 × 2 Systems with Constant Coefficients


Consider systems of the form

    dx/dt = px + qy
    dy/dt = rx + sy.

Letting x = (x, y) and A = [p q; r s] (writing matrices row by row), we can write this as ẋ = Ax. Appealing to previous solutions, we try a solution of the form x = e^(λt)v, which yields λe^(λt)v = Ae^(λt)v, or, since e^(λt) ≠ 0, Av = λv. Hence x = e^(λt)v is a solution if λ is an eigenvalue of A and v is a corresponding eigenvector. So the form of the solution to the 2 × 2 system above depends on the eigenvalues of A:

1. Distinct real eigenvalues   In this case we have two distinct eigenvalues λ_1 and λ_2, with eigenvectors v_1 and v_2 respectively, and the general solution of the system is x(t) = Ae^(λ_1 t)v_1 + Be^(λ_2 t)v_2.
Example 4.1. Find the general solution of the coupled system ẋ = x + y, ẏ = 4x − 2y.

Rewriting in matrix form gives ẋ = Ax with A = [1 1; 4 −2]. We find the eigenvalues as the solutions of det(A − λI) = 0, which yields λ = 2, −3. An eigenvector for λ = 2 is (1, 1) and an eigenvector for λ = −3 is (1, −4). Hence the general solution is

    x(t) = ae^(2t)(1, 1) + be^(−3t)(1, −4),

where a and b are arbitrary constants (to take account of the fact that the eigenvector is not unique).
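For a 2 × 2 matrix the eigenvalues solve λ² − (tr A)λ + det A = 0, so answers like Example 4.1’s are easy to check by hand or in a few lines of Python (not from the guide):

```python
import math

A = [[1, 1], [4, -2]]                        # the matrix of Example 4.1
tr = A[0][0] + A[1][1]                       # trace = -1
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]  # determinant = -6
disc = math.sqrt(tr * tr - 4 * det)
lams = sorted([(tr - disc) / 2, (tr + disc) / 2])
print(lams)  # → [-3.0, 2.0]

def matvec(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

# Each claimed eigenvector v satisfies A v = lambda v.
assert matvec(A, [1, 1]) == [2, 2]      # lambda = 2
assert matvec(A, [1, -4]) == [-3, 12]   # lambda = -3
```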

When A is non-singular (i.e. det A ≠ 0), ẋ = Ax = 0 if and only if x = 0; thus x = 0 is the only fixed point. We determine the stability of the fixed point by looking at the eigenvalues. If the initial value x_0 is a multiple of one of the eigenvectors, then the sign of the corresponding eigenvalue determines the stability: negative is stable, positive is unstable. Every other initial condition is just some linear combination of these. If both eigenvalues are negative, we call 0 a sink, and if both are positive we call it a source.

Having found the eigenvalues and eigenvectors of A, we can form the matrix P = (v_1 | v_2), and then P⁻¹AP is a diagonal matrix. Hence we can change coordinates using y = P⁻¹x to get ẏ_1 = λ_1 y_1, ẏ_2 = λ_2 y_2, and we can then solve these separately to get y_1 = ae^(λ_1 t), y_2 = be^(λ_2 t); this is known as decoupling the system.

2. Complex eigenvalues   It can be shown that, for a real matrix, complex eigenvalues (and their corresponding eigenvectors) occur in complex conjugate pairs; so if λ = p + iq is an eigenvalue with corresponding eigenvector v = v_1 + iv_2, then so is λ̄ = p − iq with v̄ = v_1 − iv_2. Hence, just as in the two real roots case, we can write the general solution as x(t) = ce^(λt)v + c̄e^(λ̄t)v̄, which can be rearranged to

    x(t) = e^(pt)[(a cos qt + b sin qt)v_1 + (b cos qt − a sin qt)v_2],

where λ = p + iq and v = v_1 + iv_2.


Example 4.2. Find the general solution of the coupled system ẋ = 5y + 2x, ẏ = −2x.

Rewriting in matrix form gives ẋ = Ax with A = [2 5; −2 0]. We find that its eigenvalues are λ = 1 ± 3i, with corresponding eigenvectors v_1 ± iv_2 = (5, −1) ± i(0, 3). Hence the general solution is

    x(t) = e^t[(a cos 3t + b sin 3t)(5, −1) + (b cos 3t − a sin 3t)(0, 3)],

where a and b are arbitrary constants.
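The same trace–determinant check works for complex eigenvalues if we use Python’s cmath (a sketch, not from the guide):

```python
import cmath

# A = [[2, 5], [-2, 0]] has trace 2 and determinant 10.
tr, det = 2, 10
disc = cmath.sqrt(tr * tr - 4 * det)  # sqrt(-36) = 6i
lam = (tr + disc) / 2
print(lam)  # → (1+3j)
```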

Just as in the two real eigenvalue case, the real part of the eigenvalues determines the stability of the fixed point at the origin: if p = Re(λ) < 0 then (0, 0) is stable, but if p = Re(λ) > 0 then (0, 0) is unstable. The imaginary part, which gives rise to the [(a cos qt + b sin qt)v_1 + (b cos qt − a sin qt)v_2] part, makes the solutions spiral; the sign of q decides the direction in which they spiral (positive is clockwise, negative is anticlockwise). We can see this by “diagonalising” A using⁴ P = (v_1 | v_2) to get P⁻¹AP = [p q; −q p], and then changing to polar coordinates, which decouples the system to ṙ = pr, θ̇ = −q; this has solution r(t) = ae^(pt), θ(t) = −qt + c.

3. Repeated real eigenvalues   When we only have one eigenvalue λ and one corresponding eigenvector⁵ v, we only have one solution e^(λt)v. Trying a more general solution x(t) = e^(λt)a + te^(λt)b for constant vectors a, b yields the equations (A − λI)a = b and (A − λI)b = 0. So we take b = v, our eigenvector, and then find⁶ some u such that (A − λI)u = v, giving us the general solution

    x(t) = ae^(λt)v + be^(λt)(u + tv).


Example 4.3. Find the general solution of the coupled system ẋ = 5x − 4y, ẏ = x + y.

Rewriting in matrix form gives ẋ = Ax with A = [5 −4; 1 1]. We find its only eigenvalue to be λ = 3, with corresponding eigenvector v = (2, 1). We thus need some u such that (A − λI)u = v; one such vector is u = (1, 0). Hence the general solution is

    x(t) = ae^(3t)(2, 1) + be^(3t)(1 + 2t, t),

where a and b are arbitrary constants.
⁴ Note that P is not a matrix of eigenvectors, since the eigenvectors are v_1 ± iv_2.
⁵ It may happen that there are two linearly independent eigenvectors corresponding to the same eigenvalue; in this case the matrix will be a multiple of the identity and so the system will already be decoupled.
⁶ This is the process of finding a Jordan chain; forming the matrix P = (v | u) yields the near-diagonal Jordan canonical form P⁻¹AP = [λ 1; 0 λ]. This will be covered in MA251 Algebra I: Advanced Linear Algebra.
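The Jordan-chain computation in Example 4.3 reduces to two matrix–vector products, which the following Python sketch (not from the guide) verifies:

```python
A = [[5, -4], [1, 1]]   # the matrix of Example 4.3, eigenvalue 3

def matvec(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

B = [[A[0][0] - 3, A[0][1]],
     [A[1][0], A[1][1] - 3]]        # B = A - 3I
assert matvec(B, [2, 1]) == [0, 0]  # v = (2, 1) is an eigenvector
assert matvec(B, [1, 0]) == [2, 1]  # (A - 3I) u = v for u = (1, 0)
print("Jordan chain verified")
```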

Phase Diagrams   In each of the cases mentioned above, we may draw phase portraits. To construct a phase portrait for a system of coupled equations we must rewrite the system in the form ẋ = Ax. Once we have the matrix A, we can find its eigenvalues and eigenvectors as above. If we have real eigenvalues then we plot the eigenvectors as lines in the (x, y)-plane. If the corresponding eigenvalue is negative then we draw arrows on the line towards the origin (the stable direction); if it is positive we draw the arrows away from the origin (the unstable direction). Then we ‘fill in’ the space between the eigenvectors. Notice that the signs of the eigenvalues determine the directions of the arrows on the eigenvectors, and the magnitudes determine the sizes of the arrows. [Example phase portraits⁷ appear here in the original.]

If instead the eigenvalues are complex, they have the form λ± = α ± iβ where α, β ∈ R, and we need only concern ourselves with the real part α: if α > 0 then the origin is unstable, and if α < 0 then the origin is stable. [Example phase portraits⁸ for α = 0.55, α = 0 and α = −0.35 appear here in the original.]

Second-Order Differential Equations   Given a second-order ODE with constant coefficients, such as aẍ + bẋ + cx = 0, we can set y = ẋ and rewrite this as the coupled system

    ẋ = y
    ẏ = −(c/a)x − (b/a)y,

or, in matrix form, ẋ = [0 1; −c/a −b/a] x.

The eigenvalues of this system solve aλ² + bλ + c = 0, i.e. the auxiliary equation!
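This correspondence is quick to confirm: the companion matrix [0 1; −c/a −b/a] has characteristic polynomial λ² − (tr)λ + det, which matches the auxiliary equation after dividing through by a. A Python sketch (not from the guide), using the coefficients of Example 2.1:

```python
a, b, c = 1.0, -6.0, 8.0              # a x'' + b x' + c x = 0 (Example 2.1)
M = [[0.0, 1.0], [-c / a, -b / a]]    # companion matrix with y = x'

tr = M[0][0] + M[1][1]
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
assert tr == -b / a and det == c / a  # char. poly = k^2 + (b/a)k + (c/a)

for k in (2.0, 4.0):                  # roots of the auxiliary equation
    assert k * k - tr * k + det == 0.0
print("eigenvalues solve the auxiliary equation")
```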

⁷ These examples were created using the Wolfram Demonstrations Project, which I suggest you all go out and try: http://demonstrations.wolfram.com/PhasePortraitAndFieldDirectionsOfTwoDimensionalLinearSystems/
⁸ Again, these were made in a Wolfram Demonstrations Project.

4.2 Nonlinear Systems


Directional Fields   Given a coupled system of nonlinear differential equations, we may wish to draw a directional field for it. This is a graphical representation of the problem which allows us to make qualitative judgements about the solution without having to solve the problem analytically. So, given a system such as

    ẋ = −x − 2x²y + y
    ẏ = −x − y,

we can plot at each point (x, y) an arrow which represents the value of ẋ = (ẋ, ẏ). The length of the arrow is proportional to the magnitude of ẋ, and it points in the direction (ẋ, ẏ). Having drawn the arrows at each point, we can consider different starting positions as initial conditions, and from a particular (x_0, y_0) we can follow the path of arrows tangentially to see how that particular initial condition behaves. [The direction field for the example above, plotted on −1 ≤ x, y ≤ 1, appears here in the original.]

Nonlinear 2 × 2 Systems   (Most of this section is preparation for MA134 Geometry and Motion, and is unlikely to be examined.) We can show that, near a fixed point x∗ (where ẋ = 0), we can approximate the nonlinear system

    dx/dt = f_1(x, y)
    dy/dt = f_2(x, y)

using the Jacobian matrix

    Df(x∗) = [ ∂f_1/∂x(x∗)   ∂f_1/∂y(x∗) ]
             [ ∂f_2/∂x(x∗)   ∂f_2/∂y(x∗) ].

That is, close to the fixed point x∗, the solutions behave like those of the linear system u̇ = Df(x∗)u, where u = x − x∗. This allows us to determine the stability of fixed points, and hence to sketch the phase portrait.

Closing Remarks
As you can see, there’s not all that much material to Differential Equations, but there is scope for getting
a bit confused, particularly due to all the different cases for solutions. Memorise them as best you can
and be prepared to handle any differential equation that’s thrown at you. Books like James Robinson’s
excellent An Introduction to Ordinary Differential Equations and William Boyce and Richard DiPrima’s
classic Elementary Differential Equations and Boundary Value Problems have literally hundreds of ques-
tions for those wanting practice, and practising lots of questions is the only way to do well. So practise,
practise, practise, and good luck in the exam!
