MA133
Differential Equations
Revision Guide
WMS
Contents
0 Introduction and Notation
1 First-Order ODEs
Introduction
This revision guide for MA133 Differential Equations has been designed as an aid to revision, not a
substitute for it. Differential Equations is very much an applied course, in which the emphasis is on
problem-solving rather than justifying every step. So, the best way to revise is to use this revision guide
as a quick reference for the theory, and to just keep trying example sheets and past exam questions.
Hopefully this guide should give you some confidence by showing you that there isn’t too much to the
module: at least not as much material as it appears from the wad of lecture notes and assignments you
have amassed.
Disclaimer: Use at your own risk. No guarantee is made that this revision guide is accurate or
complete, or that it will improve your exam performance. Use of this guide will increase entropy,
contributing to the heat death of the universe. Contains no GM ingredients. Your mileage may vary.
All your base are belong to us.
Authors
Written by D. S. McCormick (d.s.mccormick@warwick.ac.uk).
Based upon lectures given by Dave Wood at the University of Warwick, 2005 and 2006. Further additions
were made in 2012 by Matthew Lee (matthew.lee@warwick.ac.uk) following advice from Jess Lewton.
Many thanks to The Ghost Readers for their efforts to ensure these notes were as accurate as possible.
Any corrections or improvements should be entered into our feedback form at http://tinyurl.com/WMSGuides
(alternatively email revision.guides@warwickmaths.org).
0 Introduction and Notation
• Variables measure things – the dependent variables depend on the independent variables; for example, for a moving particle, as time (the independent variable) passes, the position (the dependent variable) changes, which we can express as x = x(t). We are interested in rates of change, i.e. the dependent variable(s) differentiated with respect to the independent variable(s); for our example we might consider dx/dt.
• A differential equation is an equation relating variables and their derivatives, e.g. dy/dx = f(x, y).
• If the equation has only one independent variable, it is called an ordinary differential equation or ODE. When it’s obvious what this independent variable is, we use the notation y′ for dy/dx; if the independent variable is time, we use the notation ẋ for dx/dt.
• If it has more than one independent variable, for example u(x, y), this has two partial derivatives, ∂u/∂x and ∂u/∂y; a differential equation involving this kind of function is called a partial differential equation or PDE¹.
• The order of a differential equation is the order of the highest derivative in the equation; for example, d²y/dx² = −y is second-order.
• A linear ODE of order n depends linearly on the dependent variable and its derivatives, i.e. it can be written in the form
a_n(t) dⁿy/dtⁿ + · · · + a_1(t) dy/dt + a_0(t) y = f(t),
and does not involve nonlinear functions of the dependent variable such as y² or (dy/dx)² or sin y.
1 First-Order ODEs
The simplest ODEs are those of the form dy/dx = f(x). We can solve these using the Fundamental Theorem of Calculus (FTC):
Theorem 1.1 (FTC). For continuous f : [a, b] → R, define g(x) = ∫_a^x f(x̃) dx̃. Then dg/dx(x) = f(x). Furthermore, ∫_a^b f(x) dx = F(b) − F(a) for any F with F′(x) = f(x) (i.e. any antiderivative of f).
This means that the general solution to dy/dx = f(x) is y(x) = ∫ f(x) dx = F(x) + c, where F is any antiderivative of f and c is a constant of integration. We specify a particular solution by saying what y(x) is at a particular value of x; this kind of problem is known as an initial value problem.
Probably the most general initial value problem for first-order ODEs is
dx/dt(t) = f(x, t),  x(t0) = x0.  (1)
A solution of this ODE is a function x(t) with x(t0) = x0 and ẋ(t) = f(x, t).
In general, the initial value problem in equation (1) may not have a solution, or it may have more than one. We will assume that f is sufficiently “nice”² to prevent such nasty things happening, in which case the IVP will have exactly one solution.
Even if we are guaranteed that the solution exists, we cannot always find an explicit solution to a given ODE. For example, solving dx/dt = e^{−t²} using the FTC yields x(t) = x0 + ∫_0^t e^{−s²} ds, where x(0) = x0; this integral cannot be evaluated explicitly. Here, and elsewhere, qualitative solution methods become as important as quantitative ones.
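As a brief aside (this example is ours, not part of the original guide): integrals like ∫_0^t e^{−s²} ds can still be evaluated numerically, and Python’s standard library even provides this particular one via the error function, since ∫_0^t e^{−s²} ds = (√π/2) erf(t). A minimal sketch comparing a trapezium-rule approximation with math.erf:

```python
import math

def integral_exp_minus_s2(t, n=1000):
    """Approximate the integral of e^(-s^2) from 0 to t with the
    composite trapezium rule on n subintervals."""
    h = t / n
    total = 0.5 * (1.0 + math.exp(-t * t))  # endpoint terms (e^0 = 1)
    for i in range(1, n):
        s = i * h
        total += math.exp(-s * s)
    return total * h

t = 1.5
numeric = integral_exp_minus_s2(t)
closed = 0.5 * math.sqrt(math.pi) * math.erf(t)  # same integral via erf
print(numeric, closed)  # the two values agree to about 6 decimal places
```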
¹We are not interested in PDEs here; these will be studied in detail in MA250 Introduction to PDEs.
²The technical requirements are that f(x, t) and ∂f/∂x(x, t) be continuous in some rectangle containing (x0, t0). Don’t worry if this is all a bit confusing; this is an applied course, so we don’t worry too much about technical details. The proof will come in MA244 Analysis III.
where A is a constant of integration. There are two parts to the solution: x_p(t) is called the particular solution, since it is particular to g(t), and x_h(t) is called the complementary solution, since it is the solution to the homogeneous case dx/dt + r(t)x = 0.
As long as we can perform the integration ∫ r(t) dt, we can always find an expression for the solution; and if we can also perform the integration ∫ e^{∫ r(t) dt} g(t) dt, that solution is explicit.
Example 1.2. As with most solution methods for ODEs, it is much better to be able to apply the
solution method than to memorise the solution formula by rote. So we now consider the ODE
(x² + 1) dy/dx + 4xy = 12x
as an example. We first divide through by (x² + 1) to get rid of the coefficient in front of the dy/dx term, getting
dy/dx + (4x/(x² + 1)) y = 12x/(x² + 1),
which is linear. The integrating factor is
exp(∫ 4x/(x² + 1) dx) = exp(2 log(x² + 1)) = (x² + 1)².
So multiplying both sides by (x² + 1)² gives
(x² + 1)² dy/dx + 4x(x² + 1) y = d/dx[(x² + 1)² y] = 12x(x² + 1).
Integrating both sides gives (x² + 1)² y = 3(x² + 1)² + c, so the general solution is
y(x) = 3 + c/(x² + 1)²,
where c is a constant of integration.
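A quick numerical sanity check (ours, not part of the guide): substituting y(x) = 3 + c/(x² + 1)², with its hand-computed derivative y′(x) = −4cx/(x² + 1)³, back into the original equation should give 12x for every x and every c.

```python
def lhs(x, c):
    """Evaluate (x^2 + 1) y' + 4 x y for y = 3 + c/(x^2 + 1)^2,
    using the derivative y' = -4 c x / (x^2 + 1)^3 computed by hand."""
    y = 3 + c / (x**2 + 1) ** 2
    dy = -4 * c * x / (x**2 + 1) ** 3
    return (x**2 + 1) * dy + 4 * x * y

# The left-hand side should equal 12x regardless of x and c.
for x in [-2.0, -0.5, 0.3, 1.7]:
    for c in [0.0, 1.0, -3.5]:
        assert abs(lhs(x, c) - 12 * x) < 1e-9
print("general solution verified")
```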
This assumes, however, that f(x) ≠ 0 for all x. If f(x0) = 0 for some particular x0, then the constant function x(t) = x0 is also a solution of (3), since then dx/dt = 0 = f(x0)g(t).
Example 1.3. Solve the initial value problem dy/dx = 2/sin y with the condition y(0) = 0.
Noting that 1/sin y ≠ 0 for any y, we see there are no constant solutions. So separating variables gives
∫_0^y sin ỹ dỹ = ∫_0^x 2 dx̃.
a(t) d²x/dt² + b(t) dx/dt + c(t)x = f(t).  (4)
The underlying assumption of linearity means that if x1(t) and x2(t) are two solutions of the homogeneous version of (4) (i.e. with f(t) ≡ 0) then so is αx1(t) + βx2(t) for any choice of α, β ∈ R. Furthermore, if the two solutions are linearly independent, i.e. α1x1(t) + α2x2(t) = 0 for all t implies α1 = α2 = 0, then every solution of the homogeneous equation is a linear combination of them. In addition to two linearly independent solutions, we need not one but two initial conditions. In this case, the IVP
a(t) d²x/dt² + b(t) dx/dt + c(t)x = f(t),  x(t0) = x0,  ẋ(t0) = v0,  (5)
has a unique solution if the functions a, b, c, f are “nice”; we will assume that they are. A general solution
to (4) will contain two arbitrary constants.
a d²x/dt² + b dx/dt + cx = 0.  (6)
Trying a solution of the form x = e^{kt} gives ak²e^{kt} + bke^{kt} + ce^{kt} = 0. Noting that e^{kt} ≠ 0 yields the auxiliary equation ak² + bk + c = 0. For e^{kt} to solve the equation, k must solve the auxiliary equation, and the values of a, b and c determine how many roots the auxiliary equation has:
Case 1 – Two real roots k1, k2 When the auxiliary equation ak² + bk + c = 0 has two real roots k1, k2, we have x1(t) = Ae^{k1 t} and x2(t) = Be^{k2 t} as two linearly independent solutions. Hence the general solution of (6) in this case is x(t) = Ae^{k1 t} + Be^{k2 t}.
Case 2 – One repeated root k When the auxiliary equation has one repeated real root, we only get one solution x1(t) = Ae^{kt}. Using a method known as reduction of order yields the second solution x2(t) = Bte^{kt}, giving the general solution as x(t) = (A + Bt)e^{kt}.
Case 3 – Two complex roots k = p ± iq As in the two real roots case, in the complex case the general solution is x(t) = αe^{(p+iq)t} + βe^{(p−iq)t}. By using Euler’s formula we can write this in the form x(t) = e^{pt}(A cos qt + B sin qt), where A and B are real; using trig identities we can write this as x(t) = Ce^{pt} cos(qt − φ), where φ = arctan(B/A) and C = √(A² + B²).
The equation mẍ + cẋ + kx = 0 models a mass–spring system; the mass is m, the spring constant is k and the friction damping is measured by c. This has four cases: if c = 0 there is no damping, but if c ≠ 0 then the sign of c² − 4mk determines the kind of motion; for c² − 4mk > 0 we call the system overdamped, c² − 4mk = 0 is critically damped and c² − 4mk < 0 is underdamped.
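The classification is mechanical, so it is easy to code up. A sketch (the function name is ours) that classifies mẍ + cẋ + kx = 0 via the discriminant c² − 4mk of the auxiliary equation mr² + cr + k = 0:

```python
import cmath

def classify(m, c, k):
    """Classify the motion of m x'' + c x' + k x = 0 via the roots of the
    auxiliary equation m r^2 + c r + k = 0."""
    disc = c * c - 4 * m * k
    roots = ((-c + cmath.sqrt(disc)) / (2 * m), (-c - cmath.sqrt(disc)) / (2 * m))
    if c == 0:
        kind = "undamped"           # purely imaginary roots: oscillation forever
    elif disc > 0:
        kind = "overdamped"         # two distinct real roots, no oscillation
    elif disc == 0:
        kind = "critically damped"  # one repeated real root
    else:
        kind = "underdamped"        # complex roots: decaying oscillation
    return kind, roots

print(classify(1, 3, 2)[0])  # overdamped (roots -1 and -2)
print(classify(1, 2, 1)[0])  # critically damped (repeated root -1)
print(classify(1, 1, 1)[0])  # underdamped
```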
a d²x/dt² + b dx/dt + cx = f(t),  (7)
we note that by linearity if xp (t) is any solution of (7) and xh (t) is the general solution of the homogeneous
case (6), then the general solution of (7) is given by x(t) = xh (t) + xp (t), provided xp (t) is linearly
independent of xh (t). So to solve (7) we solve the homogeneous case to find the complementary function
xh (t), find a particular integral xp (t), and add the two solutions together. We find xp (t) by the method
of “inspired guesswork”; we try a general form, differentiate it and substitute in. Functions to “guess”
are as follows:
Here P(t), P1(t), P2(t) are general polynomials of degree n, i.e. a0 + a1t + · · · + a_n tⁿ. In general, try a particular integral of the same general form as f(t); if that solves the homogeneous case then multiply by t until it doesn’t.
dx/dt = f(t, x),  x(0) = x0.  (8)
Euler’s method is a method of obtaining approximate solution values at a discrete set of points. For this method, we choose a small time step h and make the assumption that over that interval h the derivative dx/dt is constant, and so, by the Taylor expansion³,
x(t + h) ≈ x(t) + h dx/dt(t).
(We can ignore subsequent terms in the Taylor expansion since we are assuming that dx/dt is constant over the small timestep h, and thus d²x/dt² = 0.) Starting with x0 = x(0) and writing x_n for the approximation to x(nh), we can substitute in to get
x_{n+1} = x_n + hf(nh, x_n).
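Written out in code, the update rule is only a few lines. The sketch below (ours, purely illustrative) applies it to ẋ = x with x(0) = 1, whose exact solution is x(t) = e^t:

```python
import math

def euler(f, x0, h, steps):
    """Euler's method for dx/dt = f(t, x), x(0) = x0, on the grid t_n = n h."""
    x, xs = x0, [x0]
    for n in range(steps):
        x = x + h * f(n * h, x)  # x_{n+1} = x_n + h f(t_n, x_n)
        xs.append(x)
    return xs

xs = euler(lambda t, x: x, 1.0, 0.01, 100)  # integrate dx/dt = x up to t = 1
print(xs[-1], math.e)  # about 2.7048 versus e = 2.71828...; shrinking h reduces the error
```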
Definition 3.2. The order of a difference equation is the difference between the highest index of x and the lowest, e.g. x_{n+5} − x_{n+3} = e_{n+1} has order 2.
Under certain circumstances, we can find explicit solutions to difference equations, much like we could
with ODEs.
Example 3.3. If we have a simple first-order linear homogeneous difference equation x_{n+1} = kx_n, by iterating this we get the solution x_n = k x_{n−1} = k² x_{n−2} = · · · = kⁿ x_0.
In the homogeneous case, i.e. when f(n) ≡ 0, by analogy with first-order difference equations, we try a solution of the form x_n = Ckⁿ for some k, giving us Ck^{n+2} + Cak^{n+1} + Cbkⁿ = Ckⁿ(k² + ak + b) = 0. One trivial solution of this is k = 0, i.e. x_n = 0 for all n. Alternatively, k is a root of the auxiliary equation k² + ak + b = 0. By analogy with second-order differential equations, we have three cases depending on the roots of the auxiliary equation.
Distinct real roots k1, k2 When k1 and k2 are both roots of the auxiliary equation, the general solution is simply x_n = Ak1^n + Bk2^n.
Repeated real roots k = −a/2 Just like in second-order differential equations, the general solution when the auxiliary equation has repeated roots is x_n = (A + Bn)k^n.
Complex roots k = p ± iq Writing r = √(p² + q²) and θ = arctan(q/p), the roots become k = re^{±iθ}, in which case the general solution is x_n = r^n(A cos nθ + B sin nθ).
Example 3.4 (Fibonacci numbers). The difference equation x_{n+2} = x_{n+1} + x_n, or x_{n+2} − x_{n+1} − x_n = 0, with the condition x0 = x1 = 1 describes the Fibonacci numbers 1, 1, 2, 3, 5, 8, . . . . The auxiliary equation in this case is k² − k − 1 = 0, with roots k = (1 ± √5)/2. Plugging these roots into our general solution gives x_n = A((1 + √5)/2)^n + B((1 − √5)/2)^n. The condition x0 = x1 = 1 gives us the particular solution
x_n = (1/√5)((1 + √5)/2)^{n+1} − (1/√5)((1 − √5)/2)^{n+1}.
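The closed form is easy to check against direct iteration of the recurrence (a quick sketch of ours; the floating-point closed form is rounded to the nearest integer):

```python
import math

def fib_iter(n):
    """x_{n+2} = x_{n+1} + x_n with x_0 = x_1 = 1, by direct iteration."""
    a, b = 1, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def fib_closed(n):
    """The closed form obtained from the auxiliary equation k^2 - k - 1 = 0."""
    s5 = math.sqrt(5)
    return round((((1 + s5) / 2) ** (n + 1) - ((1 - s5) / 2) ** (n + 1)) / s5)

assert all(fib_iter(n) == fib_closed(n) for n in range(20))
print([fib_iter(n) for n in range(8)])  # [1, 1, 2, 3, 5, 8, 13, 21]
```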
In the inhomogeneous case, i.e. when f(n) ≠ 0, we start by finding the solution to the homogeneous case, the complementary function, e.g. x_n = Ak1^n + Bk2^n. To this we then add a particular solution that gives f(n) on the right-hand side, which, again, we find by “inspired guesswork”. Given a form of f(n), we guess a general form of the solution, substitute it in and equate coefficients.
³For a function f : R → R, the Taylor expansion around the point t is
f(t + h) = f(t) + hf′(t) + (h²/2!)f″(t) + · · ·
Definition 3.6. A fixed point of xn+1 = f (xn ) is a point x∗ such that f (x∗ ) = x∗ .
Note the difference between difference equations and differential equations; for differential equations fixed points are found by solving dx/dt = f(x) = 0, while for difference equations we want f(x) = x.
Example 3.7. The linear example x_{n+1} = kx_n has solution x_n = x0 k^n. This has a fixed point at x* = 0; when |k| < 1, x_n → 0, so we call 0 a stable fixed point; when |k| > 1, 0 is an unstable fixed point.
In general, using the Taylor expansion it can be shown that if |f 0 (x∗ )| < 1, we have a stable fixed
point, while for |f 0 (x∗ )| > 1 we have an unstable fixed point. For f 0 (x∗ ) = ±1 the system is structurally
unstable, since a small change to f could tip it to being either stable or unstable.
Example 3.8. The logistic equation is x_{n+1} = λx_n(1 − x_n), where x_n ∈ [0, 1] for all n. To find its fixed points we solve x* = λx*(1 − x*), yielding x* = 0 and x* = (λ − 1)/λ as the two fixed points. Now f′(x) = λ(1 − 2x), so f′(0) = λ and f′((λ − 1)/λ) = 2 − λ. Hence for 0 < λ < 1, 0 is a stable fixed point (and as (λ − 1)/λ < 0 it lies outside [0, 1], so is not a valid fixed point), while for 1 < λ < 3, 0 is an unstable fixed point and (λ − 1)/λ is a stable fixed point. (When 3 < λ ≤ 4, both fixed points are unstable, and much more complicated behaviour occurs.)
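Iterating the map makes the stability analysis concrete (an illustration of ours): for λ = 2.5 we have |f′((λ − 1)/λ)| = |2 − λ| = 0.5 < 1, so orbits should settle onto the fixed point (λ − 1)/λ = 0.6.

```python
def iterate_logistic(lam, x0, steps):
    """Iterate x_{n+1} = lam * x_n * (1 - x_n) from x0."""
    x = x0
    for _ in range(steps):
        x = lam * x * (1 - x)
    return x

lam = 2.5
fixed = (lam - 1) / lam                      # the nonzero fixed point, 0.6
x = iterate_logistic(lam, 0.2, 100)
print(x, fixed)                              # the orbit has converged to 0.6
assert abs(x - fixed) < 1e-9
assert abs(iterate_logistic(0.5, 0.2, 100)) < 1e-9  # for 0 < lam < 1, 0 is stable
```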
ẋ1 = f1(x1, x2, . . . , xn)
ẋ2 = f2(x1, x2, . . . , xn)
⋮
ẋn = fn(x1, x2, . . . , xn)
where each xi is a function of t. We can write this in vector form as ẋ = f(x), where x = (x1, . . . , xn), ẋ = (ẋ1, . . . , ẋn) and f(x) = (f1(x1, . . . , xn), . . . , fn(x1, . . . , xn)). Recall that the partial derivative of g with respect to xi, denoted by ∂g/∂xi, is the result of differentiating g with respect to xi while treating all other xj as constants. For a vector field f : R^n → R^n, we form the Jacobian matrix:

Df = [ ∂f1/∂x1  ∂f1/∂x2  · · ·  ∂f1/∂xn
       ∂f2/∂x1  ∂f2/∂x2  · · ·  ∂f2/∂xn
          ⋮        ⋮       ⋱       ⋮
       ∂fn/∂x1  ∂fn/∂x2  · · ·  ∂fn/∂xn ]
If each of the partial derivatives in the Jacobian matrix is sufficiently “nice”, then there exists a unique
solution to the initial value problem ẋ(t) = f (x), x(t0 ) = x0 for some interval a < t < b.
dx/dt = px + qy
dy/dt = rx + sy.
Letting x = (x, y)^T and A = [ p q ; r s ], we can write this as ẋ = Ax. Appealing to previous solutions, we try a solution of the form x = e^{λt}v, which yields λe^{λt}v = Ae^{λt}v, or, since e^{λt} ≠ 0, Av = λv. Hence x = e^{λt}v is a solution if λ is an eigenvalue of A and v is a corresponding eigenvector. So the form of the solution to the 2 × 2 system above depends on the eigenvalues of A:
1. Distinct real eigenvalues In this case, we have two distinct eigenvalues λ1 and λ2, with eigenvectors v1 and v2 respectively; the general solution to the system is then x(t) = Ae^{λ1 t}v1 + Be^{λ2 t}v2, for arbitrary constants A and B.
Example 4.1. Find the general solution of the coupled system ẋ = x + y, ẏ = 4x − 2y.
Rewriting as a matrix equation gives
d/dt (x, y)^T = [ 1 1 ; 4 −2 ] (x, y)^T.
Denoting the matrix on the right-hand side by A, we find the eigenvalues as the solutions of det(A − λI) = 0, which yields λ = 2, −3. An eigenvector for λ = 2 is (1, 1)^T and an eigenvector for λ = −3 is (1, −4)^T. Hence the general solution is
x(t) = a e^{2t} (1, 1)^T + b e^{−3t} (1, −4)^T,
where a and b are arbitrary constants (to take account of the fact that eigenvectors are not unique).
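For a 2 × 2 matrix the whole calculation fits in a few lines: the eigenvalues solve λ² − (p + s)λ + (ps − qr) = 0, and (when q ≠ 0) an eigenvector for λ is (q, λ − p)^T, since the second row of (A − λI)v = 0 then follows from the characteristic equation. A sketch (ours) verifying Example 4.1:

```python
import math

p, q, r, s = 1.0, 1.0, 4.0, -2.0   # the matrix A from Example 4.1

# Characteristic polynomial: lam^2 - (trace) lam + (determinant) = 0.
tr, det = p + s, p * s - q * r
disc = tr * tr - 4 * det
lam1 = (tr + math.sqrt(disc)) / 2
lam2 = (tr - math.sqrt(disc)) / 2
print(lam1, lam2)  # 2.0 -3.0

def eigenvector(lam):
    """Solve (A - lam I) v = 0; for q != 0 we may take v = (q, lam - p)."""
    return (q, lam - p)

for lam in (lam1, lam2):
    vx, vy = eigenvector(lam)
    assert abs(p * vx + q * vy - lam * vx) < 1e-9  # first row of A v = lam v
    assert abs(r * vx + s * vy - lam * vy) < 1e-9  # second row of A v = lam v
```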
2. Complex eigenvalues It can be shown that, for a real matrix, eigenvalues (and their corresponding eigenvectors) occur in complex conjugate pairs; so if λ = p + iq is an eigenvalue with corresponding eigenvector v = v1 + iv2, then so is λ̄ = p − iq with v̄ = v1 − iv2. Hence, just as in the two real roots case, we can write the solution as x(t) = ce^{λt}v + c̄e^{λ̄t}v̄, which can be rearranged to
x(t) = e^{pt}[(a cos qt + b sin qt)v1 + (b cos qt − a sin qt)v2].
Just as in the two real eigenvalue case, the real part of the eigenvalues determines the stability of the fixed point at the origin; if p = Re(λ) < 0 then (0, 0) is stable, but if p = Re(λ) > 0 then (0, 0) is unstable. The imaginary part, which gives rise to the [(a cos qt + b sin qt)v1 + (b cos qt − a sin qt)v2] part, makes the solutions spiral; the sign of q decides the direction in which it spirals (positive is clockwise, negative is anticlockwise). We can see this by diagonalising A using⁴ P = (v1 | v2), to get P⁻¹AP = [ p q ; −q p ], and then changing to polar coordinates, which decouples the system to ṙ = pr, θ̇ = −q, which has solution r(t) = ae^{pt}, θ(t) = −qt + c.
3. Repeated real eigenvalues When we only have one eigenvalue λ and one corresponding eigenvector⁵ v, then we only have one solution e^{λt}v. Trying a more general solution x(t) = e^{λt}a + te^{λt}b for constant vectors a, b yields the equations (A − λI)a = b and (A − λI)b = 0. So we take b = v, our eigenvector, and then find⁶ some u such that (A − λI)u = v, giving us the solution
x(t) = Ae^{λt}v + B(e^{λt}u + te^{λt}v).
Phase Diagrams In each of the cases mentioned above, we may draw phase portraits. To construct a phase portrait for a system of coupled equations we must rewrite the system in the form ẋ = Ax. Once we have the matrix A we can find its eigenvalues and eigenvectors as above. If we have real eigenvalues then we plot their eigenvectors as lines in the (x, y)-plane. If the corresponding eigenvalue is negative then we draw arrows on the line towards the origin (stable direction); if it is positive we draw the arrows away (unstable direction). Then we ‘fill in’ the space between the eigenvectors. Notice that the sign of the eigenvalues determines the direction of the arrows on the eigenvectors, and the magnitude determines the size of the arrows. Below are some examples⁷.
Now if the eigenvalues are complex then they have the form λ± = α ± iβ where α, β ∈ R. Then we need only concern ourselves with the real part α: if α > 0 then the origin is unstable; if α < 0 then the origin is stable. Here are some examples⁸ of how to draw phase portraits depending on the real part of the complex eigenvalues.
Second-Order Differential Equations Given a second-order ODE with constant coefficients, such as aẍ + bẋ + cx = 0, we can set y = ẋ and rewrite this as the coupled system
ẋ = y
ẏ = −(c/a)x − (b/a)y,
or, in matrix form, ẋ = [ 0 1 ; −c/a −b/a ] x.
The eigenvalues of this system solve aλ² + bλ + c = 0, i.e. the auxiliary equation!
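We can confirm this equivalence for sample coefficients (an illustrative sketch of ours): the companion matrix [ 0 1 ; −c/a −b/a ] has trace −b/a and determinant c/a, so its eigenvalues satisfy exactly the auxiliary equation.

```python
import math

a, b, c = 1.0, 1.0, -6.0   # sample coefficients for a x'' + b x' + c x = 0

# Roots of the auxiliary equation a k^2 + b k + c = 0.
disc = b * b - 4 * a * c
k1 = (-b + math.sqrt(disc)) / (2 * a)
k2 = (-b - math.sqrt(disc)) / (2 * a)

# Eigenvalues of [[0, 1], [-c/a, -b/a]]: trace -b/a, determinant c/a,
# so they solve lam^2 + (b/a) lam + c/a = 0 -- the auxiliary equation again.
tr, det = -b / a, c / a
lam1 = (tr + math.sqrt(tr * tr - 4 * det)) / 2
lam2 = (tr - math.sqrt(tr * tr - 4 * det)) / 2

print(sorted([k1, k2]), sorted([lam1, lam2]))  # [-3.0, 2.0] twice
assert sorted([k1, k2]) == sorted([lam1, lam2])
```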
⁷These examples were created using the Wolfram Demonstrations Project, which I suggest you all go out and try: http://demonstrations.wolfram.com/PhasePortraitAndFieldDirectionsOfTwoDimensionalLinearSystems/
⁸Again, these were made with the Wolfram Demonstrations Project.
ẋ = −x − 2x²y + y
ẏ = −x − y
We can plot at each point (x, y) an arrow which represents the value of ẋ = (ẋ, ẏ): the length of the arrow is proportional to the magnitude of ẋ and it points in the direction (ẋ, ẏ). Having drawn the arrows at each point, we can consider different starting positions as initial conditions; from a particular (x0, y0), we can follow the path of arrows tangentially to see how that particular initial condition behaves. The direction field for the example above is given below.
[Figure: direction field for the system above, plotted on −1 ≤ x, y ≤ 1.]
Nonlinear 2 × 2 Systems (Most of this section is preparation for MA134 Geometry and Motion, and is unlikely to be examined.) We can show that, near a fixed point x* (a point where ẋ = 0), we can approximate a nonlinear system
dx/dt = f1(x, y)
dy/dt = f2(x, y)
using the Jacobian matrix
Df(x*) = [ ∂f1/∂x(x*)  ∂f1/∂y(x*) ; ∂f2/∂x(x*)  ∂f2/∂y(x*) ].
That is, close to the fixed point x*, the solutions behave like those of the linear system ẋ = Df(x*)(x − x*). This allows us to determine the stability of fixed points, and hence to sketch the phase portrait.
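As a sketch of this recipe (ours, standard library only), we can linearise the earlier direction-field example ẋ = −x − 2x²y + y, ẏ = −x − y at the origin, approximating the Jacobian entries by central finite differences:

```python
def f1(x, y): return -x - 2 * x**2 * y + y
def f2(x, y): return -x - y

def jacobian(x, y, h=1e-6):
    """Approximate Df at (x, y) by central finite differences."""
    return [[(f1(x + h, y) - f1(x - h, y)) / (2 * h),
             (f1(x, y + h) - f1(x, y - h)) / (2 * h)],
            [(f2(x + h, y) - f2(x - h, y)) / (2 * h),
             (f2(x, y + h) - f2(x, y - h)) / (2 * h)]]

J = jacobian(0.0, 0.0)                        # at the fixed point (0, 0)
tr = J[0][0] + J[1][1]
det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
print(J, tr, det)
# By hand, Df(0, 0) = [[-1, 1], [-1, -1]], so trace = -2 and det = 2; the
# eigenvalues -1 ± i have negative real part, making (0, 0) a stable spiral.
assert abs(tr + 2) < 1e-6 and abs(det - 2) < 1e-6
```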
Closing Remarks
As you can see, there’s not all that much material to Differential Equations, but there is scope for getting
a bit confused, particularly due to all the different cases for solutions. Memorise them as best you can
and be prepared to handle any differential equation that’s thrown at you. Books like James Robinson’s
excellent An Introduction to Ordinary Differential Equations and William Boyce and Richard DiPrima’s
classic Elementary Differential Equations and Boundary Value Problems have literally hundreds of ques-
tions for those wanting practice, and practising lots of questions is the only way to do well. So practise,
practise, practise, and good luck in the exam!