DIFFERENTIAL EQUATIONS
Erin P. J. Pearse
These notes follow Differential Equations and Boundary Value Problems (4ed) by C. H.
Edwards and D. E. Penney, but also include material borrowed freely from other sources.
This document is not to be used for any commercial purpose.
DEs describe how deterministic systems change; physical laws are typically expressed as DEs.
Goals of this course:
Differential equations are typically solved via integration, as these operations are inverses
of each other. Although it may be difficult to integrate an expression, it is always easy to
check your result when you are finished — and the same is true for differential equations.
Example 1.1.2. Check that y(x) = Ce^{x^2} solves the ODE dy/dx = 2xy.
Substituting the given solution into the left side gives
dy/dx = (d/dx)(Ce^{x^2}) = Ce^{x^2} · (d/dx)(x^2) = Ce^{x^2} · 2x = 2x · y.
2 First-Order Differential Equations
To find the solution in the first place, we can use separation of variables (see §1.4):
dy/dx = 2xy
dy/y = 2x dx
∫ dy/y = ∫ 2x dx
ln|y| + C1 = x^2 + C2
ln|y| = x^2 + C3,   C3 := C2 − C1
|y| = e^{x^2 + C3}
y = Ce^{x^2},   C = e^{C3}.
Things to note:
• The presence of C in the solution indicates that you get an entire 1-parameter family
of solutions.
Caution! From now on, we abuse notation and let C have different meanings
from step to step.
For example, the above derivation would be written:
dy/dx = 2xy  ↦  ln y = x^2 + C  ↦  y = e^{x^2 + C}  ↦  y = Ce^{x^2}.
dy/dx = 2xy,   y(1) = 3.
1.1 Differential equations and mathematical models 3
3 = y(1) = Ce^{1^2} = Ce  =⇒  C = 3/e  =⇒  y(x) = (3/e)e^{x^2} = 3e^{x^2 − 1}.
So the (particular) solution to the IVP is y_p(x) = 3e^{x^2 − 1}.
Example 1.1.5 (Exercise #47). Verify that y(x) = 1/(c − x) is a soln of dy/dx = y^2 and solve the IVP
dy/dx = y^2,   y(1) = 2.
To verify:
dy/dx = (d/dx)(c − x)^{−1} = −(c − x)^{−2} · (d/dx)(c − x) = (1/(c − x))^2 = y^2.
Then
2 = y(1) = 1/(c − 1)  =⇒  c − 1 = 1/2  =⇒  c = 3/2,
so y_p(x) = 1/(3/2 − x) = 2/(3 − 2x) solves the IVP on I = (−∞, 3/2).
Definition 1.1.6. The order of an ODE is the order of the highest derivative in it. For
example,
e^x y'' − y + 1 = x^4 is 2nd-order, and
(y'')^3 − y'''' + y' = y/x is 4th-order.
An nth-order ODE can always be written in general form: F(x, y, y', y'', . . . , y^{(n)}) = 0.
Example 1.1.9 (Exercise #30). Suppose g is a function whose graph is normal to every
curve y = x2 + k, wherever they meet. Write a DE for y = g(x).
g ⊥ x^2 + k  ⟺  g'(x) = −1/((d/dx)(x^2 + k))  ⟺  y' = −1/(2x).
Any 1st-order ODE has a geometric interpretation like this: it is called a slope field and
we discuss it in §1.3.
Example 1.1.10 (Newton’s law of cooling). The temperature f (t) of a body at time t
changes at a rate proportional to the difference with the ambient temp:
df/dt = −k(f − a),   (1.1.1)
Such a differential equation (1.1.1) is obtained from experiments. Then the methods
developed in this course will show you how to obtain the solution
f(t) = Ce^{−kt} + a.   (1.1.2)
[Plot: solution curves f(t) for 0 ≤ t ≤ 5, temperatures between 70 and 76, approaching the ambient temperature.]
To verify the solution (1.1.2) given above, you need only check that it satisfies (1.1.1).
So we differentiate (1.1.2):
df/dt = (d/dt)(Ce^{−kt} + a) = C (d/dt)(e^{−kt}) + (d/dt)(a) = Ce^{−kt}(−k) = −k(f − a).
Example 1.1.11. Population dynamics: start with 1000 bacteria in a petri dish. Want
to know population at time t.
1. Observe that the population changes at a rate proportional to current size. Assuming
birth/death rate remains constant and space/food req’s are ignored, get
dP/dt = kP,   k > 0.
P(t) = 1000e^{kt}.
You should verify that this is a solution. Note also that there is an initial condition
to check: P (0) = P0 = 1000.
3. Suppose that you determine that no more than m bacteria can live in the dish; then
this solution is unreasonable for large t. Suppose you discover that for P ≈ m, the
growth rate is also proportional to the amount of remaining room: m − P . The
revised model is
dP/dt = kP(m − P),   k > 0, m > 0.
P(t) = mP0 / (P0 + (m − P0)e^{−kmt}),
This is true regardless of whether m > P0 or m < P0 , and this is reflected in the
graph (m = 2000 here).
[Plot: solution curves for 0 ≤ t ≤ 100, approaching the limiting population m = 2000 from above and from below.]
You may have noticed that almost all the examples so far have involved exponential
functions. There is a good reason for this.
In linear algebra,
Au = [a b; c d][u1; u2] = [a u1 + b u2; c u1 + d u2] = mess,
Many things about A are easier to understand if you first decompose u in terms of the
eigenvectors of A, for example: A4 , A−1 .
For ODEs, e^{kx} is an eigenvector of d/dx, for the eigenvalue λ = k:
(d/dx) e^{kx} = k e^{kx}.
So exponentials will be very useful, and you should learn to recognize immediately that
y' = ky has the solution y = e^{kx}.
We'll discuss Euler's formula e^{ix} = cos x + i sin x later in the course. (Here i := √−1.)
This close relationship between the exponential function and the trigonometric functions is
the reason why cos x and sin x will also appear very often. Just as e^{kx} is an eigenvector (or “eigenfunction”) of the differential operator d/dx, the trig functions cos x and sin x are eigenfunctions of −(d/dx)^2:
−(d^2/dx^2)(cos kx) = −(−k)(d/dx)(sin kx) = k^2 cos kx,
but now with eigenvalue λ = k^2. So trig functions will also be very useful, and you should
learn to recognize immediately that y'' + k^2 y = 0 has solutions y = c1 cos kx + c2 sin kx.
This follows from the fact that d/dx is a linear operator:
(d/dx)(a y1 + b y2) = a (dy1/dx) + b (dy2/dx).
You learned this in the second week of your first calculus course as the “constant rule” and
“sum rule” for derivatives.
Example 1.2.1. Solve the IVP dy/dx = 2x + 3, y(1) = 2.
The right side of the ODE depends only on x, so use direct integration to find the general solution:
y(x) = ∫(2x + 3) dx = x^2 + 3x + C.
2 = y(1) = 1^2 + 3 · 1 + C  =⇒  C = 2 − 4 = −2  =⇒  y_p(x) = x^2 + 3x − 2.
d^2y/dx^2 = 2x + 3,   y(1) = 2,   y'(1) = 1.
The right side of the ODE depends only on x, so use direct integration again to find the
general solution. To see that nothing changes for the 2nd-order case, let v := y' so v' = y''
and the ODE can be rewritten
dv/dx = 2x + 3.
1.2 Integrals as general and particular solutions 9
y(x) = x^3/3 + (3/2)x^2 + C1 x + C2.
So this general solution depends on the two parameters C1 and C2 (which can vary
separately). To find the particular solution,
2 = 1^3/3 + (3/2) · 1^2 + C1 · 1 + C2   and   1 = 1^2 + 3 · 1 + C1
2 = 11/6 − 3 + C2   (using C1 = −3)
C2 = 19/6
So the particular solution is y_p(x) = x^3/3 + (3/2)x^2 − 3x + 19/6.
dv/dt = a  =⇒  v(t) = at + C1
dx/dt = at + C1  =⇒  x(t) = at^2/2 + C1 t + C2.
v0 = v(0) = a · 0 + C1  =⇒  C1 = v0, and
x0 = x(0) = a · 0^2/2 + C1 · 0 + C2  =⇒  C2 = x0,
so the particular solution is x_p(t) = at^2/2 + v0 t + x0.
Soft landing means v = 0 when x = 0, where x(t) is height. Let t = 0 be when the rockets
are fired, so v(t) = 2.5t − 450, which vanishes at t = 180.
So the lander is at a temporary standstill at time t = 180 seconds after firing. Want this to
occur at the surface (x(t) = 0), so
0 = x(180) = (2.5)(180)^2/2 − 450 · 180 + x0  =⇒  x0 = 40500.
So fire rockets at 40.5km from surface for a soft landing 3min later.
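The arithmetic is easy to verify in Python (a sketch using the numbers above: deceleration 2.5 m/s², initial downward speed 450 m/s, firing height 40500 m):

```python
a, v0, x0 = 2.5, -450.0, 40500.0   # deceleration, initial velocity, firing height

def v(t):
    # velocity after the rockets fire
    return a * t + v0

def x(t):
    # height above the surface
    return a * t**2 / 2 + v0 * t + x0

t_stop = -v0 / a   # time of the momentary standstill
print(t_stop, x(t_stop))   # standstill occurs exactly at the surface
```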
Example 1.2.4. What is the maximum height of a ball launched straight up at v0 = 138
from the ground?
Max height occurs when v changes sign from positive to negative (i.e., when v = 0), which happens at t = 138/9.8 ≈ 14.08, so
x(14.08) ≈ −9.8(14.08)^2/2 + 138(14.08) + 0 ≈ 971.6.
1.2.2 Currents
A river (or other conduit) with radius b (so width is 2b), where water flows fastest down
the middle at rate v0. If the river is [−b, b] (so 0 is the middle), then the river velocity is:
vR(x) = v0 (1 − x^2/b^2).
If a swimmer crosses at constant speed vS, the downstream drift y(x) satisfies
dy/dx = tan α = vR/vS = (v0/vS)(1 − x^2/b^2).
[Diagram: velocity triangle with swimmer velocity vS across, river velocity vR downstream, and total velocity v at angle α.]
Example 1.2.5. For a river 200m wide with midstream velocity v0 = 9km/h and swimmer
velocity vS = 3km/h,
dy/dx = (9/3)(1 − x^2/(0.1)^2) = 3 − 300x^2  =⇒  y(x) = 3x − 100x^3 + C
0 = y(−0.1) = 3(−0.1) − 100(−0.1)^3 + C  =⇒  C = 1/5
So when you are at x ∈ [−b, b], you have been washed downstream a distance of
y(x) = 3x − 100x^3 + 1/5.
m = rise/run = f(x, y)/1 = f(x, y),
so a slope field is really just assigning a slope to each point in the plane.
Example 1.3.2. dy/dx = (1/2)y
[Slope field on [−3, 3] × [−3, 3] with exponential solution curves.]
(Mathematica: Show[VectorPlot[{1, y/2}, {x, −3, 3}, {y, −3, 3}, VectorStyle → Arrowheads[0]], Plot[Table[(k/2) E^(x/2), {k, −6, 6}], {x, −3, 3}]])
Example 1.3.3. dy/dx = x − y
[Slope field on [−3, 3] × [−3, 3].]
It should be clear from these examples that a slope field is just a vector field where the
horizontal component of each vector is the same.
Definition 1.3.4. For a given ODE, integral curves (or solution curves) are curves which
are tangent to the specified slope at every point.
1.3 Slope fields and solution curves 13
These curves are the trajectories of particular solutions to the ODE, and this is the
starting point for finding numerical solutions to the IVP
dy/dx = f(x, y),   y(x0) = y0.
(i) Start at (x0 , y0 ) and move one increment in the direction (1, f (x0 , y0 )).
(ii) Call this new location (x1 , y1 ) and move one increment in the direction (1, f (x1 , y1 )).
(iii) Repeat.
This will trace out an approximate integral curve and yield a numerical solution to the
IVP. (Note: in practice, there are much better methods than this elementary approach.)
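The three steps above translate directly into code. A minimal Python sketch (the test equation dy/dx = x − y, the initial condition, and the step size are arbitrary choices; the exact solution through (0, 1) is y = x − 1 + 2e^{−x}):

```python
import math

def euler(f, x0, y0, h, steps):
    # trace an approximate integral curve by repeatedly moving
    # one increment h in the direction (1, f(x, y))
    for _ in range(steps):
        y0 += h * f(x0, y0)
        x0 += h
    return x0, y0

# dy/dx = x - y, y(0) = 1; exact solution y(x) = x - 1 + 2e^(-x)
x1, y1 = euler(lambda x, y: x - y, 0.0, 1.0, 0.001, 1000)
exact = 1.0 - 1.0 + 2 * math.exp(-1.0)
print(y1, exact)
```

With 1000 steps of size 0.001 the approximation at x = 1 agrees with the exact value to about three decimal places, consistent with Euler's method being first-order accurate.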
[Slope field for dy/dx = x + y with an approximate integral curve marked by points.]
Mathematica:
Show [VectorPlot[{1, x + y}, {x, −3, 3}, {y, −3, 3}, VectorStyle → Arrowheads[0]],
Plot [Table [−1 − x + kex , {k, −5, 5}] , {x, −3, 3}] , ListPlot[{{0, 0}, {−1, 0}, {−2, 1}, {−3, 2}, {−4, 3}}, PlotStyle → {Red, PointSize[Large]}]]
Theorem 1.3.6. Suppose f(x, y) and (∂f/∂y)(x, y) are continuous on a rectangle (or disk)
centered at (a, b). Then for some open interval I containing a, the IVP
dy/dx = f(x, y),   y(a) = b
has a unique solution on I.
Example 1.3.7. dy/dx = 2y.
Since f = 2y and ∂f/∂y = 2 are continuous on all of R^2, the theorem guarantees a unique
solution somewhere. (In fact, y(x) = Ce^{2x} is ok everywhere.)
Example 1.3.8. dy/dx = y^2, y(0) = 1.
Since f = y^2 and ∂f/∂y = 2y are continuous on all of R^2, the theorem guarantees a unique
solution somewhere. In fact, y(x) = 1/(1 − x), which only works on I = (−∞, 1).
Example 1.3.9. dy/dx = 2√y.
Since ∂f/∂y = 1/√y has a discontinuity at y = 0, all bets are off. This is why there can be the
two solutions mentioned above.
Example 1.3.10 (§1.1, #48). x dy/dx = 4y.
Since f = 4y/x and ∂f/∂y = 4/x are continuous wherever x ≠ 0, the theorem guarantees a unique
solution near any point where x ≠ 0. After passing through a point on the y-axis, however,
all bets are off.
The general solution of this ODE is y(x) = Cx^4. Suppose x0 = 0.
(i) If y0 ≠ 0, then y0 = y(0) = C · 0^4 = 0 is impossible, so the IVP has no solution.
(ii) If y0 = 0, then 0 = y(0) = C · 0^4 holds for any C; thus the IC fails to determine C.
[Slope field and solution curves y = Cx^4; every solution curve passes through the origin.]
HW §1.3: #1, 3, 5, 6, 7, 8, 10, 13, 15, 22, 23, 29, §1.1 #48
Except for 13, 15, and 29, you can do all these with DFIELD (see p.31):
math.rice.edu/~dfield/dfpp.html
1.4 Separable equations 15
When the right side is factored like this, the variables can be separated so that all ys
appear on the left and all xs appear on the right:
y' = g(x)h(y)  ↦  y'/h(y) = g(x).
Letting u be an antiderivative of 1/h (so u' = 1/h) and using the chain rule backwards,
u'(y)y' = g(x)
(d/dx)(u(y(x))) = g(x)
∫ (d/dx)(u(y(x))) dx = ∫ g(x) dx
u(y(x)) = ∫ g(x) dx.
At this point, one typically tries to solve for y. This is not always possible, in which case,
you just leave it in this form.
Many use the following “nonsense mnemonic” which sweeps the chain rule under the
rug. It does not make sense, strictly speaking, as it relies on abuse of notation. But it is
simpler and works out in the end:
dy/dx = g(x)h(y)  ↦  dy/h(y) = g(x) dx  ↦  ∫ dy/h(y) = ∫ g(x) dx
Example 1.4.2. dy/dx = −6xy, y(0) = 7.
This ODE is separable:
dy/y = −6x dx  ↦  ∫ dy/y = −3 ∫ 2x dx
ln|y| = −3x^2 + C
y = Ce^{−3x^2}.
7 = y(0) = Ce^0 = C  ↦  y(x) = 7e^{−3x^2}.
Example 1.4.3. dy/dx = (4 − 2x)/(3y^2 − 5), y(1) = 3. This ODE is separable:
(3y^2 − 5) dy = (4 − 2x) dx  ↦  ∫(3y^2 − 5) dy = ∫(4 − 2x) dx
y^3 − 5y = 4x − x^2 + C.
This equation is not readily solved for y, so leave it in this form and continue with the IC:
27 − 15 = 4 − 1 + C  =⇒  C = 9,
[Plot: implicit solution curves of y^3 − 5y = 4x − x^2 + C.]
Example 1.4.4 (Population P (t)). If β ≥ 0 is birth rate and δ ≥ 0 is death rate (both
constant) in # per organism per time. During a short time ∆t, have βP (t)∆t births and
δP (t)∆t deaths, so
ΔP ≈ (β − δ)P(t)Δt  =⇒  lim_{Δt→0} ΔP/Δt = (β − δ)P(t)  =⇒  dP/dt = kP,
for k = β − δ.
Example 1.4.5 (Compound interest A(t)). If A(t) is the amount in a fund earning r
annually (continuously compounded), then during any short time period of length ∆t, you
have ∆A = rA(t)∆t interest added to the balance, and
dA/dt = lim_{Δt→0} ΔA/Δt = rA.
(1/2)N0 = N0 e^{−5700δ}  =⇒  δ = ln 2 / 5700 ≈ 0.0001216.
N = 0.63N0 = N0 e^{−(ln 2/5700)t}  =⇒  t = −(5700/ln 2) ln 0.63 ≈ 3800.
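Assuming the 5700-year half-life used above, both numbers can be reproduced in a couple of lines of Python:

```python
import math

half_life = 5700.0                # years
delta = math.log(2) / half_life   # decay constant from (1/2)N0 = N0*e^(-5700*delta)
t = -math.log(0.63) / delta       # time at which N has dropped to 0.63*N0
print(delta, t)
```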
Suppose you are draining a tank containing volume V (t) of water at time t, and the level
(depth) of water in the tank is y(t). The drain hole has an area of a. The velocity of the
water particles exiting through the drain is v = c√(2gy), where c ∈ [0, 1] is an empirical
constant. We want an ODE for y(t). For simplicity, take c = 1. Then
dV/dt = −av = −a√(2gy) = −ky^{1/2},   for k = a√(2g).
This ODE relates the composite functions V = V (y(t)) and y = y(t). To get this into a
solvable form (all in terms of y), we use the chain rule to simplify:
dV/dt = (dV/dy)(dy/dt).
Since dV/dy = A(y), the cross-sectional area of the tank at height y,
−a√(2gy) = dV/dt = (dV/dy)(dy/dt) = A(y) dy/dt  =⇒  A(y) dy/dt = −a√(2gy).
Example 1.4.8. A spherical tank with radius 48 is half full at time t0 = 0. A circular
drain at the bottom with diameter 2 is opened. How long until empty?
The cross-sectional area is A(y) = πr^2 = π(2304 − (48 − y)^2) = π(96y − y^2), so Torricelli:
A(y) dy/dt = −a√(2gy)
π(96y − y^2) dy/dt = −π · 1^2 · √(2(9.8)) y^{1/2}
∫(96y^{1/2} − y^{3/2}) dy = −∫√19.6 dt
64y^{3/2} − (2/5)y^{5/2} = −√19.6 t + C
IC: y(0) = 48 gives C = 64(48)^{3/2} − (2/5)(48)^{5/2} ≈ 14898.4.
The tank is empty when y = 0:
0 = −√19.6 t + 14898.4  =⇒  t ≈ 3365 seconds ≈ 56:05.
Example 1.4.9 (#64). A 12-hour clepsydra (water clock) is designed to be 4ft tall, have
circular cross-sections, with radius monotonically increasing to 1ft (at the top). Find the
curve y = f (x) and the radius r of the circular drain at the bottom, so that the water level
falls at the constant rate of 4inches/hour.
From the statement of the problem, the height of the water satisfies
y(t) = 48 − 4t  =⇒  dy/dt = −4.
Let R(y) be the radius of the tank at height y. Then Torricelli's law gives
πR(y)^2 (−4) = −πr^2 √(2gy)  =⇒  R(y)^2 = (r^2/4)√(2gy)  =⇒  R(y) = (r/2)(2gy)^{1/4}
12 = R(48) = (r/2)(2g · 48)^{1/4}  =⇒  r = 24/(96g)^{1/4}  =⇒  R(y) = 2(27y)^{1/4},  or  y(x) = x^4/432.
HW §1.4: #47, 51, 59, 64, 65, Ap 1.4, §2.1: #21, 23, 24, 26
1.5 Linear equations 19
a1 x1 + a2 x2 + · · · + an xn ,
a1 x1 + a2 x2 + · · · + an xn = c.
fn(x)y^{(n)} + · · · + f2(x)y'' + f1(x)y' + g(x)y = h(x)   nth-order linear ODE
A linear ODE cannot involve nonlinear functions of y, such as 1/y, e^y, or sin y.
So for example, y dy/dx = 3 is nonlinear because it can be written dy/dx = 3 · (1/y).
Definition 1.5.3. A singular solution of an ODE is a particular solution which does not
come from the general solution.
Example 1.5.4 (§1.1, #47). dy/dx = y^2, y(0) = 0.
The general solution of the ODE is y(x) = 1/(C − x), as can easily be checked. However,
0 = y(0) = 1/(C − 0) = 1/C
has no solution. Nonetheless, one can see that y ≡ 0 satisfies the ODE. This function is a
singular solution.
[Plot: solution curves y = 1/(C − x) together with the singular solution y ≡ 0.]
Singular solutions can only arise in nonlinear ODEs; the appearance of y 2 in the
previous example is responsible for the presence of a singular solution.
Example 1.5.5. dy/dx = 6x(y − 1)^{2/3}.
This ODE is nonlinear because there is a y term with exponent ≠ 1. However, it is
separable:
(y − 1)^{−2/3} dy = 6x dx
(1/3)∫(y − 1)^{−2/3} dy = ∫ 2x dx
(y − 1)^{1/3} = x^2 + C
One can also see that y ≡ 1 satisfies the ODE. This function is a singular solution. Note
that if we write dy/dx = 6x(y − 1)^{2/3} = f(x, y), then
∂f/∂y = 4x/(y − 1)^{1/3},
which is discontinuous at y = 1.
[Plot: solution curves y = 1 + (x^2 + C)^3 together with the singular solution y ≡ 1.]
Generally, nonlinear equations are much harder, and so this course will focus on linear
ODEs, with a few exceptions in §1.6.
dy/dx + P(x)y = Q(x).
dy/dx + P(x)y = Q(x),   y(x0) = y0
e^{∫P(x)dx} dy/dx + e^{∫P(x)dx} P(x)y = e^{∫P(x)dx} Q(x)
(d/dx)[y(x) e^{∫P(x)dx}] = e^{∫P(x)dx} Q(x)
y(x) e^{∫P(x)dx} = ∫ e^{∫P(x)dx} Q(x) dx
y(x) = e^{−∫P(x)dx} ∫ e^{∫P(x)dx} Q(x) dx.   (1.5.1)
This is valid whenever P, Q are integrable (over any finite interval), which is ensured by
the fact that they are continuous. So the unique solution is given by (1.5.1).
Method of solution: Rather than memorizing the formula (1.5.1), you should go through
the following steps every time:
(1) Write the ODE in the standard form dy/dx + P(x)y = Q(x).
(2) Compute the integrating factor ρ(x) = e^{∫P(x)dx} (don't forget the e).
(3) Multiply both sides of the equation by ρ(x).
(4) Recognize the left side as (d/dx)[y(x)ρ(x)]. (This is a good double-check.)
(5) Integrate the equation to get y(x)ρ(x) = ∫ ρ(x)Q(x) dx + C.
(6) Solve for y(x).
Example 1.5.7. (x^2 + 1) dy/dx + 3xy = 6x.
(1) dy/dx + (3x/(x^2 + 1)) y = 6x/(x^2 + 1).
(2) ∫ 3x/(x^2 + 1) dx = (3/2)∫ du/u = (3/2) ln(x^2 + 1), so ρ(x) = e^{(3/2) ln(x^2+1)} = (x^2 + 1)^{3/2}
(3) (x^2 + 1)^{3/2} dy/dx + 3x(x^2 + 1)^{1/2} y = 6x(x^2 + 1)^{1/2}
(4) (d/dx)[y(x)(x^2 + 1)^{3/2}] = 6x(x^2 + 1)^{1/2}
(5) y(x)(x^2 + 1)^{3/2} = ∫ 3√(x^2 + 1) · 2x dx = 3∫ u^{1/2} du = 3 · (2/3)u^{3/2} = 2(x^2 + 1)^{3/2} + C
(6) y(x) = 2 + C/(x^2 + 1)^{3/2}
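A numerical spot-check of step (6) in Python (C = 2 and the sample points are arbitrary choices): plug the general solution back into the original equation and confirm the residual vanishes.

```python
C = 2.0  # arbitrary sample constant

def y(x):
    # the general solution from step (6)
    return 2 + C / (x**2 + 1) ** 1.5

def residual(x, h=1e-6):
    # (x^2+1)*y' + 3xy - 6x, with y' estimated by central differences
    dy = (y(x + h) - y(x - h)) / (2 * h)
    return (x**2 + 1) * dy + 3 * x * y(x) - 6 * x

print(max(abs(residual(x)) for x in (-2.0, -0.5, 0.0, 1.0, 3.0)))
```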
Why does the method work? Consider it in reverse: note that dρ/dx = e^{∫P dx} (d/dx)(∫P dx) = ρ(x)P(x), so
(d/dx)[y(x)ρ(x)] = ρ(x)Q(x)
ρ(x) dy/dx + ρ(x)P(x)y(x) = ρ(x)Q(x)
y' + P(x)y = Q(x).
This is like coming across a derivative problem where someone has simplified their answer
by cancelling something from both sides. In order to solve the problem, you first have to
restore the cancelled factor ρ(x).
A solution flows into a tank at a constant rate ri and out at a constant rate ro . “Solution”
means there is some amount of solute (salt?) per unit of volume of fluid. Both of these
can change over time (mixing is instant).
V (t) is constant iff ri = ro . Otherwise, V (t) changes linearly (i.e., at a constant rate),
since ri , ro are assumed to be constants.
NOTE: ri , ro describe change in solution volume, not solute quantity!
dx/dt = ri ci − ro co = ri ci − ro x(t)/V(t),
dx/dt + (ro/V(t)) x = ri ci,   P(t) = ro/V(t),  Q(t) = ri ci.
Method of solution:
1. Use the given data to find P (t) and Q(t). For P (t), this usually means finding V (t)
first.
Example 1.5.8. Erie has volume 480km3 , and 350km3 /yr flows in from Huron and out
through Ontario. Erie has pollutant levels 5x that of Huron. How long until it is only 2x?
ci = c = concentration in Huron
x0 = x(0) = 5cV.
(1) dx/dt = rc − r x/V  =⇒  dx/dt + (r/V)x = rc. Here, r/V = 350/480 and rc = 350c.
(2) ∫P dt = ∫(r/V) dt = (r/V)t  =⇒  ρ(t) = e^{(r/V)t}
(3) e^{(r/V)t} dx/dt + e^{(r/V)t} (r/V) x = e^{(r/V)t} rc
(4) (d/dt)[x(t) e^{(r/V)t}] = e^{(r/V)t} rc
(5) x(t) e^{(r/V)t} = ∫ e^{(r/V)t} rc dt = (rc/(r/V)) e^{(r/V)t} + C = cV e^{(r/V)t} + C
(6) x(t) = cV + Ce^{−(r/V)t}
5cV = x(0) = cV + C  =⇒  C = 4cV  =⇒  x(t) = cV + 4cV e^{−(r/V)t}
Solve for the time when x(t) = 2cV:
2cV = cV + 4cV e^{−(r/V)t}
1/4 = e^{−(r/V)t}
t = (V/r) ln 4 = (480/350) ln 4 ≈ 1.901 yrs.
Example 1.5.9. A 120-gallon tank contains 90 lbs of salt dissolved in 90 gal of water.
Brine containing 2lb/gal salt flows in at 4gal/min, and the tank drains at 3gal/min. How
much salt is in the tank when it is full?
Given: V(t) = V0 + (ri − ro)t = 90 + (4 − 3)t = 90 + t
ci = 2,   co = x(t)/V(t) = x(t)/(90 + t)
x0 = x(0) = 90
(1) dx/dt = ri ci − ro co = 4 · 2 − 3x(t)/(90 + t), so dx/dt + (3/(90 + t))x = 8.
(2) ∫P dt = ∫ 3/(90 + t) dt = 3 ln|90 + t|, so ρ(t) = e^{3 ln|90+t|} = (90 + t)^3.
(4) (d/dt)[(90 + t)^3 x] = 8(90 + t)^3
(6) x(t) = 2(90 + t) + C/(90 + t)^3
90 = x(0) = 2(90 + 0) + C/(90 + 0)^3  =⇒  −90 = C/90^3  =⇒  C = −90^4
x(t) = 2(90 + t) − 90^4/(90 + t)^3
The tank is full when V(t) = 90 + t = 120, i.e., at t = 30:
x(30) = 2(120) − 90^4/120^3 = 240 − 37.97 ≈ 202 lbs of salt.
(1) Put z = z(y) = y' = dy/dx so that y'' = dz/dx = (dz/dy)(dy/dx) = z'z and the ODE becomes
F(y, z, zz') = 0.
(2) If you can solve this for z, then integrate dx/dy = 1/z to get
x(y) = ∫ (dx/dy) dy = ∫ (1/(dy/dx)) dy = ∫ (1/z) dy.
=⇒ These are the only methods we’ve seen so far for 2nd-order ODE, so anything
with a y 00 is going to need one of these two methods.
Also, recall that solving a 2nd-order involves 2 integrations, so there will be 2 constants
(parameters) appearing.
With z = y', the ODE xy'' + 2y' = 6x becomes
xz' + 2z = 6x  =⇒  dz/dx + (2/x)z = 6.
1.6 Substitution methods and exact equations 27
This is linear, with P(x) = 2/x and Q(x) = 6, so
ρ(x) = e^{∫(2/x)dx} = e^{2 ln|x|} = x^2
x^2 dz/dx + 2xz = 6x^2
(d/dx)[x^2 z] = 6x^2
x^2 z = ∫ 6x^2 dx = 2x^3 + A,
so z = y' = 2x + A/x^2, and y(x) = x^2 − A/x + B.
[Plot for B = 0: solution curves y = x^2 − A/x for several values of A. For B ≠ 0, the other plots are vertical shifts of these.]
Example 1.6.2. yy'' = (y')^2.
No x, so use z = y', y'' = zz', so that y becomes the independent variable:
y·zz' = z^2  =⇒  dz/dy = z/y,
which is separable:
∫ dz/z = ∫ (1/y) dy  =⇒  ln|z| = ln|y| + C  =⇒  z = Ce^{ln|y|} = Cy.
Now dx/dy = 1/z = 1/(Cy), so
∫ (dx/dy) dy = ∫ dy/(Cy)
x(y) = C1 ln y + C2
ln y = (x − C2)/C1
y(x) = e^{(x − C2)/C1} = Be^{Ax}.   (A := 1/C1, B := e^{−C2/C1})
1.6.2 “v-substitution”
Example 1.6.3. dy/dx = (x + y + 3)^2.
For v = x + y + 3, you'd have y = v − x − 3, so dy/dx = dv/dx − 1, and substitution gives
dv/dx − 1 = v^2   (substitute on both sides)
dv/dx = v^2 + 1   (ODE is now separable)
∫ dv/(1 + v^2) = ∫ dx
arctan v = x + C
v = tan(x + C)
x + y + 3 = tan(x + C)   (back-substitution of y)
y = tan(x + C) − x − 3.
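The result can be verified symbolically (y' = sec^2(x + C) − 1 = tan^2(x + C) = (x + y + 3)^2) or numerically; a Python sketch with an arbitrary choice of C:

```python
import math

C = 0.3  # arbitrary constant

def y(x):
    return math.tan(x + C) - x - 3

def residual(x, h=1e-7):
    # y' - (x + y + 3)^2, with y' estimated by central differences
    dy = (y(x + h) - y(x - h)) / (2 * h)
    return dy - (x + y(x) + 3) ** 2

print(max(abs(residual(x)) for x in (-0.5, 0.0, 0.4, 0.9)))
```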
Method of solution:
1. Start with dy/dx = f(x, y).
2. Rewrite f(x, y) = g(x, v), where v = α(x, y) can be solved for y: y = β(x, v).
3. Chain/product rule gives dy/dx = (∂β/∂v)(dv/dx) + ∂β/∂x.
In the example, α(x, y) = x + y + 3, β(x, v) = v − x − 3, and dy/dx = dv/dx − 1.
4. Solve (∂β/∂v)v' + ∂β/∂x = g(x, v).
In particular:
• dy/dx = F(ax + by + c) can always be transformed into a separable equation via
v = ax + by + c (and y' = (1/b)(v' − a)).
• dy/dx = F(y/x) can always be transformed into a separable equation via v = y/x.
Definition 1.6.4. An ODE is homogeneous iff it can be written dy/dx = F(y/x). Informally,
such an equation depends only on the ratio of y to x, rather than on both separately.
When using the substitution v = y/x to make a homogeneous equation separable,
α(x, y) = y/x,   β(x, v) = vx,   dy/dx = x dv/dx + v,
so y = xv gives
x dv/dx + v = F(v)  =⇒  x dv/dx = F(v) − v  =⇒  dv/(F(v) − v) = dx/x.
Example 1.6.5. 2xy dy/dx = 4x^2 + 3y^2.
This is
dy/dx = 2(x/y) + 3y/(2x) = 2/v + (3/2)v   for v = y/x.
So y = vx and dy/dx = v + x dv/dx:
v + x dv/dx = 2/v + (3/2)v
x dv/dx = 2/v + v/2 = (v^2 + 4)/(2v)
∫ 2v/(v^2 + 4) dv = ∫ dx/x
ln(v^2 + 4) = ln|x| + C
v^2 + 4 = e^{ln|x| + C} = Cx
y^2/x^2 + 4 = Cx
y^2 + 4x^2 = Cx^3.
Note that solutions defined for x > 0 must have C > 0 and vice versa.
Definition 1.6.6. A Bernoulli equation is a 1st-order ODE of the form dy/dx + P(x)y = Q(x)y^n.
Although n is often an integer, this is not necessary.
v = y^{1−n}  =⇒  y = v^{1/(1−n)}  =⇒  dy/dx = (1/(1−n)) v^{1/(1−n) − 1} dv/dx = (1/(1−n)) v^{n/(1−n)} dv/dx
Example 1.6.8. x dy/dx = −6y + 3xy^{4/3}.
This is dy/dx + 6y/x = 3y^{4/3}, so Bernoulli with n = 4/3.
v = y^{1 − 4/3} = y^{−1/3}  =⇒  y = v^{−3}  =⇒  dy/dx = −3v^{−4} dv/dx
−3xv^{−4} dv/dx + 6v^{−3} = 3xv^{−4}  =⇒  dv/dx − (2/x)v = −1,
ρ(x) = e^{∫(−2/x)dx} = x^{−2}
x^{−2} dv/dx − (2/x^3)v = −1/x^2
(d/dx)[x^{−2} v] = −1/x^2
x^{−2} v = ∫ −(1/x^2) dx = 1/x + C
v = x + Cx^2
y^{−1/3} = v = x + Cx^2
y(x) = 1/(x + Cx^2)^3
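A numerical check of the final formula in Python (C = 1 and the positive sample points are arbitrary choices that keep x + Cx^2 > 0):

```python
C = 1.0  # arbitrary sample constant

def y(x):
    return 1.0 / (x + C * x**2) ** 3

def residual(x, h=1e-7):
    # x*y' + 6y - 3x*y^(4/3); should vanish along the solution
    dy = (y(x + h) - y(x - h)) / (2 * h)
    return x * dy + 6 * y(x) - 3 * x * y(x) ** (4.0 / 3.0)

print(max(abs(residual(x)) for x in (0.5, 1.0, 2.0)))
```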
Recap: v-substitution can transform an ODE into separable or linear; special cases:
• homogeneous: dy/dx = F(y/x), use v = y/x.
• Bernoulli: dy/dx + P(x)y = Q(x)y^n, use v = y^{1−n}.
Example 1.6.9. 2xe^{2y} dy/dx = 3x^4 + e^{2y}.
This is not separable, linear, homogeneous, or Bernoulli.
Try v = e^{2y}. Then
y = (1/2) ln v,   and   dy/dx = (1/(2v)) dv/dx.
Substituting, 2xv · (1/(2v)) dv/dx = x dv/dx = 3x^4 + v, i.e., dv/dx − (1/x)v = 3x^3, which is linear:
ρ(x) = e^{∫(−1/x)dx} = e^{−ln x} = 1/x
(1/x) dv/dx − (1/x^2)v = 3x^2
(d/dx)[(1/x)v] = 3x^2
(1/x)v = ∫ 3x^2 dx = x^3 + C
e^{2y} = v = x^4 + Cx
y(x) = (1/2) ln|x^4 + Cx|.
dF = (∂F/∂x) dx + (∂F/∂y) dy.
(This is similar to implicit differentiation: if y = y(x), then dF/dx = ∂F/∂x + (∂F/∂y)(dy/dx).)
∂F/∂x = M(x, y),   and   ∂F/∂y = N(x, y).   (1.6.1)
Definition 1.6.11. An ODE in differential form is exact iff it is the exact differential of
some F = F (x, y); that is, iff M dx + N dy = dF for some F .
Theorem 1.6.12. Suppose ∂M/∂y and ∂N/∂x are continuous on all of R^2. Then M dx + N dy = 0 is exact iff
∂M/∂y = ∂N/∂x.
Proof. (⇒) If the ODE is exact, then there is an F as in (1.6.1), so by Clairaut's theorem,
∂M/∂y = (∂/∂y)(∂F/∂x) = (∂/∂x)(∂F/∂y) = ∂N/∂x.
(⇐) Now suppose the partials are continuous and equal. Then it can be checked that
F(x, y) := ∫ M(x, y) dx + ∫ [N(x, y) − (∂/∂y) ∫ M(x, y) dx] dy
satisfies (1.6.1).
The proof of (⇒) can be summarized in a diagram: starting from F, apply ∂/∂x and ∂/∂y to obtain M and N, and then the mixed partials agree:
M(x, y) = ∂F/∂x,   N(x, y) = ∂F/∂y,
∂M/∂y = ∂^2F/∂y∂x = ∂^2F/∂x∂y = ∂N/∂x.
Example 1.6.13. y^3 + 3xy^2 dy/dx = 0.
In differential form, this is
y^3 dx + 3xy^2 dy = 0,
so
M = y^3 and N = 3xy^2  =⇒  ∂M/∂y = 3y^2 = ∂N/∂x
shows it is exact. To find F, you know ∂F/∂x = M = y^3, so
F(x, y) = ∫ M dx = ∫ y^3 dx = xy^3 + C(y)
∂F/∂y = 3xy^2 + C'(y) = N = 3xy^2,
which implies C'(y) = 0 and hence C(y) = C, an honest constant. Thus the solution is
F(x, y) = xy^3 = C.
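Both the exactness condition and the potential function can be spot-checked numerically; a small Python sketch using central differences at an arbitrary evaluation point:

```python
h = 1e-6

def M(x, y): return y**3
def N(x, y): return 3 * x * y**2
def F(x, y): return x * y**3   # the potential function found above

x0, y0 = 1.3, -0.7   # arbitrary test point
dM_dy = (M(x0, y0 + h) - M(x0, y0 - h)) / (2 * h)
dN_dx = (N(x0 + h, y0) - N(x0 - h, y0)) / (2 * h)
dF_dx = (F(x0 + h, y0) - F(x0 - h, y0)) / (2 * h)
dF_dy = (F(x0, y0 + h) - F(x0, y0 - h)) / (2 * h)

# exactness: dM/dy = dN/dx; potential: dF/dx = M and dF/dy = N
print(dM_dy - dN_dx, dF_dx - M(x0, y0), dF_dy - N(x0, y0))
```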
For (6xy − y^3) dx + (4y + 3x^2 − 3xy^2) dy = 0,
M = 6xy − y^3 and N = 4y + 3x^2 − 3xy^2  =⇒  ∂M/∂y = 6x − 3y^2 = ∂N/∂x,
so it is exact. To find F, you know ∂F/∂x = M = 6xy − y^3, so
F(x, y) = ∫ M dx = ∫ (6xy − y^3) dx = 3x^2 y − xy^3 + C(y)
∂F/∂y = 3x^2 − 3xy^2 + C'(y) = N = 4y + 3x^2 − 3xy^2,
so C'(y) = 4y and C(y) = 2y^2, giving
F(x, y) = 3x^2 y − xy^3 + 2y^2 = C.
If you look at the formula given for F in the second part of the proof of the theorem,
you can see that it comes from this procedure: integrate M with respect to x to get
F = ∫ M dx, so that ∂F/∂x recovers M(x, y), while ∂F/∂y = (∂/∂y) ∫ M dx is compared
against N(x, y) to determine the leftover function of y.
dy/dx = (x − y − 1)/(x + y + 3).
To remove the constants from the right side, substitute u and v so that
u − v = x − y − 1 and u + v = x + y + 3  =⇒  2u = 2x + 2, 2v = 2y + 4  =⇒  u = x + 1 and v = y + 2,
with du = dx and dv = dy. Thus,
dv/du = (u − v)/(u + v).
Put z = v/u so uz = v and dv/du = z + u dz/du, so
z + u dz/du = (u − uz)/(u + uz) = (1 − z)/(1 + z)
u dz/du = (1 − z)/(1 + z) − z = (1 − 2z − z^2)/(1 + z)
∫ (1 + z)/(1 − 2z − z^2) dz = ∫ du/u
−(1/2) ln|z^2 + 2z − 1| = ln|u| + D
u^2(z^2 + 2z − 1) = C
u^2(v^2/u^2 + 2v/u − 1) = C
v^2 + 2uv − u^2 = C
y^2 + 2xy − x^2 + 2x + 6y = C
where each Pi , pi is a function of x only. These can be constant functions, or even 0 (but
if Pn ≡ 0, then it isn’t nth -order).
Definition 3.1.1. An nth-order linear ODE is homogeneous iff F(x) ≡ 0 (f(x) ≡ 0).
Theorem 3.1.2. If the functions p0 (x), p1 (x), . . . , pn−1 (x), f (x) in (3.1.1) are continuous
on an open interval I containing a, then for any choice of numbers b0 , b1 , . . . , bn−1 , there
exists a unique solution to (3.1.1) that satisfies the initial conditions y(a) = b0, y'(a) = b1, . . . , y^{(n−1)}(a) = bn−1.
3.1.1 Linearity
Dx = 0, Dy = 0 =⇒ D(ax + by) = 0.
For the homogeneous equation, any linear combination of solutions is again a solution.
Dx = f, Dy = 0 =⇒ D(x + by) = f.
If you add a homogeneous solution to a particular solution, you get another solution.
Now suppose x and y are both functions of t ∈ R, and D is a differential operator. For
example, suppose
D = d^2/dt^2 + p(t) d/dt + q(t).
3.1 nth -Order Linear ODEs 37
This means:
Dx = (d^2/dt^2 + p(t) d/dt + q(t)) x = d^2x/dt^2 + p(t) dx/dt + q(t)x(t),
so that the ODE can be written compactly as Dx = f.
The definition of linear is written precisely so that an ODE is linear iff the
corresponding differential operator is linear.
In other words, Dx = f is a linear ODE iff D(ax + by) = aDx + bDy, for any x, y.
3.1.2 Superposition
y1'' = ((d/dx) cos 2x)' = (−2 sin 2x)' = −4 cos 2x  =⇒  y1'' + 4y1 = −4 cos 2x + 4 cos 2x = 0
y2'' = ((d/dx) sin 2x)' = (2 cos 2x)' = −4 sin 2x  =⇒  y2'' + 4y2 = −4 sin 2x + 4 sin 2x = 0.
(c1 cos 2x + c2 sin 2x)'' = c1 (cos 2x)'' + c2 (sin 2x)'' = −4c1 cos 2x − 4c2 sin 2x
We will soon see a theorem that says every soln is of this form.
Suppose y(0) = 3 and y 0 (0) = −2. (Remember: 2nd-order ODE, so need 2 ICs)
c1 y1 + · · · + cn yn = 0 =⇒ c1 = c2 = · · · = cn = 0.
c1 y1 + c2 y2 + · · · + cn yn = 0  =⇒  y1 = −(c2/c1) y2 − · · · − (cn/c1) yn,
(i) Any collection containing the zero function z(x) ≡ 0 is considered dependent.
(ii) If there are only two functions, then they are linearly independent iff neither is a
constant multiple of the other.
(iii) If a collection is dependent and you add something to it, the enlarged collection will
also be dependent.
(iv) If a collection is independent and you remove an element of it, the reduced set will
also be independent.
It is clear by inspection that yp = 3x is a solution of this equation, and we’ve seen that
y1 = cos 2x and y2 = sin 2x are solutions of the complementary equation. The theorem
tells us that if {cos 2x, sin 2x} is linearly independent, then the solution will be of the form
y = c1 cos 2x + c2 sin 2x + 3x
for some choice of c1, c2. Check that there is no c ∈ R such that c sin 2x = cos 2x (special
case (ii)): Suppose there were such a c. Note that y1 (0) = 0 but cy2 (0) 6= 0 unless c = 0.
But c 6= 0 because y1 is not the zero function.
Differentiating and matching the n initial conditions gives a linear system for the coefficients c = (c1, . . . , cn), which can be written
Ac = y0.
(Of course, each yi has to have (n − 1) continuous derivatives for this to make sense.)
For example, with y1 = e^x, y2 = xe^x, y3 = x^2 e^x:
W = det
| e^x   xe^x          x^2 e^x           |
| e^x   e^x(1 + x)    xe^x(2 + x)       |
| e^x   e^x(2 + x)    e^x(2 + 4x + x^2) |
= e^{3x} det
| 1   x        x^2            |
| 1   1 + x    x(2 + x)       |
| 1   2 + x    2 + 4x + x^2   |
= 2e^{3x} > 0.
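The determinant computation can be sanity-checked numerically. A Python sketch (the evaluation point x = 0.5 is an arbitrary choice) builds the 3×3 Wronskian matrix with finite-difference derivatives and expands along the first row:

```python
import math

def wronskian3(fs, x, h=1e-5):
    # rows: (f_i(x)), (f_i'(x)), (f_i''(x)); derivatives by finite differences
    row0 = [f(x) for f in fs]
    row1 = [(f(x + h) - f(x - h)) / (2 * h) for f in fs]
    row2 = [(f(x + h) - 2 * f(x) + f(x - h)) / h**2 for f in fs]
    m = [row0, row1, row2]
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

fs = [lambda t: math.exp(t), lambda t: t * math.exp(t), lambda t: t**2 * math.exp(t)]
x0 = 0.5
print(wronskian3(fs, x0), 2 * math.exp(3 * x0))   # both approximately 2e^{3x}
```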
Remark 3.1.13. “n-dimensional” means that n is the most you can have in an independent set.
Important implication 1: The solution space of a nth -order linear ODE is n-
dimensional, so you can never have more than n linearly independent solutions.
For example, suppose that y1 , y2 , y3 are all solutions of a 2nd-order ODE.
Then {y1, y2, y3} cannot be linearly independent.
But it can happen that {y1 , y2 } and {y1 , y3 } and {y2 , y3 } are each linearly independent.
Important implication 2: Given an nth -order linear IVP (so an nth -order ODE
and n ICs), you may not be able to find the solution if you don’t have n independent
complementary solutions to work with!
Remark 3.1.14. For functions, it does not make sense to talk of linear independence at a
point, only on an interval. As long as the functions are all defined at x0 , you can obviously
find a solution of ay1 (x0 ) + by2 (x0 ) = 0.
Remark 3.1.15. Also, the statement “y1 is independent” does not make sense, but it is
okay to say “{y1 , y2 } is independent” or “y1 is independent of {y2 , y3 }”. Independence
only makes sense for more than one function.
42 Linear Equations of higher order
(d^k/dx^k) e^{rx} = r^k e^{rx},   k = 0, 1, 2, . . . , n.
ay'' + by' + cy = 0.   (3.3.2)
(ar^2 + br + c) e^{rx} = 0
ar^2 + br + c = 0
There are three cases for the roots of the characteristic equation, depending on the
discriminant of
r = (−b ± √(b^2 − 4ac)) / (2a).
y'' + 5y' + 6y = 0,   y(0) = 2,  y'(0) = 3.
r^2 + 5r + 6 = (r + 2)(r + 3) = 0  =⇒  r1 = −2, r2 = −3
y(0) = c1 + c2 = 2 and y'(0) = −2c1 − 3c2 = 3  =⇒  c2 = −7, c1 = 9,
so y(x) = 9e^{−2x} − 7e^{−3x}.
Method of solution: When the roots r1 , . . . , rk of the characteristic equation are all
distinct, the complementary solution includes y = c1 er1 x + · · · + ck erk x .
y'' + 4y' + 4y = 0.
r^2 + 4r + 4 = (r + 2)^2 = 0,
so the roots are r1 = r2 = −2. Therefore one solution is y1 = e^{−2x}. To find a linearly
independent set, we need a second solution which is not a multiple of y1. The second
solution can be found in several ways. We use the following idea:
Assume that the other solution is of the form v(x)y1(x). Then
y = v(x)e^{−2x}
y' = v'(x)e^{−2x} − 2v(x)e^{−2x}
y'' = v''(x)e^{−2x} − 4v'(x)e^{−2x} + 4v(x)e^{−2x}.
Substituting into y'' + 4y' + 4y = 0, everything cancels except v''(x)e^{−2x} = 0, so
v'' = 0
v(x) = c1 x + c2
The second term is the solution we had previously, but the first term is something new: on
the extra credit, you’ll see that {e−2x , xe−2x } is linearly independent.
HW §3.1: #34, 35, Appl. 3.1 and §3.3: #24, 25, 26, 28
Helpful tip for finding roots: The only possible rational roots of a monic polynomial
with integer coefficients are the divisors of the constant term. So you can (i) evaluate the
polynomial at these divisors to find a root a, then (ii) divide the characteristic polynomial
by (r − a) to reduce its order (polynomial long division).
r^3 + r^2 − 4r − 4 = 0.
The only possible roots are the integer divisors of −4, so: ±1, ±2, ±4.
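The tip is easy to mechanize; a Python sketch tests each divisor of the constant term and then deflates by synthetic division:

```python
def poly(r, coeffs):
    # evaluate a polynomial with coefficients [a_n, ..., a_1, a_0] (Horner's rule)
    val = 0
    for a in coeffs:
        val = val * r + a
    return val

coeffs = [1, 1, -4, -4]             # r^3 + r^2 - 4r - 4
candidates = [1, -1, 2, -2, 4, -4]  # divisors of the constant term -4
roots = [r for r in candidates if poly(r, coeffs) == 0]

def deflate(coeffs, a):
    # synthetic division by (r - a); drops the (zero) remainder
    out = [coeffs[0]]
    for c in coeffs[1:-1]:
        out.append(c + a * out[-1])
    return out

print(roots, deflate(coeffs, roots[0]))   # roots -1, 2, -2; quotient r^2 - 4
```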
3.3 Homogeneous equations with constant coefficients 45
Since y1 is a solution, we know cy1 will also be a solution, for any constant c.
Generalize this by replacing c with a function v(x), then try to determine v(x) so that
v(x)y1 (x) is a solution to the ODE.
y 00 + py 0 + qy = 0.
(Note that y1 will be something you explicitly know; e.g., in the earlier example, y1 = e−2x .)
To find a second solution, let y = vy1 . Then
y 0 = v 0 y1 + vy10
y 00 = v 00 y1 + 2v 0 y10 + vy100
Since y1 is a solution of the original equation, the coefficient of v here is just 0, and it
reduces to
y1 v 00 + (2y10 + py1 )v 0 = 0.
The cancellation in (3.3.3) was not a fluke; this always happens. This is actually a first
order equation for the function v 0 ! So we have reduced a second-order equation to a
first-order equation.
Thus, it can be solved as a first order linear equation or a separable equation. Once
you have v 0 , you can get v by integrating, then multiply by y1 to get your new, linearly
independent solution.
y = c1 e(−7+i)x + c2 e(−7−i)x .
but what in the heck is the exponential of an imaginary number? Using Euler’s Formula:
we can write
= c1e^{−7x}(cos x + i sin x) + c2e^{−7x}(cos x − i sin x)
= (c1 + c2)e^{−7x} cos x + i(c1 − c2)e^{−7x} sin x
So we collected all the imaginary terms together and simplified the constants by setting
k1 := c1 + c2 and k2 := i(c1 − c2 ),
effectively removing the i from view. Is this allowed? Well, i is just a constant, so YES. If
you differentiate this solution, you will see that it works. Now the first initial condition
gives
2 = k1 cos 0 + k2 sin 0 =⇒ k1 = 2
and since
Despite the initial complex numbers, we have ended with a real-valued function! This
is invaluable for understanding what is going on with the system, graphing it, etc. For
example, as you might expect for a solution involving sines and cosines, there is some
oscillation going on.
Method of solution: If the characteristic polynomial has roots α ± β i, then the comple-
mentary solution includes y = eαx (c1 cos βx + c2 sin βx) .
If the pair of roots α ± βi appears k times, then the complementary solution includes
y = e^{αx}((c1 + c2x + · · · + ckx^{k−1}) cos βx + (d1 + d2x + · · · + dkx^{k−1}) sin βx).
Euler's formula is
e^{iax} = cos ax + i sin ax,
and it is not too difficult to prove if we recall the power series definition of the exponential function:
e^x := Σ_{n=0}^∞ x^n/n! = 1 + x + x²/2! + x³/3! + x⁴/4! + · · ·
Notice that if we separate the even and odd terms, we get
e^x = (1 + x²/2! + x⁴/4! + · · ·) + (x + x³/3! + x⁵/5! + · · ·)
    = Σ_{n=0}^∞ x^{2n}/(2n)! + Σ_{n=0}^∞ x^{2n+1}/(2n+1)!   (= cosh x + sinh x)
Compare this with the series
cos x := Σ_{n=0}^∞ (−1)^n x^{2n}/(2n)!,   sin x := Σ_{n=0}^∞ (−1)^n x^{2n+1}/(2n+1)!.
Now substitute ix:
e^{ix} = Σ_{n=0}^∞ (ix)^n/n!
       = Σ_{n=0}^∞ (ix)^{2n}/(2n)! + Σ_{n=0}^∞ (ix)^{2n+1}/(2n+1)!
       = Σ_{n=0}^∞ i^{2n}x^{2n}/(2n)! + Σ_{n=0}^∞ i^{2n+1}x^{2n+1}/(2n+1)!
       = Σ_{n=0}^∞ (−1)^n x^{2n}/(2n)! + i Σ_{n=0}^∞ (−1)^n x^{2n+1}/(2n+1)!
       = cos x + i sin x.
This tiny and elegant formula ties together all five of the fundamental constants of
mathematics. This should blow your mind.
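Euler's formula can also be checked numerically: partial sums of the exponential series really do converge to cos x + i sin x. A quick sketch:

```python
import math

def exp_series(z, terms=60):
    """Partial sum of e^z = sum_{n>=0} z^n / n!."""
    s, term = 0 + 0j, 1 + 0j
    for n in range(terms):
        s += term
        term *= z / (n + 1)
    return s

# e^{ix} agrees with cos x + i sin x at several sample points:
for x in (0.0, 1.0, math.pi, -2.5):
    assert abs(exp_series(1j*x) - complex(math.cos(x), math.sin(x))) < 1e-12
```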
There is one final upshot in the case of complex conjugate roots: by combining the real
and imaginary parts of the answer, we can obtain a real-valued solution. This is invaluable
for studying the solution, graphing it, etc. Recall from the example:
Ly = f,   where L = a_n d^n/dx^n + · · · + a_1 d/dx + a_0.
2. Differentiate your guess and substitute in to see if there are coefficients that work.
your guess for Y can have only finitely many linearly independent derivatives
So xex cos x has only finitely many linearly independent derivatives ... after a while, all
you see are linear combinations of the terms
However, this method would NOT work for f(x) = 1/x because
and then add their solutions together. Let’s introduce some notation to talk about the
parts of the forcing functions: f (t) = f1 (t) + f2 (t) for f1 (t) := 3e2t and f2 (t) := 2 sin t.
Y1 = Ae2t
Y10 = 2Ae2t
Y100 = 4Ae2t
Y2 = A sin t
Y20 = A cos t
Y200 = −A sin t
sin t : −5A = 2   A = −2/5
cos t : −3A = 0   A = 0      =⇒ blarg!
Y2 = A sin t + B cos t
LY2 = (−A sin t − B cos t) − 3(A cos t − B sin t) − 4(A sin t + B cos t)
sin t : −5A + 3B = 2          A = −5/17
cos t : −3A − 5B = 0   =⇒    B = 3/17
and Y2(t) = −(5/17) sin t + (3/17) cos t is a solution for this subproblem.
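Substituting Y2 back into the left-hand side Y″ − 3Y′ − 4Y confirms the coefficients; a numerical sketch:

```python
import math

A, B = -5/17, 3/17
def Y(t):   return A*math.sin(t) + B*math.cos(t)
def Yp(t):  return A*math.cos(t) - B*math.sin(t)
def Ypp(t): return -Y(t)   # second derivative of A sin t + B cos t

# LY2 = Y'' - 3Y' - 4Y should equal the forcing 2 sin t:
for t in (0.0, 0.7, 2.0, 4.4):
    lhs = Ypp(t) - 3*Yp(t) - 4*Y(t)
    assert abs(lhs - 2*math.sin(t)) < 1e-12
```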
Y = A cos 2t + B sin 2t
This cannot be solved, because substituting Y makes one side identically 0 while the other is not. ↯
What happened?!? Let’s look at the complementary solution:
y 00 + 4y = 0 =⇒ r2 + 4 = 0 =⇒ r = ±2i,
so e±2it = e0 (cos 2t + i sin 2t) means we have yc = c1 cos 2t + c2 sin 2t. There is DUPLICA-
TION between the complementary solution and the proposed particular solution.
Y = At cos 2t + Bt sin 2t
LY = (−4A sin 2t − 4Bt sin 2t + 4B cos 2t − 4At cos 2t) + 4(At cos 2t + Bt sin 2t)
3.5 Undetermined coefficients
Remark 3.5.3. The fact that a purely oscillatory forcing function can lead to a solution
that also involves a linear term is CRUCIAL for some applications.
2. Make sure that the forcing function f (x) is a combination of products of polynomials,
exponentials, sines, and cosines. If not, use variation of parameters (coming next).
This method constructs solutions to the nonhomogeneous equation out of solutions to the
homogeneous equation. It can be applied to ANY linear nth -order ODE, including
ones with NONCONSTANT coefficients, but you may encounter unevaluatable
integrals.
and you have two linearly independent solutions {y1 , y2 } to Ly = 0, so your complementary
solution is yc = c1 y1 + c2 y2 . Now guess:
yp = u1 (x)y1 + u2 (x)y2 ,
where u1 and u2 are unknown functions. The hope is to find a solution by allowing the
parameters c1 , c2 in the complementary solution to vary (i.e., by replacing the constant ci
with the function ui ), hence the name “variation of parameters”.
Guess:
yp = u1 (x)x + u2 (x)x3
yp0 = u1 + 3u2 x2
u1′x + u2′x³ = 0                         u1′y1 + u2′y2 = 0
u1′ + 3u2′x² = 2x² sin x       or        u1′y1′ + u2′y2′ = F(x)/P2(x)
u2′ = sin x
u1′ · x + x³ sin x = 0,
so u1′ = −x² sin x. It remains to integrate, taking C = 0 each time (why is this okay?):
u1 = −∫ x² sin x dx = x² cos x − 2x sin x − 2 cos x
u2 = ∫ sin x dx = − cos x
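The antiderivative for u1 can be checked by differentiating numerically; this sketch confirms that u1′ = −x² sin x:

```python
import math

def u1(x):
    # claimed antiderivative of -x^2 sin x (constant of integration taken as 0)
    return x*x*math.cos(x) - 2*x*math.sin(x) - 2*math.cos(x)

h = 1e-5
for x in (0.5, 1.3, 2.9):
    deriv = (u1(x + h) - u1(x - h)) / (2*h)   # central difference
    assert abs(deriv - (-x*x*math.sin(x))) < 1e-7
```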
(2) By any means available, solve the system for u1′, u2′:
u1′y1 + u2′y2 = 0
u1′y1′ + u2′y2′ = F(x)/P2(x)
(2) Solve
(3) u1 = ∫(cos x − sec x) dx = sin x − ln |sec x + tan x| and u2 = ∫ sin x dx = − cos x.
(4) yp = cos x(sin x−ln | sec x+tan x|)−sin x cos x =⇒ yp = − cos x ln | sec x + tan x|
3.4 Oscillations
F = external force.
(Diagrams: a mass–spring–dashpot system with mass m, spring constant k, damping constant c, and external force F; and an RLC circuit with voltage source E, switch, resistor R, inductor L, and capacitor C.)
Kirchhoff’s Law: sum of voltage drops around a circuit equals the applied voltage.
circuit element:   inductor   resistor   capacitor
voltage drop:      L dI/dt    RI         (1/C)Q
If Q(t) is charge, then
LQ″ + RQ′ + (1/C)Q = E(t),   where L = inductance, R = resistance, C = capacitance.
Since I = Q′ = current, and people are usually more interested in current than charge, differentiating the ODE gives
LI″ + RI′ + (1/C)I = E′(t).
Definition 3.4.1. Free oscillation refers to the absence of a forcing function, i.e., F ≡ 0 or E ≡ 0.
We will see x(t) = C cos(ϕt − α) again, so it will be useful to look at this in detail:
A cos ϕt + B sin ϕt = C((A/C) cos ϕt + (B/C) sin ϕt)      C = √(A² + B²)
                    = C(cos α cos ϕt + sin α sin ϕt)       α as in diagram
                    = C cos(ϕt − α)                         by trig
                    = C cos(ϕ(t − δ))                       δ = α/ϕ = "delay".
(Diagram: right triangle with legs A and B, hypotenuse C, and angle α; graph of the oscillation showing the delay δ and period T.)
The oscillation is periodic with period T = 2π/ϕ, i.e., x(t + 2π/ϕ) = x(t), for all t ∈ R.
To determine α, use the signs of A, B to determine which quadrant it is in:
α = tan⁻¹(B/A),        A > 0, B > 0,   1st quadrant
α = π + tan⁻¹(B/A),    A < 0,          2nd, 3rd quadrant
α = 2π + tan⁻¹(B/A),   A > 0, B < 0,   4th quadrant,
where tan⁻¹ x ∈ (−π/2, π/2).
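In code, the quadrant bookkeeping is handled automatically by the two-argument arctangent; a sketch (the mod 2π just normalizes α into [0, 2π)):

```python
import math

def amplitude_phase(A, B):
    """Return (C, alpha) with A cos(pt) + B sin(pt) = C cos(pt - alpha)."""
    C = math.hypot(A, B)                       # sqrt(A^2 + B^2)
    alpha = math.atan2(B, A) % (2*math.pi)     # atan2 picks the right quadrant
    return C, alpha

phi = 3.0
for A, B in ((1, 2), (-1, 2), (-1, -2), (1, -2)):   # one pair per quadrant
    C, alpha = amplitude_phase(A, B)
    for t in (0.0, 0.4, 1.7):
        lhs = A*math.cos(phi*t) + B*math.sin(phi*t)
        assert abs(lhs - C*math.cos(phi*t - alpha)) < 1e-12
```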
equation is
r² + 2pr + ϕ² = 0   =⇒   r = −p ± √(p² − ϕ²) = −p ± √(c² − 4km)/(2m).
Critical damping is given by c_cr := √(4km).
case c < c_cr. The system is underdamped, with r = −p ± iψ for ψ = √(ϕ² − p²), so
In the overdamped and critically damped cases, the system passes through equilibrium
at most once. All the solutions are transient.
Definition 3.4.2. A solution x(t) is transient iff limt→∞ x(t) = 0, so it dies out.
Definition 3.4.3. For a function of the form x(t) = Ce−pt cos(ω(t − δ)), the amplitude
envelope is Ce−pt .
xc = c1 cos ϕt + c2 sin ϕt
xp(t) = (F0/m)/(ϕ² − ω²) · cos ωt
so that x = xc + xp is a superposition of cosines. The amplitude envelope (F0/m)/(ϕ² − ω²) is a constant function.
xp(t) = (F0/m)/(2ϕ) · t sin ωt
The amplitude envelope (F0/(2mϕ)) t is not a bounded function.
Definition 3.6.1. Resonance occurs when the natural vibrations of the system are
reinforced by externally impressed vibrations at the same frequency, and this potentially
leads to oscillations of unbounded amplitude. In terms of ODEs, the forcing function
duplicates the homogeneous system, and this gives the solution another factor of t.
(Figure: plot of 0.1t cos t with its amplitude envelope ±0.1t.)
3.6 Forced oscillations
Once again, there are three cases for the complementary solution
xc(t) = c1e^{r1t} + c2e^{r2t},        c > c_cr
xc(t) = (c1 + c2t)e^{−pt},            c = c_cr
xc(t) = Ce^{−pt} cos(ψ(t − δ)),       c < c_cr
with r = −p ± iψ for ψ = √(ϕ² − p²), and all are transient, so x(t) ≈ xp(t), for t >> 0.
xp = A cos ωt + B sin ωt   =⇒   A = (k − mω²)F0/((k − mω²)² + (cω)²),   B = cωF0/((k − mω²)² + (cω)²)
(Preview of coming attractions: look inside the front cover of your book for a/(a² + b²), b/(a² + b²).)
C = √(A² + B²)   =⇒   C(ω) = F0/√((k − mω²)² + (cω)²).
Note that if k > 0 and there is damping, i.e., if c > 0, then C(ω) is bounded:
(k − mω²)² = 0 ⇐⇒ ω = √(k/m),   but (cω)² = 0 ⇐⇒ ω = 0.
If c ≥ √(2km) (so the system is overdamped or critically damped), then C(ω) is a monotonic decreasing function of ω.
If c < √(2km) (so the system is underdamped), then C(ω) attains a maximum at some ω < ϕ. This is called practical resonance.
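The location of the practical-resonance peak can be found by calculus: minimizing (k − mω²)² + (cω)² gives ω* = √(k/m − c²/(2m²)) when c² < 2km. A numerical sketch with illustrative (made-up) parameters:

```python
import math

# Illustrative underdamped parameters: c^2 < 2km.
m, k, c, F0 = 1.0, 4.0, 0.5, 1.0

def C(w):
    return F0 / math.sqrt((k - m*w*w)**2 + (c*w)**2)

# Locate the peak of C on a grid...
ws = [i * 1e-3 for i in range(1, 6000)]
w_peak = max(ws, key=C)

# ...and compare with the critical point w* = sqrt(k/m - c^2/(2 m^2)).
w_star = math.sqrt(k/m - c*c/(2*m*m))
assert abs(w_peak - w_star) < 2e-3
assert w_star < math.sqrt(k/m)   # practical resonance sits below phi = sqrt(k/m)
```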
(Figure: graph of the amplitude C(ω), peaking at practical resonance.)
Example 3.7.1. Consider an RLC circuit with R = 50Ω, L = 0.1H, and C = 5 × 10−4 F .
At time t = 0, I(0) = Q(0) = 0. Connect to 110V generator operating at 60Hz.
(Diagram: RLC circuit with E = 110 V at 60 Hz, R = 50 Ω, L = 0.1 H, C = 5 × 10⁻⁴ F, and a switch.)
LI″ + RI′ + (1/C)I = E′(t)   =⇒   0.1I″ + 50I′ + 2000I = 13200π cos(120πt)
So the transient/complementary solution is Itr (t) = c1 e−44t + c2 e−456t and the steady
periodic part is
Isp(t) = E0/√(R² + (ωL − 1/(ωC))²) · cos(ωt − α),   α = tan⁻¹(ωRC/(1 − LCω²)),   0 ≤ α ≤ π
Isp(t) = (110/59.98) cos(377t − 2.14568),   α ≈ −0.995914 + π = 2.14568
Side note: ωL − 1/(ωC) is called reactance, and √(R² + (ωL − 1/(ωC))²) is called impedance.
Then
I(0) = c1 + c2 − 1.00374 = 0
=⇒ c1 = −0.307, c2 = 1.311
To see how rapidly the transient part dies out, note that
|Itr (0.2)| < 0.000047A =⇒ |I(t) − Isp (t)| < 0.000047, for t ≥ 0.2.
3.7 Electrical circuits
Example 3.7.2. A radio is an RLC circuit where R, L are fixed and C is adjustable.
Effectively, a signal broadcast at frequency ω is an input voltage E(t) = E0 cos ωt to the
tuning circuit. The resulting Isp in the tuning circuit drivers the amplifier, so what you
hear is proportional to
I0 = E0/√(R² + (ωL − 1/(ωC))²).
Choose C to maximize I0:
ωL − 1/(ωC) = 0   =⇒   C = 1/(Lω²)   =⇒   I0 = E0/R.
Turning the dial sets the capacitor to this value. The text describes how modern AM radios use a second capacitor set at around 455 kHz above the desired frequency; the resulting beat frequency is then amplified, giving better gain and selectivity.
mx″ + kx = F0 cos ωt   =⇒   x(t) = xtr(t) + (F0/m)/(ϕ² − ω²) · cos ωt
xsp(t) = (F0/m)/(ϕ² − ω²) · (cos ωt − cos ϕt) = 2(F0/m)/(ϕ² − ω²) · sin((ϕ − ω)t/2) sin((ϕ + ω)t/2)
x(t) = A(t) sin((ϕ + ω)t/2),   A(t) := 2(F0/m)/(ϕ² − ω²) · sin((ϕ − ω)t/2)
Definition 3.8.1. A boundary value problem (BVP) is an ODE together with conditions
on y(a) and y(b), for a < b. In this case, one looks for a solution to the ODE on the
interval I = (a, b) which satisfies the given boundary conditions (BC).
The BVP situation is very different from IVPs, and in particular, there is not a directly
analogous existence & uniqueness theorem.
y(x) = c1 cos(√3 x) + c2 sin(√3 x)
y(0) = 0   =⇒   c1 cos(0) = 0   =⇒   c1 = 0   =⇒   y(x) = c2 sin(√3 x)
y(π) = 0   =⇒   c2 sin(√3 π) = 0   =⇒   c2 = 0   =⇒   y(x) ≡ 0.
This is no big surprise, because for f ≡ 0 and b0 = b1 = 0, the only solution to the IVP
above is also y ≡ 0.
Definition 3.8.4. The choice y(a) = y(b) = 0 is called Dirichlet boundary conditions.
Another common choice is y 0 (a) = y 0 (b) = 0, which is called Neumann boundary conditions.
To determine under what conditions the BVP will have a solution, introduce a parameter
λ and consider
Definition 3.8.5. If λ is a value for which the BVP has a nontrivial solution, we say λ is
an eigenvalue of the problem, and that the corresponding solution y is an eigenfunction
associated to the eigenvalue λ. Note that if y is an eigenfunction for λ, then so is cy, for
any nonzero constant c.
y″ + p(x)y′ + λq(x)y = 0,                X″ + λX = 0,
y(a) = 0, y(b) = 0           =⇒         X(0) = X(b) = 0, b > a.
The latter has a nontrivial solution iff λ is an eigenvalue of the differential operator L = −d²/dx².
Example 3.8.6. Consider the following BVP with Dirichlet boundary conditions on an
interval of length L:
case λ < 0. Then we can write λ = −α² for α > 0, so that y″ − α²y = 0. Then y = Ae^{αx} + Be^{−αx}. The BC y(0) = 0 gives A = −B, and then y(L) = 0 gives
0 = Ae^{αL} + Be^{−αL}   =⇒   e^{2αL} = −B/A = 1   =⇒   2αL = 0,  a contradiction (α, L > 0). ↯
So λ = 0 is not an eigenvalue.
yn(x) = sin(nπx/L),   λn = n²π²/L²,   n = 1, 2, 3, . . .
Example 3.8.7. Consider the following BVP with Robin (mixed) boundary conditions
on an interval of length L:
yn(x) = sin((2n − 1)πx/(2L)),   λn = (2n − 1)²π²/(4L²),   n = 1, 2, 3, . . .
Method of solution
(1) Write the general solution as y = Ay1 (x, λ) + By2 (x, λ).
(2) Impose the boundary conditions to get a homogeneous linear system:
α1(λ)A + β1(λ)B = 0,
α2(λ)A + β2(λ)B = 0.
(3) The eigenvalues for the BVP are roots of the determinant, i.e., the solutions of
HW §3.8 #3–9
3.8 Boundary value problems
Suppose you have a beam of length L and the cross section is of uniform shape and uniform
density. Let y(x) denote the amount by which the beam is distorted downward, at the
point x. (So y ≥ 0.)
For deflections which are small enough that 1 >> (y 0 )2 , the curve is given by
EIy (4) = w,
where E is Young’s modulus, I is the moment of inertia for a cross section, and w is the
weight per unit length.
To solve:
EIy‴ = wx + C1
EIy″ = (1/2)wx² + C1x + C2
EIy′ = (1/6)wx³ + (C1/2)x² + C2x + C3
EIy = (1/24)wx⁴ + (C1/6)x³ + (C2/2)x² + C3x + C4
y = (w/(24EI))x⁴ + Ax³ + Bx² + Cx + D.
The 4 constants of integration are determined by the (physical) boundary conditions: how
the beam is supported.
Support            Boundary condition
Simply supported   y = y″ = 0
Built-in           y = y′ = 0
Free end           y″ = y‴ = 0
Example 3.8.8. Suppose you have a simply supported uniform steel rod of length 10m
and circular cross section of 3cm diameter. What is the maximum deflection of the beam?
From above,
EIy″ = (1/2)wx² + C1x + C2
y″(0) = 0 :   0 = (1/2)w·0² + C1·0 + C2   =⇒   C2 = 0
y″(L) = 0 :   0 = (1/2)wL² + C1L           =⇒   C1 = −wL/2
EIy″ = (1/2)wx² − (1/2)wLx
EIy = (1/24)wx⁴ − (1/12)wLx³ + C3x + C4
y(0) = 0 :   0 = (1/24)w·0⁴ − (1/12)wL·0³ + C3·0 + C4   =⇒   C4 = 0
y(L) = 0 :   0 = (1/24)wL⁴ − (1/12)wL⁴ + C3L             =⇒   C3 = (1/24)wL³
y(x) = (w/(24EI))(x⁴ − 2Lx³ + L³x).
By symmetry, the maximal deflection occurs at x = L/2, so ymax = y(L/2) = 5wL⁴/(384EI).
From a handbook, you find that steel has density δ = 7.75 g/cm³ and Young's modulus E = 2 × 10¹² g/(cm·s²), so
ymax = 1.97786 × 10⁻¹⁰ · 500⁴/1.5² = 1.97786 × 10⁻¹⁰ · 1000⁴/6² = 5.49407 cm.
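The beam solution itself is easy to verify symbolically or numerically; this sketch (with made-up values of w, E, I, L) checks the boundary conditions, the beam equation, and the midpoint deflection 5wL⁴/(384EI):

```python
# Checking y(x) = (w/(24EI))(x^4 - 2Lx^3 + L^3 x) for the simply supported beam,
# with illustrative (made-up) values of w, E, I, L.
w, E, I, L = 2.0, 10.0, 3.0, 5.0
k = w / (24 * E * I)   # overall scale factor

def y(x):   return k * (x**4 - 2*L*x**3 + L**3 * x)
def ypp(x): return k * (12*x**2 - 12*L*x)   # y''
def y4(x):  return k * 24                   # y^(4)

# Simple support: y = y'' = 0 at both ends.
assert abs(y(0)) < 1e-12 and abs(y(L)) < 1e-9
assert abs(ypp(0)) < 1e-12 and abs(ypp(L)) < 1e-9
# The beam equation EI y^(4) = w holds:
assert abs(E*I*y4(0) - w) < 1e-12
# Maximum deflection at the midpoint equals 5wL^4/(384EI):
assert abs(y(L/2) - 5*w*L**4/(384*E*I)) < 1e-9
```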
HW §3.8 #13–18
Chapter 7
mx″ + cx′ + kx = f(t)   or   LI″ + RI′ + (1/C)I = E′(t),
(ii) is injective. Let F (s) = L(f ) and G(s) = L(g). If F (s) = G(s), then f (t) = g(t).
(This will effectively allow us to invert the transform: F ↦ f = L⁻¹{F}.)
(iii) converts differentiation to multiplication. L(d/dt f) = sL(f) = sF(s).
x(t) = L⁻¹{X(s)} = L⁻¹{F(s)/(s² + cs + k)},   by (ii).
for all s such that the integral converges, i.e., such that the limit exists:
lim_{b→∞} ∫_0^b e^{−st} f(t) dt.
Note: it is good practice to keep track of the s values for which the integral converges.
Note: g(s, t) = e^{−st}f(t) is a function on R × R. When t is "integrated out", you're left with a function of s ∈ R.
This limit exists only for s > 0, so F(s) = L(1) = 1/s, for s > 0.
(Figure: graphs of f(t) = 1 and its transform F(s) = 1/s.)
F(s) = L{e^{at}} = ∫_0^∞ e^{−st}e^{at} dt = ∫_0^∞ e^{−(s−a)t} dt = [−e^{−(s−a)t}/(s − a)]_0^∞
This limit exists iff s − a > 0, so F(s) = L{e^{at}} = 1/(s − a), for s > a.
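These first two transforms can be spot-checked by numerical quadrature; a crude sketch (trapezoid rule on a long finite interval, since the tail is negligible for s > a):

```python
import math

def laplace(f, s, T=60.0, n=200000):
    """Crude numerical Laplace transform: trapezoid rule on [0, T]."""
    h = T / n
    total = 0.5 * (f(0.0) + math.exp(-s*T) * f(T))
    for i in range(1, n):
        t = i * h
        total += math.exp(-s*t) * f(t)
    return total * h

s, a = 2.0, 0.5
assert abs(laplace(lambda t: 1.0, s) - 1/s) < 1e-4                   # L{1} = 1/s
assert abs(laplace(lambda t: math.exp(a*t), s) - 1/(s - a)) < 1e-4   # L{e^{at}}, s > a
```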
(Figure: graphs of f(t) = e^{at} and its transform F(s) = 1/(s − a), s > a.)
7.1 Laplace transforms and inverse transforms
Theorem 7.1.4. Suppose f (t) is differentiable and f (0) = limt→∞ f (t) = 0. Then
L{f 0 (t)} = sF (s) .
We will build a toolbox of Laplace transforms to take care of all the usual functions we
encounter: polynomials, trigonometric functions, exponentials, etc.
Theorem 7.1.5. For s > 0, L{t^a} = Γ(a + 1)/s^{a+1}, for a > −1, and L{t^n} = n!/s^{n+1}, for n = 0, 1, 2, . . . .
Proof.
F(s) = L{t^a} = ∫_0^∞ e^{−st}t^a dt = (1/s^{a+1}) ∫_0^∞ e^{−u}u^a du = Γ(a + 1)/s^{a+1},   for s > 0,
where Γ(x) = ∫_0^∞ e^{−t}t^{x−1} dt is the Gamma function. Thus
L{t^a} = Γ(a + 1)/s^{a+1},  a > −1,   and so   L{t^n} = n!/s^{n+1},  n = 0, 1, 2, . . . ,
Theorem 7.1.6. For a, b ∈ R, L{af (t) + bg(t)} = aL{f (t)} + bL{g(t)} , whenever the
transforms exist.
Proof. HW.
Since any polynomial is a linear combination of monomials t^n, this theorem and the previous one give the Laplace transform of any polynomial.
HW §7.1 #3, 7–11, 20, 23, 27 and prove linearity. For √t, use Γ(1/2) = √π.
Laplace Transform Methods
Theorem 7.1.7. L{cos kt} = s/(s² + k²) and L{sin kt} = k/(s² + k²).
Proof. Since cos kt = (e^{ikt} + e^{−ikt})/2 and sin kt = (e^{ikt} − e^{−ikt})/(2i), we can compute
L{cos kt} = (1/2)(L{e^{ikt}} + L{e^{−ikt}}) = (1/2)(1/(s − ik) + 1/(s + ik)) = (1/2) · ((s + ik) + (s − ik))/(s² + k²) = s/(s² + k²),
L{sin kt} = (1/2i)(L{e^{ikt}} − L{e^{−ikt}}) = (1/2i)(1/(s − ik) − 1/(s + ik)) = (1/2i) · ((s + ik) − (s − ik))/(s² + k²) = k/(s² + k²).
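Since L{e^{±ikt}} = 1/(s ∓ ik), the complex partial-fraction algebra in this proof can be spot-checked numerically:

```python
# Spot-check the complex algebra at an arbitrary point (s, k):
s, k = 1.7, 3.2
L_cos = 0.5 * (1/(s - 1j*k) + 1/(s + 1j*k))
L_sin = (1/(s - 1j*k) - 1/(s + 1j*k)) / 2j
assert abs(L_cos - s/(s*s + k*k)) < 1e-12   # matches s/(s^2+k^2)
assert abs(L_sin - k/(s*s + k*k)) < 1e-12   # matches k/(s^2+k^2)
```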
Similarly,
cosh x = (e^x + e^{−x})/2   =⇒   L{cosh kt} = s/(s² − k²)
sinh x = (e^x − e^{−x})/2   =⇒   L{sinh kt} = k/(s² − k²)
To deal with functions like cos^n kt, use power-reduction trig formulas:
L{cos³ kt} = L{(3 cos kt + cos 3kt)/4} = (3/4) · s/(s² + k²) + (1/4) · s/(s² + 9k²).
Definition 7.1.8. f is piecewise continuous on (a, b) iff you can divide a = x0 < x1 < x2 < · · · < xn = b with
(i) f continuous on each subinterval (x_{i−1}, x_i), and
(ii) the one-sided limits f(xi+) = lim_{ε→0+} f(xi + ε) and f(xi−) = lim_{ε→0+} f(xi − ε) exist (and are finite).
(Figure: a piecewise continuous function on (a, b), with partition a = x0 < x1 < x2 < x3 < x4 = b.)
The translates of the unit step function will also be useful: ua (t) := u(t − a). In
particular, note that
f(t)ua(t) = { 0, t < a;  f(t), t ≥ a }.
(Figure: graphs of u(t), ua(t), and e^{−st}ua(t).)
Theorem 7.1.10. L{ua} = e^{−as}/s, for s > 0.
Proof.
L{ua} = ∫_0^∞ e^{−st}ua(t) dt = ∫_a^∞ e^{−st} dt = [−e^{−st}/s]_a^∞ = lim_{b→∞}(−e^{−sb}/s) + e^{−sa}/s = e^{−sa}/s.
lim_{t→∞} |f(t)|/e^{ct} = ∞.
Example 7.1.12. If f is bounded (|f (t)| ≤ M , for all t), then f is of exponential order
c = 0, so f (t) = O(e0 ) = O(1), as t → ∞.
Example 7.1.13. If f is any polynomial, then lim_{t→∞} f(t)/e^t = 0, so every polynomial is of exponential order c = 1, so f(t) = O(e^t), as t → ∞.
Example 7.1.14. If f(t) = e^{t²}, then f is not of exponential order because for any c,
lim_{t→∞} f(t)/e^{ct} = lim_{t→∞} e^{t²}/e^{ct} = lim_{t→∞} e^{t²−ct} = ∞,
Proof. First, note that since f is piecewise continuous, it is bounded on any finite interval,
and hence |f (t)| ≤ M ect for t ≥ T if and only if |f (t)| ≤ M ect for t ≥ 0. So we can let
T = 0 and assume that |f (t)| ≤ M ect holds for all t ≥ 0, without loss of generality. Now
for s > c, we have
∫_0^b |e^{−st}f(t)| dt ≤ ∫_0^b |e^{−st}Me^{ct}| dt = M ∫_0^b e^{−(s−c)t} dt ≤ M ∫_0^∞ e^{−(s−c)t} dt = M/(s − c).
Since |∫ f(t) dt| ≤ ∫ |f(t)| dt is always true, this implies that the Laplace transform exists:
|F(s)| = lim_{b→∞} |∫_0^b e^{−st}f(t) dt| ≤ lim_{b→∞} ∫_0^b |e^{−st}f(t)| dt ≤ M/(s − c).
Example 7.1.17. F(s) = s/(s + 1) is not the Laplace transform of any pw-contin function of exponential order, because lim_{s→∞} s/(s + 1) = 1 ≠ 0.
More generally, F(s) = (a0 + a1s + a2s² + · · · + a_ns^n)/(b0 + b1s + b2s² + · · · + b_ms^m) can only be a Laplace transform if n < m.
Example 7.1.18. The converse of the Theorem does not hold. For example, f(t) = t^{−1/2} fails to be piecewise continuous (at t = 0), but still has a Laplace transform
L{t^{−1/2}} = Γ(1/2)/s^{1/2} = √(π/s).
but this requires a contour integral in the complex plane, so we’ll skip it.
HW §7.1 #
Proof. HW.
Example 7.2.2. Consider dx/dt = −λx with x(0) = x0.
First, rewrite this as the IVP
x0 + λx = 0, x(0) = x0 .
(i) Since L is linear, the inverse transform L−1 is also linear: L−1 {aF (s) + bG(s)} =
aL−1 {F (s)} + bL−1 {G(s)}. This allowed us to pull out the x0 .
(ii) We did not compute the inverse transform directly. Instead, we put it into the
form of a known transform. This is how we will always invert the transform: do
7.2 Transforming initial value problems
Theorem 7.2.3. Suppose f (n) is piecewise continuous and f (k) (t) = O(eck t ) as t → ∞,
for each of k = 0, 1, 2, . . . . Then
L{f (n) (t)} = sn L{f (t)} − sn−1 f (0) − sn−2 f 0 (0) − sn−3 f 00 (0) − · · · − sf (n−2) (0) − f (n−1) (0).
IVP --L--> algebraic equation; solve by algebra for X(s); then recover the solution x(t) of the IVP via L⁻¹.
For step (3), this will often involve breaking a large rational function into smaller
components like
1/(s − k),   s/(s² ± k²),   or   k/(s² ± k²).
=⇒ If you don’t remember partial fractions, do the review posted on D2L, now!!!
There are many further examples in §7.3.
Further techniques
Example 7.2.6 (Multiply by t, or multiply by e^{at}). Show that L{te^{at}} = 1/(s − a)².
Note that f (0) = 0 for f (t) = teat . Now use the product rule and take Laplace transforms:
f′(t) = e^{at} + ate^{at}          product rule
sF(s) = 1/(s − a) + aF(s)          transform both sides, using f(0) = 0
(s − a)F(s) = 1/(s − a)            collect terms
L{te^{at}} = 1/(s − a)²            solve for F(s) = L{te^{at}}.
Interpretation/preview:
(i) If f(t) = t, then L{t} = 1/s². So multiplying f by e^{at} has the effect of translating F(s) by a:
L{e^{at}f(t)} = F(s − a).
(ii) If f(t) = e^{at}, then L{e^{at}} = 1/(s − a). So multiplying f by t has the effect of ... what? Does it square F(s)? Actually, we'll see it differentiates F(s) (and multiplies by −1).
More generally,
These are both theorems that we’ll prove in the next section. First, let’s see another example
of how multiplying a function by t results in its Laplace transform being differentiated
(and negated). First:
−(d/ds)L{sin kt} = −(d/ds)[k/(s² + k²)] = −[(s² + k²) · 0 − k(2s)]/(s² + k²)² = 2ks/(s² + k²)².
Example 7.2.7 (Multiply by t). Show that L{t sin kt} = 2ks/(s² + k²)².
Note that f (0) = 0 for f (t) = t sin kt. Now use the product rule and take Laplace
transforms:
f 00 (t) = 2k cos kt − k 2 t sin kt product rule again (to recoup t sin kt)
Theorem 7.2.8 (Transforms of integrals). Suppose f (t) is pw-continuous and f (t) = O(ect )
as t → ∞. Then
L{∫_0^t f(τ) dτ} = (1/s)L(f(t)) = F(s)/s,   for s > c.
Proof. Note that g(t) = ∫_0^t f(τ) dτ is continuous, and that g′(t) = f(t) wherever f(t) is continuous (FToC). Also,
|g(t)| = |∫_0^t f(τ) dτ| ≤ ∫_0^t |f(τ)| dτ ≤ M ∫_0^t e^{cτ} dτ = (M/c)(e^{ct} − 1) < (M/c)e^{ct},
Example 7.2.9. Suppose that L{g(t)} = G(s) = 1/(s²(s − a)). Find g(t).
Do this in two steps, corresponding to two applications of the theorem:
L⁻¹{1/(s(s − a))} = ∫_0^t L⁻¹{1/(s − a)} dτ = ∫_0^t e^{aτ} dτ = (e^{at} − 1)/a,
so
g(t) = L⁻¹{1/(s²(s − a))} = ∫_0^t L⁻¹{1/(s(s − a))} dτ = ∫_0^t (e^{aτ} − 1)/a dτ = (e^{at} − at − 1)/a².
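The final answer g(t) = (e^{at} − at − 1)/a² can be checked against 1/(s²(s − a)) by numerical quadrature; a crude sketch:

```python
import math

def laplace(f, s, T=80.0, n=200000):
    # Trapezoid rule for int_0^T e^{-st} f(t) dt (the tail beyond T is negligible here).
    h = T / n
    total = 0.5 * (f(0.0) + math.exp(-s*T) * f(T))
    for i in range(1, n):
        t = i * h
        total += math.exp(-s*t) * f(t)
    return total * h

a, s = 0.5, 2.0
g = lambda t: (math.exp(a*t) - a*t - 1) / a**2
assert abs(laplace(g, s) - 1/(s**2 * (s - a))) < 1e-3
```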
Fix a > 0. Then f (t − a) shifts the graph to the right and f (t + a) is a shift to the left.
Theorem 7.3.2. Suppose F (s) = L{f (t)} exists for s > c. Then:
Mathematica code:
u[a_][t_] := Piecewise[{{1, t > a}}, 0]
Plot[u[1/2][t] Exp[-(t - 1/2)], {t, 0, 3}, PlotStyle -> {Thick}, AspectRatio -> 0.3, Filling -> Bottom]
Proof of (iii). Note that L{ua(t)f(t − a)} = ∫_a^∞ e^{−st}f(t − a) dt. If you shift right, you need to account for the fact that the function is considered to be equal to 0 for t < 0. Now compute:
L{ua(t)f(t − a)} = ∫_0^∞ e^{−st}ua(t)f(t − a) dt
  = ∫_a^∞ e^{−st}f(t − a) dt
  = ∫_0^∞ e^{−s(u+a)}f(u) du     (u = t − a, t = u + a, dt = du)
  = e^{−sa} ∫_0^∞ e^{−su}f(u) du
  = e^{−sa}F(s).
• L{e^{at} cos kt}. Since L{cos kt} = s/(s² + k²), this is
L{e^{at} cos kt} = (s − a)/((s − a)² + k²).
• L{e^{at}t^n}. Since L{t^n} = n!/s^{n+1}, this is
L{e^{at}t^n} = n!/(s − a)^{n+1}.
• L{(kt)^n}. Since L{t^n} = n!/s^{n+1}, a theorem gives L{f(at)} = (1/a)F(s/a), so
L{(kt)^n} = (1/k) · n!/(s/k)^{n+1} = (k^{n+1}/k) · n!/s^{n+1}   =⇒   L{(kt)^n} = k^n · n!/s^{n+1}.
Of course, there is a simpler way to get this last result. What is it?
Example 7.3.3. Find the equation of motion for the unforced but damped oscillation of a mass, spring, dashpot system with m = 1/2, c = 3, and k = 17, started from x(0) = 3, x′(0) = 1.
mx00 + cx0 + kx = 0
(1/2)x″ + 3x′ + 17x = 0   =⇒   x″ + 6x′ + 34x = 0
(s2 + 6s + 34)X(s) = 3s + 1 + 18
3s + 19
X(s) =
s2 + 6s + 34
7.3 Scaling and translation properties of the Laplace transform
Now s² + 6s + 34 = (s + 3 + 5i)(s + 3 − 5i), so the denominator doesn't factor over the reals/reduce any further. This means there is no need to use partial fractions.
Instead, complete the square:
s² + 6s + 34 = s² + 6s + (6/2)² − 9 + 34 = (s + 3)² + 25.
Notice that this is another way to see that it does not factor over the reals: (s + 3)2 + 25 ≥
25 > 0, so there is no real number s for which s2 + 6s + 34 = 0.
Now, first match the cosine term by getting s − a in the numerator. Then match
the sine term, by putting the rest in a separate term and leaving only k on top:
X(s) = (3s + 19)/((s + 3)² + 25) = (3(s + 3) + 10)/((s + 3)² + 25) = 3 · (s + 3)/((s + 3)² + 5²) + (10/5) · 5/((s + 3)² + 5²).
This last example shows that getting X(s) written in terms of the form L{cos(kt − a)}
and L{sin(kt − a)} can be a pain. Here is a codified recipe.
Method of solution
(2) Factor denominator into irreducible factors, using partial fractions as necessary.
(iii) Split into b · (cosine term) + (c/k) · (sine term).
Example 7.3.4. Suppose you needed to invert X(s) = (4s + 19)/((s + 3)² + 8). You still need to make an s + 3 term on top to match the cosine term, so
X(s) = (4(s + 3) + 7)/((s + 3)² + 8)
  = 4 · (s + 3)/((s + 3)² + 8) + 7 · 1/((s + 3)² + 8)
  = 4 · (s + 3)/((s + 3)² + (√8)²) + (7/√8) · √8/((s + 3)² + (√8)²)
x(t) = e^{−3t}(4 cos √8 t + (7/√8) sin √8 t).
The sine term only has a constant in the numerator, so you can always use it to “take up
the slack” like this. This is why you should take care of the cosine term first.
Example 7.3.5. Find the equation of motion for the oscillation of a mass, spring, dashpot
system with m = 12 , c = 3, and k = 17, started from x(0) = 0, x0 (0) = 0, and now acted
upon by the forcing function f (t) = 15 sin 2t.
Now partial fractions gives: 60 = (As + B)((s + 3)2 + 25) + (Cs + D)(s2 + 4) .
Use the following shortcut: substitute a root of either polynomial into this equation; it
will make one term vanish. For s = 2i, this equation reduces to
7.3.1 Resonance
Theorem 7.3.6. If F (s) exists for s > c, then L{tf (t)} = −F 0 (s) .
L{t^n e^{at}} = (−1)^n (d^n/ds^n) L{e^{at}} = (−1)^n (d^n/ds^n)(s − a)^{−1} = n!/(s − a)^{n+1}
L{t cos kt} = −(d/ds)L{cos kt} = −(d/ds)[s/(s² + k²)] = (s² − k²)/(s² + k²)²
L{t sin kt} = −(d/ds)L{sin kt} = −(d/ds)[k/(s² + k²)] = 2ks/(s² + k²)².
These formulas tell you how to invert L when there are repeated factors in the denominator.
We've seen that it is necessary to have something like constant/(s² + k²)². To this end,
L{sin kt − kt cos kt} = k/(s² + k²) − k(s² − k²)/(s² + k²)² = [k(s² + k²) − k(s² − k²)]/(s² + k²)² = 2k³/(s² + k²)²,
so L{sin kt − kt cos kt} = 2k³/(s² + k²)².
s²X(s) + θ²X(s) = Aω/(s² + ω²)   =⇒   X(s) = Aω/((s² + ω²)(s² + θ²)).
If ω ≠ θ, then X(s) = (Aω/(ω² − θ²)) · (1/(s² + θ²) − 1/(s² + ω²)), and
x(t) = (Aω/(ω² − θ²)) · ((1/θ) sin θt − (1/ω) sin ωt).
If ω = θ, then X(s) = Aω/(s² + ω²)², and
x(t) = (A/(2ω²))(sin ωt − ωt cos ωt).
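Writing the coefficient as A/(2ω²) (equivalently Aω/(2ω³)), the resonance solution can be checked directly against the ODE x″ + ω²x = A sin ωt with a finite-difference second derivative:

```python
import math

A, w = 3.0, 2.0
x = lambda t: A/(2*w*w) * (math.sin(w*t) - w*t*math.cos(w*t))

h = 1e-4
for t in (0.3, 1.0, 2.7, 5.0):
    xpp = (x(t + h) - 2*x(t) + x(t - h)) / h**2   # central second difference
    # x'' + w^2 x should equal the forcing A sin(wt):
    assert abs(xpp + w*w*x(t) - A*math.sin(w*t)) < 1e-4
```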
Resonance!
Example 7.3.9. y (4) + 2y 00 + y = 4tet , with y(0) = y 0 (0) = y 00 (0) = y 000 (0) = 0.
s⁴Y(s) + 2s²Y(s) + Y(s) = 4/(s − 1)²
(s⁴ + 2s² + 1)Y(s) = 4/(s − 1)²
Y(s) = 4/((s − 1)²(s² + 1)²) = 1/(s − 1)² − 2/(s − 1) + 2s/(s² + 1)² + (2s + 1)/(s² + 1)
y(t) = te^t − 2e^t + t sin t + 2 cos t + sin t.
HW §7.3 #36–40
(i) Commutativity: f ∗ g = g ∗ f .
(ii) Associativity: f ∗ (g ∗ h) = (f ∗ g) ∗ h.
(iii) Distributivity: f ∗ (g + h) = f ∗ g + f ∗ h.
However, f may not have a multiplicative inverse, that is, for a given f , you may not be
able to find a g such that f ∗ g = g ∗ f = δ.
There are also a couple of new properties.
(II) Magic (tensorial) property: if f, g are differentiable and f(0) = g(0) = 0, then
(d/dt)(f ∗ g) = ((d/dt)f) ∗ g = f ∗ ((d/dt)g).
Effectively, IBP allows you to move d/dt from one factor of the integrand to the other. Guess how you prove the magic property?
The magic property implies f 000 ∗ g = f 00 ∗ g 0 = f 0 ∗ g 00 = f ∗ g 000 , and so forth. It is a
smoothing operation: if f has m continuous derivatives and g has n continuous derivatives,
then f ∗ g has n + m continuous derivatives.
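Properties like commutativity are easy to see numerically, since here (f ∗ g)(t) = ∫_0^t f(τ)g(t − τ) dτ; a sketch with a pair of exponentials, whose convolution has a closed form:

```python
import math

def conv(f, g, t, n=4000):
    """(f*g)(t) = int_0^t f(tau) g(t - tau) d tau, trapezoid rule."""
    h = t / n
    total = 0.5 * (f(0.0)*g(t) + f(t)*g(0.0))
    for i in range(1, n):
        tau = i * h
        total += f(tau) * g(t - tau)
    return total * h

a, b = 0.7, -0.4
f = lambda t: math.exp(a*t)
g = lambda t: math.exp(b*t)
for t in (0.5, 1.0, 2.0):
    exact = (math.exp(a*t) - math.exp(b*t)) / (a - b)   # e^{at} * e^{bt}
    assert abs(conv(f, g, t) - exact) < 1e-6
    assert abs(conv(f, g, t) - conv(g, f, t)) < 1e-9    # commutativity
```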
Also, if L is a linear differential operator, then linearity and the magic property imply
(3) Invert each “factor” separately to obtain f = L−1 {F (s)} and g = L−1 {G(s)}, to get
L−1 {F (s)} ∗ L−1 {G(s)} = f ∗ g.
HW §7.4 #2, 10, 11, 13, 14 and prove that f 0 ∗ g = f ∗ g0 when both are
differentiable, and f (0) = g(0) = 0.
7.4.1 Distributions
Lx = 30 sin 2t,   where L = (1/2)(d²/dt²) + 3t²(d/dt) + 17.
for all “nice” functions ϕ. Note that x is not required to be a function here. Using the
inner product notation for functions, write this as
Definition 7.4.5. Let's define a function ϕ to be nice if it is smooth (you can differentiate it as many times as you like) and it is identically equal to 0 outside some bounded interval. Denote the set of nice functions by D, so that ϕ ∈ D means ϕ is nice.
(Trust me that it is continuous — that part takes a bit of work to show rigorously.)
δ is a distribution called the Dirac delta.
Note that if we treat δ as a function,
ϕ(0) = ⟨δ, ϕ⟩ = ∫_R δ(t)ϕ(t) dt.
Remember: for distributions that are not functions, it may not make sense to ask what
the value is at a particular point t.
Example 7.4.10. Consider the unit step function (or Heaviside function)
u(t) := { 1, t ≥ 0;  0, t < 0 }.
⟨u′, ϕ⟩ = −⟨u, ϕ′⟩
        = −∫_R u(t)ϕ′(t) dt
        = −∫_0^∞ ϕ′(t) dt
        = −[ϕ(t)]_0^∞
        = −(0 − ϕ(0))
        = ϕ(0)
        = ⟨δ, ϕ⟩.
This shows that u0 = δ, in the weak sense, i.e., that δ is the distributional derivative of u.
Theorem 7.4.11. L{δ} = 1 and L{δa } = e−as , where δa (t) := δ(t − a).
Theorem 7.4.12. The Dirac delta is the multiplicative identity for the operation ∗. In
other words, δ ∗ f = f , for any function f : R → R.
Definition 7.4.13. The fundamental solution for a linear operator L is the distribution ζ
(which may be a function) that satisfies
Lζ = δ.
7.4 Convolutions and distributions
=f ∗δ ζ is a fund. soln.
=f δ is identity for ∗.
HW
(1) Let f(t) = t₊ := { t, t ≥ 0;  0, t ≤ 0 }. Show that u is the distributional derivative of t₊.
f(t) = Σ_{k=0}^∞ c_k t^k,   c_k = f^{(k)}(0)/k!.
Think of this in the following way: f can be decomposed into a linear combination of
{1, t, t2 , t3 , . . . }, where the coefficients are given in terms of the derivatives of f .
Here, the functions {tk }k∈N play the role of a basis of a vector space (the vector space
is C ∞ (−r, r)), but it turns out that there are other bases which are better suited to solving
ODEs and PDEs, etc. It would be nice to have an orthonormal basis, so that the basis
vectors are completely independent.
Definition 7.4.14. The inner product of u, v ∈ R^n is u · v = Σ_{i=1}^n u_i v_i and gives the size of the component of v parallel to u.
Theorem 7.4.16. If {e_k}_{k=1}^n is an ONB, then one can decompose v in terms of {e_k} via
v = Σ_{k=1}^n (v · e_k)e_k.
For vector spaces comprised of functions, we need to generalize the definition a bit:
v = Σ_{k∈Z} ⟨v, e_k⟩e_k.
e_k := e^{2πikt}/√2 = (1/√2)(cos 2πkt + i sin 2πkt),
This procedure takes a function f : I → R and converts it into a sequence {ck }k∈Z .
(Diagram: a signal f(t) is converted into the sequence {c_k}_{k∈Z} = {⟨f, e_k⟩}_{k∈Z}.)
Instead of transmitting the entire signal, you can just transmit the sequence {ck }k∈Z .
⇒ ⇒ This is digitization. ⇐ ⇐
This is also the basis for a type of compression: since Σ_{k∈Z} ⟨f, e_k⟩e_k converges, you know ‖⟨f, e_k⟩e_k‖ = |⟨f, e_k⟩| = |c_k| → 0. After some point, adding more terms will only improve the approximation by an amount too small to detect. In MP3 compression, maybe CD-quality corresponds to {c_k}_{k=−256}^{256}, while radio-quality is maybe only {c_k}_{k=−128}^{128}.
There is nothing sacred about (−1, 1), so Fourier theory says: any function on a bounded
interval (a, b) (equivalently, any periodic function) can be approximated arbitrarily well by
its Fourier series.
What about f on an unbounded interval? In this case, {c_k}_{k∈Z} is no longer enough, so define
f̂(ξ) := ⟨f, e_ξ⟩ = ∫_R f(t)e^{−2πiξt} dt,   where e_ξ(t) := e^{2πitξ}.
Note: if k(t, ξ) is a function of t and ξ, then ∫ f(t)k(t, ξ) dt is a function of ξ.
So f can be recovered by inverting the Fourier transform. Also, the Fourier transform of f
can be recovered as the limit of the Fourier series of f , as the size of the interval grows.
Suppose f(t) = 0 outside (a, b). Then f(t) = 0 outside (−T/2, T/2) for large enough T, and
c_k = (1/T) ∫_{−T/2}^{T/2} f(t)e^{−2πi(k/T)t} dt = (1/T)f̂(k/T),   ξ_k = k/T,
∫ e^{−st}f(t) dt = ∫ ( Σ_{n=0}^∞ (−st)^n/n! ) f(t) dt
                 = Σ_{n=0}^∞ ((−1)^n s^n/n!) ∫ t^n f(t) dt
                 = Σ_{n=0}^∞ ((−1)^n/n!) M_f^{(n)}(0) s^n,
where M_f(t) = ∫ e^{tx}f(x) dx is the moment-generating function of f and M_f^{(n)}(0) = ∫ x^n f(x) dx.
HW
(1) Prove that {(1/√2)e^{2πikt}}_{k∈Z} is an orthonormal set:
⟨e^{2πint}/√2, e^{2πimt}/√2⟩ = { 1, n = m;  0, n ≠ m }.
Hint: first prove that ∫_{−1}^{1} e^{2πint} dt = 2 when n = 0, and equals 0 otherwise. Then show that ∫_{−1}^{1} e^{2πint}e^{−2πimt} dt = 2 when n = m, and equals 0 otherwise.
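Before proving this, you can convince yourself numerically; the trapezoid rule is essentially exact for these periodic integrands:

```python
import cmath, math

def inner(n, m, N=2000):
    """Trapezoid rule for int_{-1}^{1} e^{2 pi i n t} conj(e^{2 pi i m t}) dt."""
    f = lambda t: cmath.exp(2j*math.pi*(n - m)*t)
    h = 2.0 / N
    total = 0.5 * (f(-1.0) + f(1.0))
    for i in range(1, N):
        total += f(-1.0 + i*h)
    return total * h

for n in range(-2, 3):
    for m in range(-2, 3):
        expected = 2.0 if n == m else 0.0
        assert abs(inner(n, m) - expected) < 1e-9
```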
(2) If s ∈ R is a fixed constant, compute the improper integral ∫_0^∞ e^{−st} dt = lim_{b→∞} ∫_0^b e^{−st} dt. For what values of s does the integral exist? For what values of s does it not exist?
(3) The Gamma function is defined Γ(x) = ∫_0^∞ e^{−t}t^{x−1} dt. (i) Use IBP to show that Γ(x + 1) = xΓ(x). (ii) Show Γ(1) = 1 (hint: use (2) with s = 1). (iii) Explain why (i) & (ii) imply Γ(n + 1) = n! for any positive integer n.
(4) Let a be some fixed constant in (−1, ∞). Compute the improper integral ∫_0^∞ e^{−st}t^a dt. (Consider letting u = st, and write your answer in terms of Γ.)