
Mathematics MATH6502

(First Term)
James Burnett
Autumn 2010
Contents

4 Eigenvectors and Eigenvalues
  4.1 Eigenvalues and eigenvectors
      4.1.1 Background: polynomials
      4.1.2 Motivation: Google
      4.1.3 Definitions
      4.1.4 Commuting matrices
      4.1.5 Symmetric matrices
      4.1.6 Power method

5 Laplace Transforms
      5.0.7  Introduction
      5.0.8  Preparation for Laplace transforms: Improper integrals
      5.0.9  Definitions: Laplace transform
      5.0.10 Rules
      5.0.11 Common functions
      5.0.12 More complicated functions
      5.0.13 Discontinuous function example
      5.0.14 Inverting the Laplace transform
      5.0.15 Applications
      5.0.16 Systems of linear ODEs

6 Series
  6.1 Taylor series

7 Revision
  7.1 Complex Numbers
  7.2 Differential Equations
  7.3 Summary of Linear Systems
  7.4 Eigenvectors and Eigenvalues
  7.5 Laplace Transforms
  7.6 Taylor Series
[Reading week]
Chapter 4
Eigenvectors and Eigenvalues
4.1 Eigenvalues and eigenvectors
4.1.1 Background: polynomials
An nth-order polynomial is a function of the form

f(x) = c_0 + c_1 x + c_2 x^2 + ... + c_n x^n

with c_n ≠ 0.
If f(α) = 0 then we say x = α is a root of the equation f(x) = 0, and f(x) can be factored: f(x) = (x - α) q(x).
Every polynomial of order n (i.e. with highest power x^n) can be broken down into n factors:

c_0 + c_1 x + c_2 x^2 + ... + c_n x^n = c_n (x - α_1)(x - α_2)(x - α_3) ... (x - α_n)

although the numbers α_1, α_2, etc. don't have to be all different and may be complex.
If all the coefficients c_0, c_1, ..., c_n are real, then any complex roots appear in complex conjugate pairs: a + ib and a - ib.
There are two things we need to be able to do to find all the roots of a polynomial:

- find a root α; and
- given one root, find the new (smaller) polynomial q(x).
Finding roots
To find roots of the equation f(x) = 0:

- Guess easy values: 0, 1, -1, 2, -2 and so on. If f(a) = 0 then a is a root.
- If f(a) > 0 and f(b) < 0 then there is a root between a and b (the graph of f crosses the x-axis somewhere between them).
- Guess factors of c_0 (e.g. if c_0 = 6, try ±1, ±2, ±3, ±6).
Quadratics
You probably know the formula for quadratics, but there are nicer (and quicker) methods.
An easy one:
x^2 + 6x + 9 = (x + 3)^2

Now it follows that

x^2 + 6x + 13 = (x + 3)^2 + 4

so to solve x^2 + 6x + 13 = 0 we need

(x + 3)^2 + 4 = 0  ⇒  (x + 3)^2 = -4  ⇒  x + 3 = ±2i  ⇒  x = -3 ± 2i

so x^2 + 6x + 13 = (x + 3 - 2i)(x + 3 + 2i).

Similarly,

x^2 + 6x + 8 = (x + 3)^2 - 1,  so  x^2 + 6x + 8 = 0  ⇒  (x + 3)^2 = 1  ⇒  x + 3 = ±1  ⇒  x = -2 or -4.

This procedure:

x^2 + 2nx + c = (x + n)^2 + c - n^2

is called completing the square.
If the numbers work out, there is an even quicker method. Look at the general factored quadratic
(without a constant at the front):
(x - a)(x - b) = x^2 - (a + b)x + ab

Now, looking at a specific case, e.g.

x^2 + 4x + 3 = 0

we want to match the two: so we need

ab = 3,   a + b = -4.

The first one immediately suggests either 1 and 3 or -1 and -3; the second one tells us that -1 and -3 are the roots, so

x^2 + 4x + 3 = (x + 1)(x + 3).

Another example:

x^2 - x - 12 = 0.

Now the factors may be 1 and 12, 2 and 6, or 3 and 4 (with a minus sign somewhere). The sum (which needs to be the difference because of the minus sign) should be 1, so we quickly reach

roots x = 4 and x = -3,   x^2 - x - 12 = (x - 4)(x + 3).
Factorising polynomials
The long division method you learnt years ago for dividing large numbers works in a very similar way
for polynomials.
Example: f(x) = x^3 + 1. We spot a root x = -1, which means there is a factor of (x + 1), so

x^3 + 1 = (x + 1) q(x).

          x^2 -  x + 1
 x + 1 ) x^3 + 0x^2 + 0x + 1
         x^3 +  x^2
              - x^2 + 0x
              - x^2 -  x
                       x + 1
                       x + 1
                           0

At each stage we choose a multiple of our (x + 1) factor that matches the highest power in the remainder we are trying to get rid of; we write that multiplying factor above the line; we write the multiple below; then we subtract to get the next remainder. In this case q(x) = x^2 - x + 1, which has complex roots.
Example: f(x) = x^3 + 10x^2 + 31x + 30. This time we don't immediately spot a factor, so try a few values. Note first that if x > 0 all the terms are positive so f(x) > 0; so all the roots must be negative.

x = 0:   f(0) = 30
x = -1:  f(-1) = -1 + 10 - 31 + 30 = 8
x = -2:  f(-2) = -8 + 40 - 62 + 30 = 0

so we can write f(x) = (x + 2) q(x).

          x^2 +  8x + 15
 x + 2 ) x^3 + 10x^2 + 31x + 30
         x^3 +  2x^2
                8x^2 + 31x
                8x^2 + 16x
                       15x + 30
                       15x + 30
                             0

In this case we have q(x) = x^2 + 8x + 15 = (x + 3)(x + 5), so f(x) = (x + 2)(x + 3)(x + 5).
There is another method: simply write down the most general form for q(x) and then work out the coefficients of each power of x.

Example: f(x) = x^3 + x^2 + x + 1. Spot a root x = -1. Then write

f(x) = (x + 1)(a x^2 + b x + c) = a x^3 + b x^2 + c x + a x^2 + b x + c

so we have these equations for the coefficients of the powers of x:

x^3:  a = 1
x^2:  a + b = 1  ⇒  b = 0
x:    b + c = 1  ⇒  c = 1
1:    c = 1, which agrees.

Thus q(x) = x^2 + 1 and f(x) = (x + 1)(x + i)(x - i).
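If all you need is numerical values for the roots, a computer will do this directly. Here is a short sketch using Python's numpy library (the choice of tool is mine; the notes don't assume any software):

    import numpy as np

    # Coefficients of f(x) = x^3 + 10x^2 + 31x + 30, highest power first.
    print(np.roots([1, 10, 31, 30]))   # approximately [-5. -3. -2.]

    # f(x) = x^3 + x^2 + x + 1 has a complex-conjugate pair of roots:
    print(np.roots([1, 1, 1, 1]))      # approximately [-1.,  1j, -1j]

Internally np.roots finds the eigenvalues of the companion matrix of the polynomial, which is the reverse of the connection we are about to exploit.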
4.1.2 Motivation: Google
How does Google rank the many pages they know about containing the words you search for? Let's look at a model miniature web network:

[diagram: five pages a, b, c, d, e with links between them]

What we want to do is start at a random page on the web and keep clicking random links. After a long time, we can look to see how much time was spent at each page. This gives the Google PageRank.
In terms of matrices, if we start 5 browsers on the 5 pages, we can write that as a vector:

v_1 = (1, 1, 1, 1, 1)^T
Then after one step, the browser from a has a 1/4 chance of being on each of the other pages; the browser from b has gone to c; and so on:

             [ 0    0   1/2  0  0 ] [ 1 ]   [ 1/2  ]
             [ 1/4  0   0    0  0 ] [ 1 ]   [ 1/4  ]
v_2 = A v_1 = [ 1/4  1   0    1  1 ] [ 1 ] = [ 13/4 ]
             [ 1/4  0   1/2  0  0 ] [ 1 ]   [ 3/4  ]
             [ 1/4  0   0    0  0 ] [ 1 ]   [ 1/4  ]
The same process tells us how many browsers will be where after 2 clicks, 3 clicks, and so on.
After many clicks, we look to see how many browsers (on average) are on each webpage; this
determines how Google ranks them.
Look at how the numbers behave in this case:
[graph: average number of browsers on each page (vertical axis, 0 to 3.5) against number of clicks (horizontal axis, 0 to 25)]
The top line represents page c; after many clicks the average number of browsers visiting page c tends to 2.1052. The vector v_n representing how many browsers are at each page tends to the constant value:

v_n → (1.0526, 0.2632, 2.1052, 1.3158, 0.2632)^T
(The reason you can only see 4 lines is that pages b and e behave identically; they are both the bottom curve.)

This special vector v that we end up with doesn't change when we multiply it by A, so

Av = v.

This is an eigenvector of the matrix A (with eigenvalue 1).
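As a quick check of the numbers above, here is a sketch of the same experiment in Python with numpy (the library choice is an assumption, not part of the notes); the matrix is the link matrix A written out above:

    import numpy as np

    # Column j gives the probabilities of moving from page j to each page.
    A = np.array([
        [0,    0, 0.5, 0, 0],
        [0.25, 0, 0,   0, 0],
        [0.25, 1, 0,   1, 1],
        [0.25, 0, 0.5, 0, 0],
        [0.25, 0, 0,   0, 0],
    ])

    v = np.ones(5)          # one browser starting on each page
    for _ in range(50):     # 50 "clicks"
        v = A @ v

    print(np.round(v, 4))   # tends to [1.0526 0.2632 2.1052 1.3158 0.2632]

Because every column of A sums to 1, the total number of browsers is conserved at each step, so no renormalisation is needed.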
4.1.3 Definitions
For a square matrix A, if

Av = λv

with v ≠ 0, then v is an eigenvector of A with eigenvalue λ.
When is this possible?
Av = λI v  ⇔  (A - λI) v = 0

This is just an ordinary homogeneous linear system. Remember, if the determinant of the matrix on the left is non-zero then there is a unique solution v = 0; for a nontrivial solution we need

det(A - λI) = 0.

Suppose this determinant is zero; then our equation will have an infinite number of solutions: v and any multiple of it.

The determinant det(A - λI) is a polynomial in λ of degree n, so there are at most n different eigenvalues. A useful fact (not proved here) is det(A) = λ_1 λ_2 ... λ_n.
Example
A = [  5   2 ]
    [ -9  -6 ]

|A - λI| = | 5-λ    2   |
           | -9   -6-λ  |  = (5 - λ)(-6 - λ) - (2)(-9) = (-30 + λ + λ^2) + 18 = λ^2 + λ - 12 = (λ + 4)(λ - 3)

so the matrix has eigenvalues λ_1 = 3, λ_2 = -4.

λ_1 = 3:

(A - λ_1 I)v_1 = 0 with v_1 = (a, b)^T:

[  2   2 ] [ a ]   [ 0 ]
[ -9  -9 ] [ b ] = [ 0 ]

a + b = 0  ⇒  a = -b  ⇒  v_1 = (1, -1)^T (or any multiple of this).

Check:

A v_1 = [  5   2 ] [  1 ]   [  3 ]
        [ -9  -6 ] [ -1 ] = [ -3 ]  = 3 (1, -1)^T = λ_1 v_1.

λ_2 = -4:

(A - λ_2 I)v_2 = 0 with v_2 = (a, b)^T:

[  9   2 ] [ a ]   [ 0 ]
[ -9  -2 ] [ b ] = [ 0 ]

9a + 2b = 0  ⇒  9a = -2b  ⇒  v_2 = (2, -9)^T (or any multiple of this).

Check:

A v_2 = [  5   2 ] [  2 ]   [ -8 ]
        [ -9  -6 ] [ -9 ] = [ 36 ]  = -4 (2, -9)^T = λ_2 v_2.
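For anything bigger than a 2 x 2 matrix you would normally let a library do this work. A minimal numpy sketch (the tool is my choice; the notes don't prescribe software), applied to the matrix above:

    import numpy as np

    A = np.array([[5, 2], [-9, -6]])

    # eig returns the eigenvalues and unit-length eigenvectors (as columns).
    vals, vecs = np.linalg.eig(A)
    print(vals)    # 3. and -4. (possibly in a different order)
    print(vecs)    # columns are scalar multiples of (1, -1) and (2, -9)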
Another example

A = [ 1  -1 ]
    [ 2  -1 ]

Again, we look for eigenvalues using the determinant:

|A - λI| = | 1-λ   -1  |
           |  2   -1-λ |  = (1 - λ)(-1 - λ) + 2 = λ^2 + 1

so the two roots of this equation are λ = ±i.
Eigenvector and eigenvalue properties
An eigenvalue and eigenvector pair satisfy

Av = λv and v ≠ 0.

- λ is allowed to be zero.
- λ is allowed to be complex: but if a + ib is an eigenvalue so is a - ib, and the eigenvectors corresponding to these eigenvalues will also be a complex conjugate pair (as long as A is real).
- The same eigenvalue may appear more than once.
- Eigenvectors corresponding to different eigenvalues are linearly independent.
Example
A = [  0  -1  -3 ]
    [  2   3   3 ]
    [ -2   1   1 ]

First we need to solve the determinant equation det(A - λI) = 0:

0 = | -λ   -1   -3  |
    |  2   3-λ   3  |
    | -2    1   1-λ |

  = (-λ) | 3-λ   3  |  - (-1) |  2    3  |  + (-3) |  2   3-λ |
         |  1   1-λ |         | -2   1-λ |         | -2    1  |

  = (-λ){(3 - λ)(1 - λ) - 3} + {2(1 - λ) + 6} - 3{2 + 2(3 - λ)}
  = (-λ){λ^2 - 4λ} + {8 - 2λ} - {24 - 6λ}
  = -λ^3 + 4λ^2 + 4λ - 16 = -(λ - 2)(λ^2 - 2λ - 8) = -(λ - 2)(λ + 2)(λ - 4)

This is zero if λ = 2, λ = -2 or λ = 4, so these are the three eigenvalues.
Now for each eigenvector in turn, we solve the equation (A - λI)v = 0.

First eigenvector: λ_1 = 2.

(A - 2I)v_1 = 0:

[ -2  -1  -3 ] [ a ]   [ 0 ]
[  2   1   3 ] [ b ] = [ 0 ]
[ -2   1  -1 ] [ c ]   [ 0 ]

We carry out a single Gaussian elimination step here (r_2 → r_2 + r_1 and r_3 → r_3 - r_1):

[ -2  -1  -3 ] [ a ]   [ 0 ]
[  0   0   0 ] [ b ] = [ 0 ]
[  0   2   2 ] [ c ]   [ 0 ]

Now we can see that there is a zero row (which we expected because we made the determinant zero) and two equations to solve by back substitution. When we circle the leading term in each row there is no circle in the third column, so we rename c = α. Then

r_3:  2b + 2c = 0  ⇒  b = -α
r_1:  -2a - b - 3c = 0  ⇒  a = -α

and so the general eigenvector is

v_1 = (-α, -α, α)^T

and we can use any multiple of it: v_1 = (1, 1, -1)^T.
Second eigenvector: λ_2 = -2.

(A + 2I)v_2 = 0:

[  2  -1  -3 ] [ d ]   [ 0 ]
[  2   5   3 ] [ e ] = [ 0 ]
[ -2   1   3 ] [ f ]   [ 0 ]

Again, we eliminate: r_2 → r_2 - r_1 and r_3 → r_3 + r_1:

[ 2  -1  -3 ] [ d ]   [ 0 ]
[ 0   6   6 ] [ e ] = [ 0 ]
[ 0   0   0 ] [ f ]   [ 0 ]

Just like last time, when we circle the leading elements in the rows there is no circle in column 3, so we can set f = α and back substitute:

r_2:  6e + 6f = 0  ⇒  e = -α
r_1:  2d - e - 3f = 0  ⇒  d = α

which gives us the solution

(d, e, f)^T = (α, -α, α)^T,    v_2 = (1, -1, 1)^T.
Third eigenvector: λ_3 = 4.

(A - 4I)v_3 = 0:

[ -4  -1  -3 ] [ a ]   [ 0 ]
[  2  -1   3 ] [ b ] = [ 0 ]
[ -2   1  -3 ] [ c ]   [ 0 ]

This time it makes sense to use non-standard Gaussian elimination: r_3 → r_3 + r_2 followed by r_2 → 2r_2 + r_1:

[ -4  -1  -3 ] [ a ]   [ 0 ]
[  0  -3   3 ] [ b ] = [ 0 ]
[  0   0   0 ] [ c ]   [ 0 ]

and, again, we can set c = α:

r_2:  -3b + 3c = 0  ⇒  b = α
r_1:  -4a - b - 3c = 0  ⇒  a = -α

(a, b, c)^T = (-α, α, α)^T,    v_3 = (-1, 1, 1)^T.
Examples with repeated eigenvalues
A = [ 2  0  0 ]
    [ 0  2  0 ]
    [ 0  0  1 ]

Since this is a diagonal matrix, its eigenvalues are just the values on the diagonal (check with the full determinant if you're not convinced). In this case we have λ = 1, λ = 2 and λ = 2 again. The eigenvector for λ = 1 is perfectly normal:

(A - I)v_1 = 0:

[ 1  0  0 ] [ a ]   [ 0 ]
[ 0  1  0 ] [ b ] = [ 0 ]        ⇒   v_1 = (0, 0, 1)^T.
[ 0  0  0 ] [ c ]   [ 0 ]
When we come to solve with λ = 2, though, the eigenvector is not uniquely defined:

(A - 2I)v_2 = 0:

[ 0  0   0 ] [ d ]   [ 0 ]
[ 0  0   0 ] [ e ] = [ 0 ]
[ 0  0  -1 ] [ f ]   [ 0 ]

which only tells us that f = 0 and doesn't fix d and e. In fact in this case we can choose any two different pairs d and e as our two eigenvectors for the repeated eigenvalue λ = 2, e.g.

v_2 = (1, 0, 0)^T, v_3 = (0, 1, 0)^T    or    v_2 = (1, 1, 0)^T, v_3 = (1, -1, 0)^T

and in fact any linear combination αv_2 + βv_3 will also be an eigenvector of A with eigenvalue 2.
This doesn't always work, though. Here is another example:

A = [ 2  1  0 ]
    [ 0  2  0 ]
    [ 0  0  1 ].

Just like last time, the eigenvalues are λ_1 = 1, λ_2 = λ_3 = 2.
Look for the eigenvector of λ_1 = 1:

(A - λ_1 I)v_1 = 0:

[ 1  1  0 ] [ a ]   [ 0 ]
[ 0  1  0 ] [ b ] = [ 0 ]        ⇒   v_1 = (0, 0, 1)^T.
[ 0  0  0 ] [ c ]   [ 0 ]
Now the eigenvector(s) of λ_2 = 2:

(A - λ_2 I)v_2 = 0:

[ 0  1   0 ] [ a ]   [ 0 ]
[ 0  0   0 ] [ b ] = [ 0 ]
[ 0  0  -1 ] [ c ]   [ 0 ].

Again, there is just one solution:

v_2 = (1, 0, 0)^T.
In this case there are only two eigenvectors.

Non-examinable: In this situation, where a repeated eigenvalue has only one eigenvector v, we can find further linearly independent generalised eigenvectors by considering, for example, (A - λI)^2 w = 0.
Review of eigenvalues and eigenvectors

- An eigenvalue and eigenvector pair satisfy Av = λv and v ≠ 0.
- We find the eigenvalues by solving the polynomial det(A - λI) = 0 and then find each eigenvector by solving the linear system (A - λI)v = 0.
- λ is allowed to be zero or complex.
- The same eigenvalue may appear more than once; if it does, we may have a choice of eigenvectors or a missing one.
- The product of the eigenvalues is the determinant.
- The sum of the eigenvalues is the trace of the matrix: the sum of the elements on the leading diagonal.
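The last two facts are easy to check numerically. A small sketch with numpy (my choice of tool), using the 3 x 3 matrix from the example earlier in this section:

    import numpy as np

    A = np.array([[ 0, -1, -3],
                  [ 2,  3,  3],
                  [-2,  1,  1]])

    vals = np.linalg.eigvals(A)
    print(np.sort(vals))                    # [-2.  2.  4.]
    print(np.prod(vals), np.linalg.det(A))  # both -16 (up to rounding)
    print(np.sum(vals), np.trace(A))        # both 4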
4.1.4 Commuting matrices
A pair of matrices A and B are said to commute if AB = BA. If two n × n matrices commute, and they both have n distinct eigenvalues, then they have the same eigenvectors.

Proof

Look at an eigenvector of A. We know that Av = λv. Now let u = Bv. Then

Au = ABv = BAv = Bλv = λBv = λu

We've shown

Au = λu

which means u is an eigenvector of A with eigenvalue λ. Since the eigenvector of A corresponding to λ was v, it follows that u must be a multiple of v:

u = μv  ⇒  Bv = μv

so v is an eigenvector of B with eigenvalue μ.
Example

A = [ 3  -2 ]       B = [  1  2 ]
    [ 1   0 ]           [ -1  4 ]

These commute:

AB = [ 3  -2 ] [  1  2 ]   [ 5  -2 ]        BA = [  1  2 ] [ 3  -2 ]   [ 5  -2 ]
     [ 1   0 ] [ -1  4 ] = [ 1   2 ]             [ -1  4 ] [ 1   0 ] = [ 1   2 ]
Let's look at A first. The eigenvalues:

det(A - λI) = | 3-λ  -2 |
              |  1   -λ |  = (3 - λ)(-λ) + 2 = λ^2 - 3λ + 2 = (λ - 1)(λ - 2).

The eigenvalues are λ_1 = 1 and λ_2 = 2.
First eigenvector: λ_1 = 1.

(A - I)v_1 = [ 2  -2 ] [ a ]   [ 0 ]
             [ 1  -1 ] [ b ] = [ 0 ]      ⇒   a - b = 0   ⇒   v_1 = (1, 1)^T.

Second eigenvector: λ_2 = 2.

(A - 2I)v_2 = [ 1  -2 ] [ c ]   [ 0 ]
              [ 1  -2 ] [ d ] = [ 0 ]     ⇒   c - 2d = 0  ⇒   v_2 = (2, 1)^T.
Now we see what effect B has on these eigenvectors:

B v_1 = [  1  2 ] [ 1 ]   [ 3 ]
        [ -1  4 ] [ 1 ] = [ 3 ]  = 3 v_1

B v_2 = [  1  2 ] [ 2 ]   [ 4 ]
        [ -1  4 ] [ 1 ] = [ 2 ]  = 2 v_2.

So v_1 and v_2 are also eigenvectors of B, with eigenvalues 3 and 2 respectively.
Note

Because A commutes with itself, it follows that A and A^2 (and all other powers of A) have the same eigenvectors.
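A quick numerical check of this example, again with numpy (the library is my choice, not the notes'):

    import numpy as np

    A = np.array([[3, -2], [1, 0]])
    B = np.array([[1, 2], [-1, 4]])

    print(np.allclose(A @ B, B @ A))   # True: A and B commute

    v1 = np.array([1, 1])
    v2 = np.array([2, 1])
    print(B @ v1, B @ v2)              # [3 3] and [4 2], i.e. 3*v1 and 2*v2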
4.1.5 Symmetric matrices
If our real matrix A is symmetric then all its eigenvalues are real.

The eigenvectors corresponding to any two different eigenvalues will be orthogonal, i.e. v_i · v_j = 0 (or v_i^T v_j = 0) if λ_i ≠ λ_j.

For equal eigenvalues, there will still be a complete set of eigenvectors, and we can choose the constants in the eigenvectors for the repeated eigenvalue so that all the eigenvectors are orthogonal.
Example

A = [ 0  1  1 ]
    [ 1  0  1 ]
    [ 1  1  0 ]

First we look for the eigenvalues:

det(A - λI) = | -λ   1   1  |
              |  1  -λ   1  |
              |  1   1  -λ  |

  = (-λ) | -λ  1 |  - (1) | 1   1 |  + (1) | 1  -λ |
         |  1 -λ |        | 1  -λ |        | 1   1 |

  = (-λ){λ^2 - 1} - {-λ - 1} + {1 + λ} = -λ^3 + λ + λ + 1 + 1 + λ = -λ^3 + 3λ + 2

We can spot the root λ = -1, and factorise:

-λ^3 + 3λ + 2 = -(λ^3 - 3λ - 2) = -(λ + 1)(λ^2 - λ - 2) = -(λ + 1)(λ + 1)(λ - 2)

so we have the eigenvalue λ = -1 twice, and the eigenvalue λ = 2.
We'll find the straightforward one first:

(A - 2I)v_1 = [ -2   1   1 ] [ a ]   [ 0 ]
              [  1  -2   1 ] [ b ] = [ 0 ]
              [  1   1  -2 ] [ c ]   [ 0 ]

Gaussian elimination: r_2 → 2r_2 + r_1; r_3 → 2r_3 + r_1:

[ -2   1   1 ] [ a ]   [ 0 ]
[  0  -3   3 ] [ b ] = [ 0 ]
[  0   3  -3 ] [ c ]   [ 0 ]

Now the last row is clearly a multiple of r_2, so we can set c = α and back-substitute to get b = α and a = α:

v_1 = (1, 1, 1)^T.
Now onto the repeated root:

(A + I)v_2 = [ 1  1  1 ] [ d ]   [ 0 ]
             [ 1  1  1 ] [ e ] = [ 0 ]
             [ 1  1  1 ] [ f ]   [ 0 ]

All three rows are the same, so Gaussian elimination would just leave us with the single equation d + e + f = 0. Note that even at this stage we can see any vector we choose will be orthogonal to v_1:

v_1^T v_2 = (1  1  1)(d, e, f)^T = d + e + f = 0.
Now we want to choose two vectors satisfying d + e + f = 0 which are orthogonal to one another.

Choosing the first one is easy: you have a free choice. Keep it as simple as possible; use zeros where you can. The obvious choice is probably

v_2 = (1, 0, -1)^T.
Now for the second, we need to choose a new vector v_3 = (d', e', f')^T that satisfies two things:

d' + e' + f' = 0      and      v_2^T v_3 = (1  0  -1)(d', e', f')^T = d' - f' = 0.
Now we have two linear equations in d', e' and f':

[ 1  1   1 ] [ d' ]   [ 0 ]
[ 1  0  -1 ] [ e' ] = [ 0 ]
[ 0  0   0 ] [ f' ]   [ 0 ]

and we can solve this system by Gaussian elimination and back substitution in the usual way. r_2 → r_2 - r_1:

[ 1   1   1 ] [ d' ]   [ 0 ]
[ 0  -1  -2 ] [ e' ] = [ 0 ]
[ 0   0   0 ] [ f' ]   [ 0 ]

f' = α;   -e' - 2f' = 0, so e' = -2α;   d' + e' + f' = 0, so d' = α:

v_3 = (1, -2, 1)^T.
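For symmetric matrices, numerical libraries have a dedicated routine that returns real eigenvalues and an orthonormal set of eigenvectors. A sketch with numpy's eigh applied to the matrix above (the library is an assumption of mine):

    import numpy as np

    A = np.array([[0, 1, 1],
                  [1, 0, 1],
                  [1, 1, 0]])

    # eigh is for symmetric (Hermitian) matrices: real eigenvalues,
    # orthonormal eigenvectors as the columns of V.
    vals, V = np.linalg.eigh(A)
    print(vals)                    # [-1. -1.  2.]
    print(np.round(V.T @ V, 10))   # identity matrix: the eigenvectors are orthogonal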
4.1.6 Power method
Iterative methods

Remember Newton's method for finding a root of the equation f(x) = 0:

x_{n+1} = x_n - f(x_n)/f'(x_n).

This is an iterative method: we start with an initial guess x_0 and keep applying the rule until we get a root. Although it isn't guaranteed to work, if we start close enough to a root we will end up there.
Why do we need iterative methods?

We usually find eigenvalues by solving the characteristic equation of our n × n matrix A:

det(A - λI) = 0.

When n is large, this is a huge polynomial and it isn't easy to find the roots. If we really need all the roots, then using something like Newton's method may be the best way; but very often in engineering problems all we need is the largest eigenvalue (e.g. the dominant mode of vibration will produce the largest amplitudes).
Using eigenvectors as a basis
If we have a set of n linearly independent vectors in n-dimensional space then we can say that they form a basis of the space, and any n-dimensional vector z can be written as a linear combination of them:

z = α_1 v_1 + α_2 v_2 + ... + α_n v_n

for some constants α_1, α_2, ..., α_n.

In our case, if our matrix has all real eigenvalues and a complete set of eigenvectors (as, for example, if it is symmetric), then we can use the eigenvectors of our matrix as the basis, so that we know how the vectors change when we multiply by A:

Az = α_1 λ_1 v_1 + α_2 λ_2 v_2 + ... + α_n λ_n v_n.
A^2 z = α_1 λ_1^2 v_1 + α_2 λ_2^2 v_2 + ... + α_n λ_n^2 v_n.

We need just one more assumption to make our method work: the eigenvalue with the largest magnitude must be strictly larger than all the others: |λ_1| > |λ_2| ≥ |λ_3| ≥ |λ_4| ≥ ...

In this case, after many multiplications the first term will be the largest:

A^k z ≈ α_1 λ_1^k v_1.
Definition of the Power method

The basic method is: choose an initial guess for the eigenvector and call it z_0, then iteratively carry out this process:

    for k = 1, 2, ...
        find z_k = A z_{k-1}  (= A^k z_0)
        form c_k = z_0^T z_k
        form λ_k = c_k / c_{k-1}   (not for k = 1)
        then λ_k → λ_1
    next k

In practice it is best to normalise the vector A^k z_0 so that it has 1 as its largest element, which gives this version instead:

    for k = 1, 2, ...
        find y_k = A z_{k-1}
        let λ_k = maximum element of y_k
        form z_k = (1/λ_k) y_k
        then λ_k → λ_1 and z_k → v_1
    next k

This process of normalisation helps prevent errors from growing. We can also extract an estimate of the eigenvector at the end. This is the essence of the PageRank algorithm (except that the sort of matrix we used there always has largest eigenvalue 1, so (a) we don't need to renormalise and (b) we want the eigenvector, not the eigenvalue).
Example

Let's look at a nice, small 2 × 2 matrix and use as our starting guess a simple vector with only one non-zero element:

A = [ 4  1 ]        z_0 = [ 1 ]
    [ 2  3 ]              [ 0 ]

We'll use the normalising method, and work to 2 decimal places.
k = 1:   y_1 = [ 4  1 ] [ 1 ]   [ 4 ]
               [ 2  3 ] [ 0 ] = [ 2 ]        λ_1 = 4,     z_1 = (1, 0.5)^T

k = 2:   y_2 = A z_1 = (4.5, 3.5)^T          λ_2 = 4.5,   z_2 = (1, 0.78)^T

k = 3:   y_3 = A z_2 = (4.78, 4.33)^T        λ_3 = 4.78,  z_3 = (1, 0.91)^T

k = 4:   y_4 = A z_3 = (4.91, 4.72)^T        λ_4 = 4.91,  z_4 = (1, 0.96)^T

k = 5:   y_5 = A z_4 = (4.96, 4.88)^T        λ_5 = 4.96,  z_5 = (1, 0.98)^T

k = 6:   y_6 = A z_5 = (4.98, 4.95)^T        λ_6 = 4.98,  z_6 = (1, 0.99)^T

and we can see that λ_k → 5 and z_k → (1, 1)^T as the process continues.
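Here is a short Python/numpy sketch of the normalised power method above (the code and the function name power_method are my additions, not part of the notes); it assumes, as the notes do, that the dominant eigenvalue is real, positive and strictly largest in magnitude:

    import numpy as np

    def power_method(A, z0, iterations=20):
        # Normalised power iteration: returns (eigenvalue, eigenvector) estimates.
        z = np.array(z0, dtype=float)
        lam = None
        for _ in range(iterations):
            y = A @ z
            lam = y.max()      # normalise by the largest element, as in the notes
            z = y / lam
        return lam, z

    A = np.array([[4.0, 1.0], [2.0, 3.0]])
    lam, v = power_method(A, [1.0, 0.0])
    print(lam, v)              # approaches 5.0 and [1. 1.]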
Further examples if needed

A = [ 3  2 ]      B = [ 10  18 ]      C = [ 1  2 ]
    [ 1  0 ]          [  6  11 ]          [ 2  5 ]
Chapter 5
Laplace Transforms
5.0.7 Introduction
The Laplace transform, like the Fourier series, gives us a method of tackling differential equation problems we couldn't otherwise solve.

    Differential equation (HARD)  ------------------->  Solution of differential equation
            |                                                          ^
            |  Laplace transform                                       |  Inverse Laplace transform
            v                                                          |
    Algebraic equation (EASY)     ------  Solve  ---->  Solution of easier equation
5.0.8 Preparation for Laplace transforms: Improper integrals
Let us look at this integral:

I = ∫_0^∞ e^{-st} dt

It has infinity as its top limit, which means it is called an improper integral. Strictly what we mean by this is

I = lim_{M→∞} ∫_0^M e^{-st} dt = lim_{M→∞} [ -e^{-st}/s ]_{t=0}^{M} = lim_{M→∞} ( -e^{-sM}/s + 1/s ) = 1/s - lim_{M→∞} ( e^{-sM}/s )

Now as M → ∞, e^{-sM} → 0 as long as s > 0. If s = 0 the integral is not defined because of the 1/s (and in fact we're integrating 1 from 0 to ∞, which is infinite); if s < 0 then e^{-sM} grows as M → ∞, so I does not converge.
So, provided s > 0,

I = 1/s.
Now if we let s vary then I is a function of s.
5.0.9 Definitions: Laplace transform

The Laplace transform of a function f(t) defined on 0 < t < ∞ is given by

L{f(t)} = ∫_0^∞ e^{-st} f(t) dt = F(s)

We take a known function f of the original variable t and create the Laplace transform L{f(t)}, which is a new function F(s) of the transform variable s.

Notation

L{w(t)} = W(s)      L{x(t)} = X(s)

As we saw in the very simple example above, there is often a restriction on s (usually s > 0, but sometimes even larger) so that the integral exists.

Before we start finding our first few Laplace transforms, we will devise a set of rules, based on the definition, which will help us to find the Laplace transform of complicated functions.
5.0.10 Rules
Throughout these rules, we will assume that we know one Laplace transform already:

L{f(t)} = F(s) = ∫_0^∞ e^{-st} f(t) dt.
Rule 1: Multiplying by a constant

If we multiply our function by a constant, a:

L{a f(t)} = ∫_0^∞ e^{-st} a f(t) dt = a ∫_0^∞ e^{-st} f(t) dt = a F(s)
Rule 2: Adding functions together

If we have two answers already:

L{f_1(t)} = F_1(s) = ∫_0^∞ e^{-st} f_1(t) dt    and    L{f_2(t)} = F_2(s) = ∫_0^∞ e^{-st} f_2(t) dt

then:

L{a f_1(t) + b f_2(t)} = ∫_0^∞ e^{-st} {a f_1(t) + b f_2(t)} dt = a ∫_0^∞ e^{-st} f_1(t) dt + b ∫_0^∞ e^{-st} f_2(t) dt

L{a f_1(t) + b f_2(t)} = a F_1(s) + b F_2(s)
Rule 3: Multiplying by e^{at}

We will leave the proof of this one for homework:

L{e^{at} f(t)} = F(s - a).
Rule 4: Differentiation

We'll use the notation f'(t) = df/dt. What is the Laplace transform of the derivative?

L{f'(t)} = ∫_0^∞ e^{-st} (df/dt) dt

We can integrate by parts with

u = e^{-st},   du/dt = -s e^{-st},   dv/dt = df/dt,   v = f(t):

L{f'(t)} = [ e^{-st} f(t) ]_{t=0}^{∞} - ∫_0^∞ (-s) e^{-st} f(t) dt = {0 - f(0)} + s ∫_0^∞ e^{-st} f(t) dt

L{f'(t)} = s F(s) - f(0)
Rule 5: Second derivative

Again, the proof is left for homework:

L{f''(t)} = s^2 F(s) - s f(0) - f'(0)
Rule 6: Multiply by t

L{t f(t)} = ∫_0^∞ e^{-st} (t f(t)) dt = ∫_0^∞ -(d/ds)( e^{-st} f(t) ) dt = -(d/ds) ∫_0^∞ e^{-st} f(t) dt

L{t f(t)} = -(d/ds) F(s)
Rule 7: Multiply by t^2

Again, the proof is left for homework:

L{t^2 f(t)} = (d^2/ds^2) F(s).
5.0.11 Common functions
Result 1: f(t) = 1. We've seen this one, when we looked at improper integrals:

L{1} = ∫_0^∞ e^{-st} · 1 dt = 1/s
Result 2: f(t) = t. We will carry out the improper integral carefully:

L{t} = ∫_0^∞ e^{-st} t dt = lim_{M→∞} ∫_0^M t e^{-st} dt

We integrate by parts with

u = t,   du/dt = 1,   dv/dt = e^{-st},   v = -e^{-st}/s:
L{t} = lim_{M→∞} ( [ -t e^{-st}/s ]_{t=0}^{M} - ∫_0^M (-e^{-st}/s) dt )
     = lim_{M→∞} ( 0 - M e^{-sM}/s + (1/s) ∫_0^M e^{-st} dt )
     = -lim_{M→∞} ( M e^{-sM}/s )  +  (1/s) lim_{M→∞} ∫_0^M e^{-st} dt

The first limit is 0: since s > 0, e^{-sM} → 0 faster than M → ∞. The second limit is 1/s from the previous result, so

L{t} = 1/s^2
Result 3: f(t) = t^n. In this case we will state the result and then prove it by induction.

L{t^n} = n!/s^{n+1}

First we check the base case n = 1:

f(t) = t   ⇒   F(s) = 1/s^2 = 1!/s^{1+1}  ✓

Now we assume it is true for n ≤ k and try to prove it for n = k + 1. If we can, then we're done.

f(t) = t^{k+1}   ⇒   F(s) = ∫_0^∞ e^{-st} t^{k+1} dt

Now we can use the multiply-by-t rule and the multiply-by-a-constant rule to say

L{t^{k+1}} = L{(t)(t^k)} = -(d/ds) L{t^k}

and then use our assumption (for n = k) to give

L{t^{k+1}} = -(d/ds)( k!/s^{k+1} ) = -(d/ds)( k! s^{-(k+1)} ) = -( k! [-(k+1)] s^{-(k+1)-1} ) = (k+1) k! s^{-(k+2)}

L{t^{k+1}} = (k+1)!/s^{(k+1)+1}

which is the right form for n = k + 1, so the formula is true for all n ≥ 1.
Result 4: f(t) = e^{at} with a a constant.

F(s) = L{e^{at}} = ∫_0^∞ e^{-st} e^{at} dt = ∫_0^∞ e^{-(s-a)t} dt = lim_{M→∞} [ -e^{-(s-a)t}/(s-a) ]_{t=0}^{M} = -(0 - 1)/(s-a)

L{e^{at}} = 1/(s - a).

Note in this case we need s > a for the integral to converge.

We could have proved this more easily using Rule 3 and Result 1:

L{e^{at} f(t)} = F(s - a)  and  L{1} = 1/s,   so   L{e^{at} · 1} = 1/(s - a).
Results 5 & 6: sin(at) and cos(at).

There are two ways to do these: a laborious but straightforward way, and a much easier way that needs you to remember complex numbers. First the laborious way (we'll do cos).

F(s) = L{cos(at)} = ∫_0^∞ e^{-st} cos(at) dt

By parts, with u = cos(at), du/dt = -a sin(at), dv/dt = e^{-st}, v = -e^{-st}/s:

F(s) = [ -cos(at) e^{-st}/s ]_{t=0}^{∞} - ∫_0^∞ a sin(at) (e^{-st}/s) dt = 1/s - (a/s) ∫_0^∞ sin(at) e^{-st} dt

By parts again, with u = sin(at), du/dt = a cos(at), dv/dt = e^{-st}, v = -e^{-st}/s:

F(s) = 1/s - (a/s) ( [ -sin(at) e^{-st}/s ]_{t=0}^{∞} + (a/s) ∫_0^∞ cos(at) e^{-st} dt )
     = 1/s - (a^2/s^2) F(s)

⇒   F(s) = L{cos(at)} = s/(s^2 + a^2).

In the same way, we can derive

L{sin(at)} = a/(s^2 + a^2).

However, if we are prepared to use complex numbers we can get both at once! Recall the key facts:

e^{iθ} = cos θ + i sin θ,    cos θ = Re(e^{iθ}),    sin θ = Im(e^{iθ}).

It follows (if we use our rules for adding functions and multiplying by a constant, and the observation that the Laplace transform of a real function is real) that

L{cos(at)} = Re L{e^{iat}}   and   L{sin(at)} = Im L{e^{iat}}

We already know that

L{e^{at}} = 1/(s - a),   so   L{e^{iat}} = 1/(s - ia).

Now we just need to manipulate this to get real and imaginary parts:

L{cos(at)} + i L{sin(at)} = L{e^{iat}} = 1/(s - ia) = (s + ia)/((s - ia)(s + ia)) = (s + ia)/(s^2 + a^2) = s/(s^2 + a^2) + i a/(s^2 + a^2)

⇒   L{cos(at)} = s/(s^2 + a^2),    L{sin(at)} = a/(s^2 + a^2)
Results 7 & 8: sinh(at) and cosh(at)

Here we just work straight from the definitions:

sinh(at) = (e^{at} - e^{-at})/2

L{sinh(at)} = L{ (e^{at} - e^{-at})/2 } = (1/2) L{e^{at}} - (1/2) L{e^{-at}} = (1/2)·1/(s - a) - (1/2)·1/(s + a)
            = (1/2) [ (s + a)/((s - a)(s + a)) - (s - a)/((s - a)(s + a)) ] = a/(s^2 - a^2)

In the same way,

L{cosh(at)} = s/(s^2 - a^2).
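These standard results can be checked with a computer algebra system. A sketch using sympy (my choice of tool; the printed forms may come out in an equivalent but differently arranged way):

    import sympy as sp

    t, s, a = sp.symbols('t s a', positive=True)

    for f in (1, t, t**3, sp.exp(a*t), sp.sin(a*t), sp.cos(a*t)):
        F = sp.laplace_transform(f, t, s, noconds=True)
        print(f, '->', sp.simplify(F))
    # Expected, matching the table above:
    # 1 -> 1/s, t -> 1/s**2, t**3 -> 6/s**4, exp(a*t) -> 1/(s - a),
    # sin(a*t) -> a/(a**2 + s**2), cos(a*t) -> s/(a**2 + s**2)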
5.0.12 More complicated functions
To create the Laplace transform of functions we haven't seen before, we can either work directly from the definition and integrate, or use the rules and results we've already seen. Here are a few examples.

1. f(t) = t^2. We can use our rule for t^n with n = 2:

   L{t^2} = 2!/s^3 = 2/s^3.
2. f(t) = t e^{-3t}. We start with our rule for multiplying by t:

   L{t e^{-3t}} = L{(t) e^{-3t}} = -(d/ds) L{e^{-3t}} = -(d/ds)( 1/(s + 3) ) = -( -1/(s + 3)^2 ) = 1/(s + 3)^2.
3. f(t) = e^{at} cos ωt, a < 0.

   Because we have a rule for multiplying by e^{at} we can do this in two steps, starting with the cos term:

   L{cos ωt} = s/(s^2 + ω^2) = F_1(s)

   Then

   L{e^{at} cos ωt} = F_1(s - a) = (s - a)/((s - a)^2 + ω^2).

   On the other hand, we could get the cos and sin versions of this result both at once by a single integration, if we're not afraid of complex numbers:

   L{e^{(a+iω)t}} = ∫_0^∞ e^{-st} e^{(a+iω)t} dt = ∫_0^∞ e^{(a-s+iω)t} dt
                  = [ e^{(a-s+iω)t}/(a - s + iω) ]_{t=0}^{∞}          provided a - s and ω are not both 0
                  = ( lim_{M→∞} e^{(a-s+iω)M} - 1 )/(a - s + iω)
                  = -1/(a - s + iω)                                    provided s > a
                  = 1/(s - a - iω) = (s - a + iω)/((s - a)^2 + ω^2)

   So, equating real and imaginary parts,

   L{e^{at} cos ωt} = (s - a)/((s - a)^2 + ω^2),      L{e^{at} sin ωt} = ω/((s - a)^2 + ω^2).
5.0.13 Discontinuous function example
To transform a discontinuous function, we just carry out the integration over the different ranges separately.

Example: the Heaviside function or step function.

H(t) = { 0,  t < 0              H(t - t_0) = { 0,  t < t_0
       { 1,  t ≥ 0                           { 1,  t ≥ t_0

[sketches: H(t) steps from 0 up to 1 at t = 0; H(t - t_0) steps from 0 up to 1 at t = t_0]

To find the Laplace transform:

L{H(t - t_0)} = ∫_0^∞ e^{-st} H(t - t_0) dt = ∫_0^{t_0} e^{-st} · 0 dt + ∫_{t_0}^∞ e^{-st} · 1 dt
              = ∫_{t_0}^∞ e^{-st} dt = lim_{M→∞} [ -e^{-st}/s ]_{t=t_0}^{M} = lim_{M→∞} ( e^{-s t_0}/s - e^{-sM}/s ) = e^{-t_0 s}/s

providing s > 0.
5.0.14 Inverting the Laplace transform
There is a systematic method of inverting Laplace transforms, but it involves integration in the complex plane and we won't cover it in this course.

Instead, we will use a combination of look-up tables and pattern spotting to invert the functions we come across. Essentially, if you are given F(s) and you want to find the function it came from, try various likely functions f(t); when you find one that works, you're there.
Examples

1.  F(s) = 4/(s^2 + 16) = 4/(s^2 + 4^2)

    This matches exactly one of our standard forms, so

    f(t) = L^{-1}{F(s)} = L^{-1}{ 4/(s^2 + 16) } = sin(4t)
2.  F(s) = 30/(s^2 + 9) = 10 · 3/(s^2 + 3^2)

    Now we have a multiple of a standard form:

    f(t) = L^{-1}{F(s)} = L^{-1}{ 30/(s^2 + 9) } = 10 sin(3t)

3.  F(s) = 2/s^2 = 2 · 1/s^2 = 2 · 1!/s^{1+1}

    f(t) = L^{-1}{F(s)} = L^{-1}{ 2/s^2 } = 2t^1 = 2t.

4.  F(s) = 1/(s - 3)^2

    We can see that this is related to 1/s^2, so let's look at that first:

    F_1(s) = 1/s^2   ⇒   f_1(t) = t

    F(s) = F_1(s - 3)   ⇒   f(t) = e^{3t} f_1(t) = t e^{3t}.
Recap

We are looking at ways to find the inverse Laplace transform of complicated functions, using just our rules 1-7 and results 1-8 (on handout 17).

Here is the final example:
5.  F(s) = (s + 5)/(s^2 + 8s + 24)

    In this case we need to rearrange and spot patterns:

    F(s) = (s + 5)/((s + 4)^2 - 16 + 24) = (s + 5)/((s + 4)^2 + 8) = ((s + 4) + 1)/((s + 4)^2 + (2√2)^2)

         = (s + 4)/((s + 4)^2 + (2√2)^2)  +  1/((s + 4)^2 + (2√2)^2)

         = (s + 4)/((s + 4)^2 + (2√2)^2)  +  (1/(2√2)) · (2√2)/((s + 4)^2 + (2√2)^2)
    Now let z = s + 4 and a = 2√2:

    F(s) = z/(z^2 + a^2) + (1/(2√2)) · a/(z^2 + a^2)

    Now these two look familiar: consider two known results:

    f_1(t) = cos(at)   ⇒   F_1(s) = s/(s^2 + a^2)
    f_2(t) = sin(at)   ⇒   F_2(s) = a/(s^2 + a^2)

    We can combine them:

    F(s) = F_1(z) + (1/(2√2)) F_2(z) = F_1(s + 4) + (1/(2√2)) F_2(s + 4)

    Now we can use the adding-functions rule:

    L^{-1}{F(s)} = L^{-1}{F_1(s + 4)} + (1/(2√2)) L^{-1}{F_2(s + 4)}
                 = e^{-4t} f_1(t) + e^{-4t} (1/(2√2)) f_2(t)
                 = e^{-4t} cos(2√2 t) + e^{-4t} (1/(2√2)) sin(2√2 t).
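Pattern-spotting of this kind can be double-checked with sympy's inverse transform (the tool is my choice; the exact output form may differ, and it may carry a Heaviside(t) factor, which is just 1 for t > 0):

    import sympy as sp

    s, t = sp.symbols('s t', positive=True)
    F = (s + 5)/(s**2 + 8*s + 24)

    f = sp.inverse_laplace_transform(F, s, t)
    print(sp.simplify(f))
    # equivalent to exp(-4*t)*(cos(2*sqrt(2)*t) + sqrt(2)*sin(2*sqrt(2)*t)/4),
    # the same as the hand calculation since 1/(2*sqrt(2)) = sqrt(2)/4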
5.0.15 Applications
Although you already know how to solve a linear ODE with constant coefficients, we'll look at a couple of examples to see how Laplace transform methods work with them; then we'll move on to something you maybe can't solve already.
Example

df/dt = -λ f(t),    f(0) = f_0

Taking Laplace transforms:

L{ df/dt } = L{ -λ f(t) } = -λ L{ f(t) }

s F(s) - f(0) = -λ F(s)

(s + λ) F(s) = f(0)   ⇒   F(s) = f_0/(s + λ)

So we had a simple algebraic system to solve for F(s); now we just need to invert to get the solution for f(t):

f(t) = L^{-1}{F(s)} = L^{-1}{ f_0/(s + λ) } = f_0 L^{-1}{ 1/(s + λ) } = f_0 e^{-λt}.

This particular example arises in, for example, radioactive decay, where the rate of loss of material is proportional to the amount of material left.
Example

d^2f/dt^2 - df/dt - 6f = 0,    f(0) = 2,    df/dt(0) = -1.

We take the Laplace transform of the whole equation:

L{ d^2f/dt^2 } - L{ df/dt } - 6 L{f} = 0
s^2 F(s) - s f(0) - f'(0) - {s F(s) - f(0)} - 6 F(s) = 0

Now we put in the parts we know, f(0) and f'(0):

s^2 F(s) - 2s + 1 - {s F(s) - 2} - 6 F(s) = 0

(s^2 - s - 6) F(s) = 2s - 3    ⇒    F(s) = (2s - 3)/(s^2 - s - 6)

In order to get this into a form we can invert, we use partial fractions (remember E001 last year!):

(2s - 3)/(s^2 - s - 6) = (2s - 3)/((s - 3)(s + 2)) = A/(s - 3) + B/(s + 2)

Multiplying out gives:

2s - 3 = A(s + 2) + B(s - 3)

so at s = 3 we see 3 = 5A, and at s = -2, -7 = -5B, so

F(s) = 3/(5(s - 3)) + 7/(5(s + 2))    and    f(t) = (3/5) e^{3t} + (7/5) e^{-2t}.
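The same initial-value problem can be checked with sympy's ODE solver (the tool is my choice, not part of the notes):

    import sympy as sp

    t = sp.symbols('t')
    f = sp.Function('f')

    ode = sp.Eq(f(t).diff(t, 2) - f(t).diff(t) - 6*f(t), 0)
    sol = sp.dsolve(ode, f(t), ics={f(0): 2, f(t).diff(t).subs(t, 0): -1})
    print(sol)    # f(t) = 3*exp(3*t)/5 + 7*exp(-2*t)/5, as found by hand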
5.0.16 Systems of linear ODEs
If we have more than one ODE at once, like ordinary simultaneous equations but with derivatives as well, then the standard methods you learnt last year can't be used; but Laplace transforms save the day.
Example
y'' + z + y = 0        z' + y' = 0
y_0 = 0,   y'_0 = 0,   z_0 = 1

We take the Laplace transform of both equations: let the transform of y(t) be Y(s) and the transform of z(t) be Z(s).

s^2 Y(s) - s y_0 - y'_0 + Z(s) + Y(s) = 0            s Z(s) - z_0 + s Y(s) - y_0 = 0
s^2 Y(s) - 0 - 0 + Z(s) + Y(s) = 0                   s Z(s) - 1 + s Y(s) - 0 = 0

This is now a pair of simultaneous equations for Y and Z:

(s^2 + 1) Y(s) + Z(s) = 0
      s Y(s) + s Z(s) = 1

[ (s^2 + 1)   1 ] [ Y(s) ]   [ 0 ]
[     s       s ] [ Z(s) ] = [ 1 ]

and we could solve it like all the matrix systems we did at the beginning of term, or just try to eliminate a variable the easiest way possible.

r_2 - s r_1:    [s - s(s^2 + 1)] Y(s) = 1    ⇒    Y(s) = -1/s^3.

Substituting back into r_2:

-1/s^2 + s Z(s) = 1    ⇒    Z(s) = 1/s + 1/s^3.

Inverting:

y(t) = L^{-1}{Y(s)} = L^{-1}{ -1/s^3 } = -(1/2) L^{-1}{ 2/s^3 } = -(1/2) t^2

z(t) = L^{-1}{Z(s)} = L^{-1}{ 1/s + 1/s^3 } = L^{-1}{ 1/s } + (1/2) L^{-1}{ 2/s^3 } = 1 + (1/2) t^2
Example
z'' + y' = cos t,      z_0 = -1,   y_0 = 1
y'' - z = sin t,       z'_0 = -1,  y'_0 = 0

Again, we transform the differential equations:

L{z''} + L{y'} = L{cos t}          L{y''} - L{z} = L{sin t}

s^2 Z(s) - s z_0 - z'_0 + s Y(s) - y_0 = s/(s^2 + 1)
s^2 Y(s) - s y_0 - y'_0 - Z(s) = 1/(s^2 + 1)

s^2 Z(s) + s + 1 + s Y(s) - 1 = s/(s^2 + 1)
s^2 Y(s) - s - Z(s) = 1/(s^2 + 1)

[ s    s^2 ] [ Y(s) ]   [ s/(s^2 + 1) - s ]   [ -s^3/(s^2 + 1)          ]
[ s^2  -1  ] [ Z(s) ] = [ 1/(s^2 + 1) + s ] = [ (s^3 + s + 1)/(s^2 + 1) ]

s r_1 - r_2:

(s^3 + 1) Z(s) = s ( -s^3/(s^2 + 1) ) - (s^3 + s + 1)/(s^2 + 1) = (-s^4 - s^3 - s - 1)/(s^2 + 1) = -(s + 1)(s^3 + 1)/(s^2 + 1)

so

Z(s) = -(s + 1)/(s^2 + 1) = -s/(s^2 + 1) - 1/(s^2 + 1)

Substituting back into r_1:

s Y(s) - s^2 (s + 1)/(s^2 + 1) = -s^3/(s^2 + 1)    ⇒    Y(s) = s/(s^2 + 1)

Inverting back, these are easily related to standard forms:

y(t) = L^{-1}{Y(s)} = L^{-1}{ s/(s^2 + 1) } = cos t

z(t) = L^{-1}{Z(s)} = L^{-1}{ -s/(s^2 + 1) - 1/(s^2 + 1) } = -cos t - sin t.
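A quick way to check an answer like this is to substitute it back into the equations, for example with sympy (my choice of tool):

    import sympy as sp

    t = sp.symbols('t')
    y = sp.cos(t)
    z = -sp.cos(t) - sp.sin(t)

    # The two ODEs and the initial conditions from the example above:
    print(sp.simplify(z.diff(t, 2) + y.diff(t) - sp.cos(t)))   # 0
    print(sp.simplify(y.diff(t, 2) - z - sp.sin(t)))           # 0
    print(y.subs(t, 0), y.diff(t).subs(t, 0))                  # 1, 0
    print(z.subs(t, 0), z.diff(t).subs(t, 0))                  # -1, -1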
Chapter 6
Series
Sometimes we have to deal with functions that are unpleasant in some way: maybe the functional form is well-behaved but it's too complicated and messy algebraically to carry right through a calculation; or maybe the function itself has corners or jumps.

Some functions we meet in real situations are not continuous; others are not smooth and so not differentiable.

In the next two weeks we'll see two ways to approximate a function f(x) with a series F(x). The first, Taylor series, is good for smooth functions; the second, Fourier series, works for functions with jumps in, as long as they repeat over some range of x.
6.1 Taylor series
The Taylor series provides the simplest form of function, a polynomial, as an approximation to any smooth function.

The simple version is called a Maclaurin series, and is a power series for f(x), giving a good approximation for small values of x:

F(x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + ... = Σ_{n=0}^{∞} a_n x^n

If we want to find such a series for f(x), how do we find the terms? We assume that F(x) = f(x) close to x = 0, and look at the coefficients a_n in sequence.
Finding a_0 is easy: just put x = 0, giving f(0) = a_0.

Now differentiate both sides of the equation:

f'(x) = a_1 + 2a_2 x + 3a_3 x^2 + 4a_4 x^3 + ...

and a_1 becomes easy: f'(0) = a_1.

We can repeat this process:

f''(x) = 2a_2 + 3·2 a_3 x + 4·3 a_4 x^2 + 5·4 a_5 x^3 + ...

so f''(0) = 2a_2, and again

f^(3)(x) = 3·2 a_3 + 4·3·2 a_4 x + 5·4·3 a_5 x^2 + ...

which gives f^(3)(0) = 3! a_3 ...

... and after n times we get the general term:

f^(n)(0) = n! a_n
The infinite sum we end up with if we keep going forever is

F(x) = Σ_{n=0}^{∞} f^(n)(0) x^n / n!

where we're using the notation f^(n)(x) to mean the n-th derivative of f(x).
Example: Maclaurin series for 1/(1 - x)

Let's try an example. We'll work out four terms, so we'll need three derivatives. Let's do those first.

f(x) = 1/(1 - x),    f'(x) = 1/(1 - x)^2,    f''(x) = 2/(1 - x)^3,    f'''(x) = 6/(1 - x)^4

Then we need the values at x = 0:

f(0) = 1,    f'(0) = 1,    f''(0) = 2,    f'''(0) = 6

Finally we can put them together:

F(x) = 1 + (1/1!) x + (2/2!) x^2 + (6/3!) x^3 + ... = 1 + x + x^2 + x^3 + ...
Convergence

In fact, in the example above, every derivative is f^(n)(0) = n!, so every coefficient is a_n = 1 and the whole power series is

F(x) = Σ_{n=0}^{∞} x^n.

You know from MATH6501 that this series doesn't converge for all values of x. We have to be careful with these infinite series!

It's obvious that if we put x = 1 the sum becomes

1 + 1 + 1 + 1 + 1 + 1 + ...

which grows without bound as we keep adding more terms. We might have guessed the series would have a problem here, because the original function f(x) = 1/(1 - x) is singular at x = 1.

It's less obvious that there will be a problem at x = -1. Looking at the original function, we expect

f(-1) = 1/(1 - (-1)) = 1/2

but if we look at the power series, we have:

1 - 1 + 1 - 1 + 1 - 1 + ...

which is a sum alternating between 1 and 0 as we keep adding new terms.
Let's look at the form of the partial sum functions we get from stopping at the n-th term of the series.

[graph: the original function 1/(1 - x) on -1 ≤ x ≤ 1 together with the partial sums using 1, 2, 7 and 8 terms]
As you can see, as long as we aren't too close to the disastrous values of 1 and -1, the series gives a good approximation to the original function. By the time we get to 7 or 8 terms of the series, we can't see the difference between the approximation and the real function for x between -1/2 and 1/2.

The safe distance we can go from x = 0 before the series fails is called the radius of convergence. In this case the radius of convergence is 1: for any value of x with |x| < 1, the infinite series converges to the value of f(x).
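Computer algebra systems will produce these truncated series directly. A sketch with sympy (my choice of tool), for this function and for the ln x example that appears later in this chapter:

    import sympy as sp

    x = sp.symbols('x')

    # Maclaurin (Taylor about 0) expansion of 1/(1 - x), first four terms
    print(sp.series(1/(1 - x), x, 0, 4))    # 1 + x + x**2 + x**3 + O(x**4)

    # Taylor expansion of ln(x) about x = 2
    print(sp.series(sp.log(x), x, 2, 4))
    # log(2) + (x - 2)/2 - (x - 2)**2/8 + (x - 2)**3/24 + O((x - 2)**4)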
Example: sin x
Here the derivatives repeat really quickly:

f(x) = sin x,    f'(x) = cos x,    f''(x) = -sin x,    f'''(x) = -cos x

and when we put in x = 0, half of them disappear:

f(0) = 0,    f'(0) = 1,    f''(0) = 0,    f'''(0) = -1

Now only the terms with an odd number of derivatives count, so we look at terms n = 2m + 1:

F(x) = x - x^3/3! + x^5/5! - ... = Σ_{m=0}^{∞} f^(2m+1)(0) x^{2m+1}/(2m+1)! = Σ_{m=0}^{∞} (-1)^m x^{2m+1}/(2m+1)!.

In fact, this series converges for all values of x: it has an infinite radius of convergence. Let's see the first few terms in action.
[graph: sin(x) on -4 ≤ x ≤ 4 together with the Maclaurin partial sums using 1, 2, 3 and 4 terms]
The Taylor Series

Now suppose we want a series which is at its best somewhere other than x = 0: say, at x = a. There is a very natural extension of the Maclaurin series, called the Taylor series:

F(x) = Σ_{n=0}^{∞} f^(n)(a) (x - a)^n / n!
Example: ln x near x = 2

Just as for the Maclaurin series, we start by computing some derivatives:

f(x) = ln x,    f'(x) = 1/x,    f''(x) = -1/x^2,    f'''(x) = 2/x^3

Now we evaluate these at x = 2:

f(2) = ln 2,    f'(2) = 1/2,    f''(2) = -1/4,    f'''(2) = 1/4
which gives the beginning of the Taylor series as

F(x) = ln 2 + (1/2)(x - 2) - (1/8)(x - 2)^2 + (1/24)(x - 2)^3 + ...
Let's look at this approximation:

[graph: ln x on 0.5 ≤ x ≤ 4 together with the Taylor partial sums about x = 2 using 1, 2, 3 and 4 terms]
In fact for n ≥ 1 we can spot the form of the derivative:

f^(n)(x) = (-1)^{n-1} (n - 1)! / x^n

which gives the full Taylor series as

F(x) = ln 2 + Σ_{n=1}^{∞} [ (-1)^{n-1} (n - 1)!/2^n ] (x - 2)^n/n! = ln 2 + Σ_{n=1}^{∞} (-1)^{n-1} (x - 2)^n/(n 2^n)

This Taylor series has radius of convergence 2: so it's OK as long as 0 < x < 4.
Example: polynomial
Finally, we'll look at a polynomial example, f(x) = x^3 - 3x. For a cubic function like this one, if we take four terms of the Taylor series about any point, we get the original function back. We'll look at the Taylor series about three points: x = -1, 0 and 1. First, as ever, we need the derivatives:

f(x) = x^3 - 3x,    f'(x) = 3x^2 - 3,    f''(x) = 6x,    f'''(x) = 6

Now we'll look at each point in turn. At x = -1:

f(-1) = 2,    f'(-1) = 0,    f''(-1) = -6,    f'''(-1) = 6

so

F(x) = 2 - 6 (x + 1)^2/2! + 6 (x + 1)^3/3! = 2 - 3(x + 1)^2 + (x + 1)^3.

At x = 0:

f(0) = 0,    f'(0) = -3,    f''(0) = 0,    f'''(0) = 6

so

F(x) = -3x + x^3.

Finally, at x = 1:

f(1) = -2,    f'(1) = 0,    f''(1) = 6,    f'''(1) = 6

so

F(x) = -2 + 6 (x - 1)^2/2! + 6 (x - 1)^3/3! = -2 + 3(x - 1)^2 + (x - 1)^3.

All three versions of F(x) are the same function; but the shorter versions are three different approximations:
[graph: x^3 - 3x on -2 ≤ x ≤ 2 together with the truncated Taylor approximations about x = -1, 0 and 1]
Again, the radius of convergence here is infinite.
Chapter 7
Revision
The exam will be in the following format. For the Biochemical and Chemical Engineers who took the entire course there will be a 2 hour exam, consisting of six questions. The best four answers will be counted. For those of you who are from Civil Engineering, there will be a one hour exam consisting of three questions, where the best two answers will count.

The following topics will be in the two hour exam:

- Complex Numbers
- Differential Equations
- Matrix Algebra and Solving Linear Systems
- Eigenvalues and Eigenvectors
- Laplace Transforms
- Taylor Series

In the one hour Civil Engineering exam the following topics will be covered:

- Eigenvalues and Eigenvectors
- Laplace Transforms
- Taylor Series

The problem with looking at past papers for this course is that the format has changed very rapidly in recent years. I will write a practice exam in time for revision in the Easter holidays.
7.1 Complex Numbers
Questions on complex numbers will come in one of a few main forms:

1. Simplify an expression into the form z = a + ib, e.g. 1/(1 + i).
2. Find some n-th root of a number, e.g. z^4 = 1.
3. Find the modulus and/or argument of a complex number.
4. Use De Moivre's theorem in any of the above.
7.2 Differential Equations

Second order differential equations with constant coefficients. Remember there is a very good order for solving these differential equations.

1. First check if it is homogeneous or inhomogeneous.
2. If it is inhomogeneous, set f(x) = 0 and then proceed as if homogeneous.
3. Write down the auxiliary equation and solve for λ.
4. Once you have this you know the complementary function.
5. If it was originally inhomogeneous, try particular solutions to match the form of f(x). Don't forget that if it is an exponential or a trigonometric function you might have repeated roots, and you may have to try adding an x!
6. Finally, once you have found the particular solution you can add this to the complementary function to find the general solution.
7. You should have only two constants of integration, which can be found if I provide boundary/initial conditions.
7.3 Summary of Linear Systems
At first we examined the properties of matrices. Remember there are many types of matrices, some of which are more important than others. Finally, we looked at defining the determinant. This then leads into linear systems, which are very similar to differential equations.

1. We start with Ax = b.
2. If b ≠ 0 then we have an inhomogeneous system. If b = 0 then it is a homogeneous system.
3. We need to solve this for x.
4. If |A| ≠ 0 then a unique solution exists; if |A| = 0 then we either have an infinite number of solutions or no solution exists at all.
5. In the case of b = 0 and |A| ≠ 0, the unique solution is trivially x = 0.
6. The most interesting cases are either
   (a) b = 0 and |A| = 0, or
   (b) b ≠ 0 and |A| = 0.
7. In order to find the solution we must use Gaussian elimination, or in the case of |A| ≠ 0 we can find A^{-1} by using either the cofactor method or Gauss-Jordan elimination.
7.4 Eigenvectors and Eigenvalues
Studying eigenvectors and eigenvalues is very important in engineering. This is where we solve the equation

A v = λ v.

We can find the eigenvalues of this system by solving the polynomial equation

|A - λI| = 0.

If A is a square n × n matrix with n distinct eigenvalues then we can find n linearly independent eigenvectors. There are some caveats and special cases:

- If A is symmetric and the eigenvalues are distinct then the eigenvectors will be orthogonal.
- If A is symmetric and some of the eigenvalues are repeated then the eigenvectors will be linearly independent but not necessarily orthogonal.
- If A isn't symmetric but does have repeated eigenvalues then you may be unable to find n linearly independent eigenvectors.

In order to find the eigenvectors, you first solve the polynomial in λ; then for each eigenvalue λ_i you place it in the following equation and solve it:

(A - λ_i I) v_i = 0

We solve this by using Gaussian elimination and then back substitution. Since we are essentially solving a homogeneous equation

A* v = 0,   where A* = A - λI,

we will have free parameters and therefore an infinite number of solutions. They are all related by a rescaling, and therefore it is only the direction, in some sense, that is important about eigenvectors. If we have repeated eigenvalues then there are two or more free parameters and you have to be careful to choose linearly independent vectors, or orthogonal ones (if possible).
7.5 Laplace Transforms
This is probably the part of the course which is most alien to you all. Do not fret: it is straightforward. The basic premise is that you transform functions f(t), which depend on t, into functions F(s) which depend on s. The transform takes the form

L{f(t)} = ∫_0^∞ e^{-st} f(t) dt = F(s)

This seems very arbitrary at first. However, it becomes very useful once you know the major results and rules. Study these and you cannot go wrong. Once you know how to go forward, you can make inverse transforms to recover the original functions f(t):

f(t) = L^{-1}{F(s)}

The full potential of the Laplace transform is realised when you use it to solve differential equations. In particular, we used them to solve systems of linear ordinary differential equations.
7.6 Taylor Series
We didn't spend very long on this, but it is very basic. Sometimes it is more beneficial to approximate a function than to use the full form. For specific functions we are able to do this using the Taylor series. If we would like to approximate something close to 0 then we use the Maclaurin series:

F(x) = Σ_{n=0}^{∞} f^(n)(0) x^n / n!

However, if we would like to know the approximation near a particular value a, then use

F(x) = Σ_{n=0}^{∞} f^(n)(a) (x - a)^n / n!

Follow these steps:

1. Start with a function, e.g. f(x) = sin(x).
2. Find the first few derivatives; typically four are enough, e.g. f'(x) = cos(x).
3. Plug in the particular values you are interested in, e.g. f(0) = 0, f'(0) = 1.
4. Then place them into the summation above. This defines your approximate function.

Although you do not need to find the radius of convergence, you must be aware that there are limits on the values of x for which the series is valid and values for which it is not. This is to do with the infinite sum not always converging to a value.