
Mathematics I

(Part of 4CCP 1350)


Department of Physics, King's College London
Dr J. Alexandre, 2009/2010
Contents

1 Introduction
  1.1 Numbers
  1.2 Group
  1.3 Combinatorics

2 Functions of a real variable
  2.1 Continuity
  2.2 Differentiation
  2.3 Polynomial functions
  2.4 Rational functions
  2.5 Trigonometric functions

3 Integration
  3.1 Interpretation of the integral
  3.2 Integration by parts
  3.3 Change of variable
  3.4 Improper integrals

4 Logarithm and exponential functions
  4.1 Logarithm
  4.2 Exponential
  4.3 Hyperbolic functions

5 Taylor expansions and series
  5.1 Approximation of a function around a value of the argument
  5.2 Radius of convergence and series
  5.3 Examples
  5.4 Expansion for a composition of functions

6 Vector calculus
  6.1 Vectors
  6.2 Rotations in two dimensions
  6.3 Scalar product
  6.4 Cross product
  6.5 Scalar triple product
  6.6 Polar coordinates

7 Complex numbers
  7.1 Introduction
  7.2 Complex exponential
  7.3 Trigonometric formula
  7.4 Roots of complex numbers
  7.5 Relation to hyperbolic functions

8 Linear differential equations
  8.1 First order, homogeneous
  8.2 Variation of parameters method
  8.3 Second order, homogeneous
  8.4 Second order, non-homogeneous
  8.5 General properties
  8.6 Separation of variables method

9 Linear algebra
  9.1 Linear function
  9.2 Matrices
  9.3 Determinants
  9.4 Composition of linear functions
  9.5 Eigenvectors and eigenvalues

10 Functions of several variables
  10.1 Partial differentiation
  10.2 Differential of a function of several variables
  10.3 Implicit functions
  10.4 Double integration
  10.5 Triple integration
1 Introduction
1.1 Numbers
Natural numbers N: These are all positive integers, including 0.

Integers Z: These are the elements of N, plus the negative integers.

Rational numbers Q: These are all the numbers which can be written p/q, where p and q ≠ 0 are elements of Z. These numbers have either a finite number of decimals or a periodic infinite number of decimals, for example

1.3795 3795 3795 3795 ...

Q contains Z, which is obvious if one takes q = 1.

Real numbers R: These are the elements of Q plus all the numbers with an infinite, non-repeating decimal expansion. Examples of real numbers which are not in Q are

\sqrt{2}, \pi, e, ...

Density property: Between any two real numbers a rational number can always be found, and vice versa.
1.2 Group
A group G is a set of elements {a}, together with an operation , such that:
if a, b are two elements of G, then a b is element of G;
G contains a unit element u such that, for any element a of G, a u = a;
for any element a of G, there is an element a such that a a = u.
Examples of groups: {Z, +}, or {Q

, }, where Q

is Q without 0.
1.3 Combinatorics
Permutations: The number of ways to choose an order for n elements is the factorial

n! = n \times (n-1) \times (n-2) \times \cdots \times 3 \times 2 \times 1.

Indeed, there are n possibilities for the first element, and for each of these possibilities there are n−1 for the second element, etc.

Combinations: The number of ways to choose k elements out of n, independently of the order, is given by the binomial coefficients

\binom{n}{k} = \frac{n!}{k!(n-k)!}.

Indeed, the number of possible ways to order n points is n!, which has to be divided by the number of ways to order the k chosen elements, which is k!, and also by the number of ways to order the remaining n−k elements, which is (n−k)!.

Some simple properties are:

\binom{n}{n-k} = \binom{n}{k}, \qquad \binom{n}{1} = n, \qquad \binom{n}{0} = 1.
Binomial formula. We show here that

(a + b)^n = \sum_{k=0}^{n} \binom{n}{k} a^{n-k} b^k,    (1)

where n is an integer and a, b are real numbers, using a proof by induction.

First step: check that eq.(1) is valid for a given value of n, for example n = 2:

(a + b)^2 = a^2 + 2ab + b^2 = \binom{2}{0} a^2 b^0 + \binom{2}{1} a^1 b^1 + \binom{2}{2} a^0 b^2 = \sum_{k=0}^{2} \binom{2}{k} a^{2-k} b^k.

Second step: suppose that eq.(1) is valid for n, and show that it is then valid for n+1:

(a + b)^{n+1} = (a + b) \sum_{k=0}^{n} \binom{n}{k} a^{n-k} b^k
= \sum_{k=0}^{n} \binom{n}{k} a^{n-k+1} b^k + \sum_{k=0}^{n} \binom{n}{k} a^{n-k} b^{k+1}
= \sum_{k=0}^{n} \binom{n}{k} a^{n+1-k} b^k + \sum_{k=1}^{n+1} \binom{n}{k-1} a^{n+1-k} b^k
= a^{n+1} + b^{n+1} + \sum_{k=1}^{n} \left[ \binom{n}{k} + \binom{n}{k-1} \right] a^{n+1-k} b^k
= a^{n+1} + b^{n+1} + \sum_{k=1}^{n} \binom{n+1}{k} a^{n+1-k} b^k
= \sum_{k=0}^{n+1} \binom{n+1}{k} a^{n+1-k} b^k.
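As a quick numerical illustration (an editorial addition, not part of the original notes), the short Python sketch below compares the right-hand side of eq.(1), built from binomial coefficients, with a direct evaluation of (a+b)^n; math.comb(n, k) returns n!/(k!(n-k)!).

    import math

    def binomial_sum(a, b, n):
        # right-hand side of eq.(1): sum over k of C(n,k) a^(n-k) b^k
        return sum(math.comb(n, k) * a**(n - k) * b**k for k in range(n + 1))

    a, b, n = 1.7, -0.4, 6
    print(binomial_sum(a, b, n))   # 4.826809... = (1.3)^6
    print((a + b)**n)              # same value, up to rounding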
2 Functions of a real variable
A function of a real variable, f, is an operation which associates to a real variable x the quantity f(x).
2.1 Continuity
Intuitively, a function f of the variable x is continuous if a small change in x leads to a small change in f(x). More rigorously, f is continuous at x_0 if for any ε > 0 one can always find a δ > 0 such that

|x - x_0| < \delta \;\Longrightarrow\; |f(x) - f(x_0)| < \varepsilon.
2.2 Differentiation

The derivative of a function f at the point x is the slope of the tangent to the curve y = f(x) at x. In order to calculate it, let us consider the points M and M' with coordinates (x, f(x)) and (x+Δx, f(x+Δx)) respectively, where Δx > 0 is an increment (see fig.(1)). The slope of the straight line (MM') is

\text{slope} = \frac{f(x+\Delta x) - f(x)}{(x+\Delta x) - x} = \frac{f(x+\Delta x) - f(x)}{\Delta x}.

The slope of the tangent to the curve at M is obtained when Δx → 0. The derivative of f at the point x is then

f'(x) = \lim_{\Delta x \to 0} \frac{f(x+\Delta x) - f(x)}{\Delta x} = \frac{df}{dx},    (2)

where dx denotes the infinitesimal increment in x and df the corresponding infinitesimal increment in f(x).

Example: Let us calculate the derivative of f(x) = a x^n, where a is a constant and n is an integer. By definition,

f'(x) = \lim_{\Delta x \to 0} \frac{a(x+\Delta x)^n - a x^n}{\Delta x}
= \lim_{\Delta x \to 0} \frac{a\left[ x^n + n x^{n-1}\Delta x + n(n-1)x^{n-2}(\Delta x)^2/2 + \cdots \right] - a x^n}{\Delta x}
= a \lim_{\Delta x \to 0} \left[ n x^{n-1} + n(n-1)x^{n-2}\Delta x + \cdots \right]
= a\,n\,x^{n-1},

where the dots represent higher orders in Δx.
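To make the limit in eq.(2) concrete, here is a small numerical sketch (added here, not in the original notes): it evaluates the difference quotient of f(x) = a x^n for decreasing increments Δx and compares it with the analytic result a n x^(n-1).

    def f(x, a=2.0, n=3):
        return a * x**n

    x, a, n = 1.5, 2.0, 3
    exact = a * n * x**(n - 1)            # derivative of a x^n at x
    for dx in [1e-1, 1e-3, 1e-5]:
        slope = (f(x + dx) - f(x)) / dx   # difference quotient
        print(dx, slope, exact)           # the slope approaches 13.5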
Figure 1: The derivative is the slope of the tangent.
Eq.(2) defines the right derivative, for Δx > 0. One can also define the left derivative by

\lim_{\Delta x \to 0} \frac{f(x) - f(x-\Delta x)}{\Delta x},

where Δx > 0. A function f is said to be differentiable at x if these two definitions lead to the same result. If these two derivatives are different, the function is singular at the point x and its derivative is not defined. An example of such a singularity is the function f(x) = |x| at x = 0. Indeed, for x = 0, the left derivative is −1 and the right derivative is +1.
Note that a function can be continuous but not differentiable for a given value of x, as the previous example shows.
Extrema of a function: Since the derivative f'(a) of a function at the point a corresponds to the slope of the tangent to the curve of equation y = f(x), we have the following classification:

- if f'(a) > 0, then f is increasing in the vicinity of a;
- if f'(a) < 0, then f is decreasing in the vicinity of a;
- if f'(a) = 0 and f'(x) changes sign at x = a, then f(a) is an extremum of f;
- if f'(a) = 0 and f'(x) does not change sign at x = a, then the point of coordinates (a, f(a)) is called an inflexion point. At such a point, the second derivative changes sign.
Derivative of a product: If f, g are two functions of x, the derivative (fg)' is given by

(fg)'(x) = \lim_{\Delta x \to 0} \frac{f(x+\Delta x)g(x+\Delta x) - f(x)g(x)}{\Delta x}
= \lim_{\Delta x \to 0} \frac{f(x+\Delta x)g(x) - f(x)g(x) + f(x+\Delta x)g(x+\Delta x) - f(x+\Delta x)g(x)}{\Delta x}
= \lim_{\Delta x \to 0} \left[ g(x)\,\frac{f(x+\Delta x) - f(x)}{\Delta x} + f(x+\Delta x)\,\frac{g(x+\Delta x) - g(x)}{\Delta x} \right]
= f'(x)g(x) + f(x)g'(x).    (3)
Chain rule: Consider two functions f and g, and the function F defined as F(x) = f(g(x)). The derivative of F is

F'(x) = \lim_{\Delta x \to 0} \frac{F(x+\Delta x) - F(x)}{\Delta x}
= \lim_{\Delta x \to 0} \frac{f(g(x+\Delta x)) - f(g(x))}{\Delta x}
= \lim_{\Delta x \to 0} \frac{f(g(x+\Delta x)) - f(g(x))}{g(x+\Delta x) - g(x)} \times \frac{g(x+\Delta x) - g(x)}{\Delta x}
= \lim_{\Delta x \to 0} \frac{f(g+\Delta g) - f(g)}{\Delta g} \times \frac{g(x+\Delta x) - g(x)}{\Delta x}
= f'(g(x))\, g'(x),

where Δg = g(x+Δx) − g(x) is the increment in g(x) corresponding to x → x+Δx.
Derivative of a ratio: The derivative of 1/x is −1/x², such that the derivative of the function 1/f is

\left( \frac{1}{f(x)} \right)' = -\frac{1}{f^2(x)}\, f'(x) = -\frac{f'(x)}{f^2(x)}.

As a consequence, the derivative of the ratio of the functions f and g is

\left( \frac{f(x)}{g(x)} \right)' = f'(x)\,\frac{1}{g(x)} + f(x)\left( -\frac{g'(x)}{g^2(x)} \right) = \frac{f'(x)g(x) - f(x)g'(x)}{g^2(x)}.
Derivative of an inverse function: If y = f(x), the inverse function f⁻¹, when it exists, is defined by x = f⁻¹(y). Do not confuse the inverse function f⁻¹ with 1/f! In order to define the inverse of a function, one needs a one-to-one mapping between x and y. This is usually the case on a given interval for x at least.
The derivative of the inverse is then

\left( f^{-1} \right)'(y) = \lim_{\Delta y \to 0} \frac{f^{-1}(y+\Delta y) - f^{-1}(y)}{\Delta y}
= \lim_{\Delta x \to 0} \frac{(x+\Delta x) - x}{f(x+\Delta x) - f(x)}
= \frac{1}{f'(x)},

where Δx is defined such that y + Δy = f(x + Δx).
2.3 Polynomial functions

A polynomial function of x is of the form

P(x) = \sum_{n=0}^{N} a_n x^n,

where the a_n are the coefficients and N is the degree of the polynomial.
If N is odd, the polynomial has at least one zero. Indeed, we have then (assuming a_N > 0; otherwise the limits are exchanged)

\lim_{x \to -\infty} P(x) = -\infty \quad\text{and}\quad \lim_{x \to +\infty} P(x) = +\infty,

such that the line representing y = P(x) cuts the axis y = 0 at least once, since the polynomial is a continuous function.
A polynomial of degree 2 can have two roots, but might not have any (real) root:

- P(x) = a(x − z_1)(x − z_2) has two roots z_1, z_2. The root is double if z_1 = z_2;
- Q(x) = a x^2 + b x + c has no real root if b^2 − 4ac < 0.

A polynomial of degree 3 has either one real root or three real roots, and can be written, for all x,

- P(x) = a(x − z_1)(x − z_2)(x − z_3) if P has three roots;
- Q(x) = (x − z)(a x^2 + b x + c), with b^2 − 4ac < 0, if Q has one root z.

In general, any polynomial function can be written

P(x) = (x - z_1)\cdots(x - z_n)\,(a_1 x^2 + b_1 x + c_1)\cdots(a_m x^2 + b_m x + c_m),

where z_i, i = 1, ..., n are the roots of the polynomial, b_j^2 − 4 a_j c_j < 0 for all j = 1, ..., m, and n + 2m is the degree of the polynomial.
2.4 Rational functions

A rational function is the ratio of two polynomial functions P and Q, and has the form, for each x,

R(x) = \frac{P(x)}{Q(x)}.
Figure 2: A polynomial function of degree 5, with three roots z_1, z_2, z_3.

Figure 3: A polynomial function of degree 6, with four roots z_1, z_2, z_3, z_4.
Figure 4: The coordinates of M on the trigonometric circle are (cos x, sin x).
If the degree of P is less than the degree of Q, it is always possible to reduce R to a sum of irreducible rational functions of the form a/(x − z) or (ax + b)/(x² + cx + d).

Example: The fraction (x + 2)/(x² + 5x + 4) can be written

\frac{x+2}{x^2+5x+4} = \frac{x+2}{(x+1)(x+4)} = \frac{a}{x+1} + \frac{b}{x+4},

where a(x + 4) + b(x + 1) = x + 2, such that a + b = 1 and 4a + b = 2, which gives a = 1/3 and b = 2/3. Finally,

\frac{x+2}{x^2+5x+4} = \frac{1/3}{x+1} + \frac{2/3}{x+4}.
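A simple numerical cross-check of this decomposition (my addition, not in the original notes): both sides should agree at any x where the fraction is defined.

    def original(x):
        return (x + 2) / (x**2 + 5*x + 4)

    def decomposed(x):
        return (1/3) / (x + 1) + (2/3) / (x + 4)

    for x in [0.0, 1.5, -2.5, 10.0]:
        print(x, original(x), decomposed(x))   # the two values coincide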
2.5 Trigonometric functions

For a given angle 0 ≤ x ≤ 2π, sin x and cos x are defined as the coordinates of the point M at the intersection of the straight line (OM) with the trigonometric circle (see fig.(4)).

Property: Using Pythagoras' theorem, we have sin²x + cos²x = 1.

Trigonometric formulae: It will be shown in the chapter on vector calculus (subsection 6.2) that the sine and cosine of the sum of two angles are given by

\sin(a + b) = \sin a \cos b + \sin b \cos a
\cos(a + b) = \cos a \cos b - \sin a \sin b.    (4)

Important limit: We will now show, geometrically, that

\lim_{x \to 0} \frac{\sin x}{x} = 1,

and this limit will be very useful in deriving fundamental properties of the trigonometric functions.
Proof: From the definition of sine and cosine, one can see on fig.(5) that

\sin x \le x \le \sin x + 1 - \cos x.    (5)

But one can also see that

0 \le \sin^2 x + (1 - \cos x)^2 \le x^2,

such that

0 \le 1 - \cos x \le \frac{x^2}{2}.

Using this in the inequalities (5), we obtain

\frac{\sin x}{x} \le 1 \le \frac{\sin x}{x} + \frac{x}{2},

and the only possibility for this to be valid in the limit x → 0 is to have sin x / x → 1.

Derivative of trigonometric functions: The first important consequence of the previous limit is the calculation of the derivative of the sine. From eq.(4) we have

(\sin x)' = \lim_{\Delta x \to 0} \frac{\sin(x+\Delta x) - \sin x}{\Delta x}
= \lim_{\Delta x \to 0} \frac{\sin x \cos(\Delta x) + \sin(\Delta x)\cos x - \sin x}{\Delta x}
= \lim_{\Delta x \to 0} \left[ \cos x\,\frac{\sin(\Delta x)}{\Delta x} + \sin x\,\frac{\cos(\Delta x) - 1}{\Delta x} \right].

We have seen that 1 − cos(Δx) is of order (Δx)², and therefore the second term vanishes in the limit Δx → 0, whereas the first term leads to

(\sin x)' = \cos x.

In the same way, one can easily show that (cos x)' = −sin x. As a consequence, we also have (tan x)' = 1 + tan²x.
Figure 5: On the figure: a = sin x, b = 1 − cos x and c² = a² + b².
3 Integration

Integration corresponds to the inverse operation of differentiation: F is a primitive of f if

F(x) = \int f(x)\,dx \;\Longleftrightarrow\; F'(x) = f(x).

We have

\int_a^b f(x)\,dx = F(b) - F(a),

and therefore

\int_0^x f(u)\,du = F(x) - F(0).

Make sure never to use the same name for the variable of integration and the limit of the integral.
From the linearity of differentiation, integrals have the following properties:

\int_b^a f(x)\,dx = -\int_a^b f(x)\,dx;

\int_a^c f(x)\,dx = \int_a^b f(x)\,dx + \int_b^c f(x)\,dx;

\int_a^b \left[ c_1 f_1(x) + c_2 f_2(x) \right] dx = c_1 \int_a^b f_1(x)\,dx + c_2 \int_a^b f_2(x)\,dx, where c_1, c_2 are constants.
Figure 6: Riemann definition of the integral.
3.1 Interpretation of the integral

As explained on fig.(6), the Riemann definition of \int_a^b f(x)dx, b > a, corresponds to the surface area between the curve y = f(x) and the straight line y = 0, from x = a to x = b. Indeed, this area can be seen as the sum of the infinitesimal areas dx × f(x), and we have

\int_a^b f(x)\,dx = \lim_{n \to \infty} \left[ \frac{b-a}{n} \sum_{k=0}^{n-1} f(x_k) \right],

where

x_k = a + k\,\frac{b-a}{n}, \qquad k = 0, ..., n-1.

Equivalence with the definition based on the derivative: We show here that the Riemann definition of the integral, as a surface area, is equivalent to the definition given previously. From the Riemann interpretation, the quantity

F(x) = \int_a^x du\, f(u)

corresponds to the surface area between the lines y = f(x) and y = 0, from a to x. The integral from a to x + Δx is then

F(x+\Delta x) = \int_a^{x+\Delta x} du\, f(u),

and the derivative of the function F is

F'(x) = \lim_{\Delta x \to 0} \frac{1}{\Delta x} \left[ \int_a^{x+\Delta x} du\, f(u) - \int_a^x du\, f(u) \right] = \lim_{\Delta x \to 0} \frac{1}{\Delta x} \int_x^{x+\Delta x} du\, f(u).

The latter expression corresponds to the surface area between the lines y = f(x) and y = 0 from x to x + Δx, which is equal to Δx × f(x) plus higher powers of Δx. As a consequence, we obtain the expected result:

F'(x) = \lim_{\Delta x \to 0} \frac{1}{\Delta x} \left[ \Delta x\, f(x) + (\Delta x)^2\cdots \right] = f(x).

As a consequence of this interpretation of the integral, if two functions f, g satisfy f(x) ≤ g(x) for a ≤ x ≤ b, then

\int_a^b f(x)\,dx \le \int_a^b g(x)\,dx.
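The Riemann sum above can be evaluated directly on a computer. The sketch below (an addition to the notes) approximates \int_0^1 x^2 dx = 1/3 with the finite sum (b−a)/n Σ f(x_k) and shows it converging as n grows.

    def riemann(f, a, b, n):
        # (b-a)/n * sum of f at the left endpoints x_k = a + k(b-a)/n
        h = (b - a) / n
        return h * sum(f(a + k * h) for k in range(n))

    for n in [10, 100, 10000]:
        print(n, riemann(lambda x: x**2, 0.0, 1.0, n))   # tends to 1/3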
3.2 Integration by parts

The derivative of the product of two functions f, g is (fg)' = f'g + fg', such that we obtain, after integration,

f(x)g(x) = \int f'(x)g(x)\,dx + \int f(x)g'(x)\,dx,

which can be helpful to calculate one of the integrals on the right-hand side, if we know the other:

\int_a^b f'(x)g(x)\,dx = \left[ f(x)g(x) \right]_a^b - \int_a^b f(x)g'(x)\,dx.

Example: Integration by parts is very useful for the integration of trigonometric functions multiplied by power-law functions, as in

\int dx\, x\cos x = \int dx\, x(\sin x)' = x\sin x - \int dx\, \sin x = x\sin x + \cos x + c,

where c is a constant.
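As a numerical sanity check of this example (added here, not in the original notes), one can compare a Riemann sum of \int_0^b x cos x dx with the primitive x sin x + cos x evaluated between the bounds.

    import math

    def riemann(f, a, b, n=100000):
        h = (b - a) / n
        return h * sum(f(a + k * h) for k in range(n))

    b = 2.0
    numeric = riemann(lambda x: x * math.cos(x), 0.0, b)
    exact = (b * math.sin(b) + math.cos(b)) - (0.0 + 1.0)   # primitive at b minus primitive at 0
    print(numeric, exact)   # both close to 0.40244...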
3.3 Change of variable

Suppose that one can write x = g(u), where u represents another variable with which the integral can be calculated. We have then dx = g'(u)du and

\int_a^b f(x)\,dx = \int_{g^{-1}(a)}^{g^{-1}(b)} f(g(u))\,g'(u)\,du,

where g⁻¹ represents the inverse function of g: x = g(u) ⟺ u = g⁻¹(x). For the change of variable to be consistent, one must make sure that there is a one-to-one relation between x and u in the interval [a, b].

Example: In the following integral, one makes the change of variable u = sin θ, for 0 ≤ θ ≤ π/2:

\int_0^1 \frac{du}{\sqrt{1-u^2}} = \int_0^{\pi/2} \frac{\cos\theta\, d\theta}{\sqrt{1-\sin^2\theta}} = \int_0^{\pi/2} d\theta = \frac{\pi}{2}.

Note that, in the interval [0, π/2], we have

\sqrt{\cos^2\theta} = |\cos\theta| = \cos\theta,

since cos θ > 0.
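A numerical illustration of this example (added, not in the original notes): the u-integrand is singular at u = 1, so the sum is cut off just below 1 and compared with the corresponding θ-integral, whose value is arcsin(1−ε); both approach π/2 as ε → 0.

    import math

    def riemann(f, a, b, n=200000):
        # midpoint rule
        h = (b - a) / n
        return h * sum(f(a + (k + 0.5) * h) for k in range(n))

    eps = 1e-6
    lhs = riemann(lambda u: 1.0 / math.sqrt(1.0 - u*u), 0.0, 1.0 - eps)
    rhs = math.asin(1.0 - eps)        # value of the theta-integral after the substitution u = sin(theta)
    print(lhs, rhs, math.pi / 2)      # lhs ~ rhs, both slightly below pi/2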
3.4 Improper integrals

The domain of integration of an integral might either contain a singular point, where the function to integrate is not defined, or might not be bounded. In both cases, the corresponding integral is said to be convergent if the result of the integration is finite, and divergent if the result of the integration is infinite. We describe here this situation for the integration of power-law functions.

Case of a non-compact domain of integration: We first show that the integral

I_1 = \int_1^\infty \frac{dx}{x}

diverges. For this, one can see on a graph that

I_1 > \sum_{n=2}^{\infty} \frac{1}{n},

and we show that the sum of the inverses of the integers is divergent.
Proof - from the 14th century! The sum of the inverses of the integers, up to 2^N, can be written

\sum_{n=1}^{2^N} \frac{1}{n} = 1 + \left(\frac{1}{2}\right) + \left(\frac{1}{3}+\frac{1}{4}\right) + \left(\frac{1}{5}+\frac{1}{6}+\frac{1}{7}+\frac{1}{8}\right) + \left(\frac{1}{9}+\cdots+\frac{1}{16}\right) + \cdots

and satisfies

\sum_{n=1}^{2^N} \frac{1}{n} > 1 + \left(\frac{1}{2}\right) + \left(\frac{1}{4}+\frac{1}{4}\right) + \left(\frac{1}{8}+\frac{1}{8}+\frac{1}{8}+\frac{1}{8}\right) + \left(\frac{1}{16}+\cdots+\frac{1}{16}\right) + \cdots

The sum in each bracket is equal to 1/2, and there are N brackets, such that

\sum_{n=1}^{2^N} \frac{1}{n} > 1 + \frac{N}{2},

which shows that the sum goes to infinity when N goes to infinity. As a consequence, \int_1^\infty dx/x is divergent.

Consider now the integral, for a ≠ 1,

I_a = \int_1^\infty \frac{dx}{x^a} = \lim_{x \to \infty} \frac{x^{1-a} - 1}{1-a}.

As can be seen, the result depends on a:

- if a > 1 then I_a = 1/(a − 1) is finite;
- if a < 1 then I_a = +∞.

Since the integral I_a also diverges for a = 1, it converges only for a > 1.

Case of a singular point: Consider the integral, for b ≠ 1,

J_b = \int_0^1 \frac{dx}{x^b} = \lim_{x \to 0} \frac{1 - x^{1-b}}{1-b}.

As can be seen, the result depends on the power b:

- if b < 1 then J_b = 1/(1 − b) is finite;
- if b > 1 then J_b = +∞.

The integral J_b also diverges for b = 1 (the surface area is the same as in the previous case, with a non-compact domain of integration); it therefore converges only for b < 1. In general, we have:

\int_z^1 \frac{dx}{(x-z)^b} \text{ is convergent if } b < 1, \text{ divergent if } b \ge 1.

Example: Consider the integral

\int_1^\infty \frac{dx}{(x-1)^b (2x+3)^a}.

- At x = 1: the integrand is equivalent to 5^{-a}/(x−1)^b, such that there is convergence if b < 1;
- At x = ∞: the integrand is equivalent to 2^{-a}/x^{a+b}, such that there is convergence if a + b > 1.

As a consequence, the integral is convergent only if b < 1 and a + b > 1 simultaneously.
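The convergence criterion for a non-compact domain can be explored numerically (my addition, not in the original notes): truncating \int_1^X dx/x^a at growing X shows the integral settling towards 1/(a−1) for a > 1 and growing without bound for a ≤ 1.

    def riemann(f, a, b, n=200000):
        h = (b - a) / n
        return h * sum(f(a + (k + 0.5) * h) for k in range(n))

    for a in [2.0, 1.0, 0.5]:
        for X in [10.0, 100.0, 1000.0]:
            print(a, X, riemann(lambda x: x**(-a), 1.0, X))
    # a = 2.0: tends to 1/(a-1) = 1;  a = 1.0 and a = 0.5: keeps growing with X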
4 Logarithm and exponential functions

4.1 Logarithm

We have seen that

\int x^a\,dx = \frac{x^{a+1}}{a+1}, \qquad a \ne -1,

and we still have to define this integral for a = −1. For this, we introduce the logarithm as

\ln x = \int_1^x \frac{du}{u},

so that the logarithm gives the surface area between the function 1/u and the horizontal axis, from 1 to x > 0. The real logarithm is not defined for x < 0, since the corresponding surface area would be infinite. The number e is defined by ln e = 1, and e ≈ 2.718281828...

Properties:

- We have seen that the integrals \int_1^\infty dx/x and \int_0^1 dx/x both diverge, such that

\lim_{x \to \infty} \ln x = +\infty \quad\text{and}\quad \lim_{x \to 0} \ln x = -\infty.

- From the definition of the logarithm, one can see that

\ln(ab) = \int_1^{ab} \frac{du}{u} = \int_1^a \frac{du}{u} + \int_a^{ab} \frac{du}{u} = \ln a + \int_1^b \frac{dv}{v} = \ln a + \ln b,    (6)

where we make the change of variable u = av.

- One can also see that

\ln(x^a) = \int_1^{x^a} \frac{du}{u} = \int_1^x \frac{a\,dv}{v} = a\ln x,    (7)

where we make the change of variable u = v^a.

- We have in general, for any differentiable function f,

\int dx\, \frac{f'(x)}{f(x)} = \ln|f(x)| + c,

where c is a constant.

Logarithm in base a: The logarithm in base a is defined as

\log_a(x) = \frac{\ln x}{\ln a},

and is equal to 1 when x = a. Note that ln x = log_e(x).

Integral of the logarithm: To calculate \int \ln x\,dx, one uses an integration by parts:

\int \ln x\,dx = \int (x)'\ln x\,dx = x\ln x - \int x(\ln x)'\,dx = x\ln x - x + c,

where c is a constant.

Limits:

- When x → +∞: We show here the important limit

\lim_{x \to +\infty} \frac{\ln x}{x^a} = 0, \qquad a > 0,    (8)

which means that any (positive-power) power law goes to infinity quicker than the logarithm, when x → +∞.
Proof: For any u ≥ 1 and for any a > 0, we have

\frac{1}{u} \le \frac{1}{u^{1-a/2}}.

Integrating this inequality from 1 to x leads to

0 < \ln x \le \frac{2}{a}\left( x^{a/2} - 1 \right) < \frac{2}{a}\, x^{a/2}.

Dividing by x^a gives the expected result:

0 < \frac{\ln x}{x^a} \le \frac{2}{a}\, x^{-a/2} \to 0 \quad\text{when } x \to +\infty.

- When x → 0: Another important limit to know is

\lim_{x \to 0} x^a \ln x = 0, \qquad a > 0,    (9)

which means that any (positive-power) power law kills the divergence of the logarithm at x = 0.
Proof: For any u satisfying 0 < u ≤ 1 and any a > 0, we have

\frac{1}{u} \le \frac{1}{u^{1+a/2}}.

Integrating this inequality from x to 1, we obtain

0 < -\ln x \le \frac{2}{a}\left( x^{-a/2} - 1 \right) < \frac{2}{a}\, x^{-a/2}.

Multiplying by x^a gives the expected result:

0 \le x^a |\ln x| \le \frac{2}{a}\, x^{a/2} \to 0 \quad\text{when } x \to 0.
4.2 Exponential

The exponential is defined as the inverse function of the logarithm:

y = \ln x \;\Longleftrightarrow\; x = \exp y = e^y.

From property (6), if we note u = ln a and v = ln b, we have

\exp(u + v) = (\exp u)\times(\exp v),

and from property (7), if we note y = ln x, we have

\left( \exp y \right)^a = \exp(ay).

Derivative of the exponential: One can differentiate the identity exp(ln x) = x which, using the chain rule, gives (1/x) exp'(ln x) = 1, i.e. exp'(ln x) = x. We therefore conclude that the derivative of the exponential is the exponential itself:

\exp' x = \exp x.

Exponential of base a: This function is defined as

a^x = \exp(x\ln a),

which is consistent with the properties of the logarithm and the exponential. Its derivative is then

(a^x)' = (\ln a)\, a^x.

One can also define the function x^x, with derivative

(x^x)' = \frac{d}{dx}\left[ \exp(x\ln x) \right] = (1 + \ln x)\, x^x.

Limits:

- From the limit (8), if we note y = ln x and b = 1/a > 0, we have

\lim_{y \to +\infty} \frac{\exp y}{y^b} = +\infty,

and the exponential goes to infinity quicker than any power law.

- From the limit (9), if we note y = |ln x| and b = 1/a > 0, we have

\lim_{y \to +\infty} y^b \exp(-y) = 0,

and the decreasing exponential kills the divergence of any power law.
4.3 Hyperbolic functions

The hyperbolic functions are defined as

- hyperbolic cosine: cosh x = (e^x + e^{-x})/2;
- hyperbolic sine: sinh x = (e^x − e^{-x})/2;
- hyperbolic tangent: tanh x = sinh x / cosh x;
- hyperbolic cotangent: coth x = cosh x / sinh x,

and their derivatives are given by

\cosh' x = \sinh x
\sinh' x = \cosh x
\tanh' x = 1 - \tanh^2 x
\coth' x = 1 - \coth^2 x.

It can easily be seen that, from their definition, the functions cosh and sinh satisfy, for all x,

\cosh^2 x - \sinh^2 x = 1.

Also, it can be easily checked that

\cosh(2x) = \cosh^2(x) + \sinh^2(x)
\sinh(2x) = 2\sinh(x)\cosh(x).
5 Taylor expansions and series

5.1 Approximation of a function around a value of the argument

It is sometimes useful to approximate the value f(x) of a function f around f(x_0). The first approximation consists in replacing f(x) by a linear function p_1 (polynomial of first order, representing a straight line) in a small interval around x_0:

f(x) \simeq p_1(x) = a_0 + a_1(x - x_0).

In order to find the coefficients a_0, a_1, one imposes the constraints p_1(x_0) = f(x_0) and p_1'(x_0) = f'(x_0), such that a_0 = f(x_0) and a_1 = f'(x_0). If one wants a better approximation, one can choose to approximate f locally by a quadratic function p_2 (polynomial of second order, representing an arc of parabola), which is better than a straight line. One then writes

f(x) \simeq p_2(x) = a_0 + a_1(x - x_0) + a_2(x - x_0)^2,

and imposes the additional constraint p_2''(x_0) = f''(x_0), such that f''(x_0) = 2a_2. If one wishes to push the precision of the approximation further, one can take the third-order polynomial

f(x) \simeq p_3(x) = a_0 + a_1(x - x_0) + a_2(x - x_0)^2 + a_3(x - x_0)^3,

and impose the additional constraint p_3'''(x_0) = f'''(x_0), leading to f'''(x_0) = 2 × 3 a_3, and so on.
Going on like this finally leads to the Taylor expansion of the function f:

f(x) \simeq f(x_0) + (x - x_0)f'(x_0) + \frac{1}{2!}(x - x_0)^2 f''(x_0) + \frac{1}{3!}(x - x_0)^3 f'''(x_0) + \cdots,

where the dots represent higher powers of the difference x − x_0, which are smaller and smaller as the order of the Taylor expansion increases. Obviously, such an expansion is valid only if the function is differentiable a number of times large enough to reach the desired order.
Note that a polynomial function of order N is exactly equal to its Taylor expansion of order N.
The power n of the first neglected term in the expansion of a function around x_0 is denoted O(x − x_0)^n, and means terms which are at least of the power n.
5.2 Radius of convergence and series

For many functions, the Taylor expansion around x_0 can be pushed to an infinite order, at least in a vicinity of x_0: if |x − x_0| < R, where R is the radius of convergence, then the series is convergent and one can write

f(x) = \sum_{n=0}^{\infty} \frac{1}{n!}\, f^{(n)}(x_0)\,(x - x_0)^n,

where f^{(n)}(x_0) denotes the n-th derivative of f at x_0.

Ratio convergence test: Consider the geometric series S = \sum_{n=0}^{N-1} q^n. An expression for this sum can be obtained by noting that qS = S − 1 + q^N, and hence

S = 1 + q + q^2 + \cdots + q^{N-1} = \frac{1 - q^N}{1 - q}.

From this expression, we see that, if |q| < 1, then \lim_{N\to\infty} S = 1/(1 − q) is finite, and if |q| ≥ 1, then S does not converge when N → ∞.
More generally, for any series \sum_n a_n, one can compare the ratio of two consecutive terms, and conclude, from the behaviour of the geometric series, the following ratio convergence test:

- if \lim_{n\to\infty} |a_{n+1}/a_n| < 1, the series is (absolutely) convergent;
- if \lim_{n\to\infty} |a_{n+1}/a_n| > 1, the series is divergent;
- if \lim_{n\to\infty} |a_{n+1}/a_n| = 1, one cannot conclude, and each case has to be looked at individually.

The convergence of the Taylor series of a function f about x_0 therefore depends on |x − x_0|, and the radius of convergence of the series is defined by

\lim_{n \to \infty} \left| \frac{f^{(n+1)}(x_0)}{f^{(n)}(x_0)}\, \frac{R}{n+1} \right| = 1.    (10)
5.3 Examples

By calculating the different derivatives of the following functions at x = 0, one can easily see that

\cos x = \sum_{n=0}^{\infty} (-1)^n \frac{x^{2n}}{(2n)!}, \qquad R = \infty;

\sin x = \sum_{n=0}^{\infty} (-1)^n \frac{x^{2n+1}}{(2n+1)!}, \qquad R = \infty;

\exp x = \sum_{n=0}^{\infty} \frac{x^n}{n!}, \qquad R = \infty;

\frac{1}{1+x} = \sum_{n=0}^{\infty} (-1)^n x^n, \qquad R = 1;

\ln(1+x) = \sum_{n=0}^{\infty} (-1)^n \frac{x^{n+1}}{n+1}, \qquad R = 1.
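These expansions are easy to check numerically. The sketch below (an addition to the notes) sums the first terms of the series for exp, cos and ln(1+x) and compares them with the library functions; for ln(1+x) the value |x| must stay below the radius of convergence R = 1.

    import math

    def exp_series(x, N=20):
        return sum(x**n / math.factorial(n) for n in range(N))

    def cos_series(x, N=20):
        return sum((-1)**n * x**(2*n) / math.factorial(2*n) for n in range(N))

    def log1p_series(x, N=200):
        return sum((-1)**n * x**(n + 1) / (n + 1) for n in range(N))

    x = 0.7
    print(exp_series(x), math.exp(x))
    print(cos_series(x), math.cos(x))
    print(log1p_series(x), math.log(1 + x))   # converges because |x| < 1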
Counter-example: Consider the function f(x) = exp(−1/x). If we note y = 1/x > 0, we have

f(0) = \lim_{y \to +\infty} \exp(-y) = 0

f'(0) = \lim_{x \to 0} \frac{1}{x^2}\exp(-1/x) = \lim_{y \to +\infty} y^2 \exp(-y) = 0

f''(0) = \lim_{x \to 0} \left( \frac{1}{x^4} - \frac{2}{x^3} \right)\exp(-1/x) = \lim_{y \to +\infty} \left( y^4 - 2y^3 \right)\exp(-y) = 0

etc. As a consequence,

f(0) + x f'(0) + \frac{x^2}{2!} f''(0) + \frac{x^3}{3!} f'''(0) + \cdots = 0,

and no Taylor expansion of f can be defined around 0, whereas f(0) = 0 is defined.
5.4 Expansion for a composition of functions

Suppose that two functions f_1, f_2 have the following expansions around x = 0, to the order x^3:

f_1(x) = a_1 + b_1 x + c_1 x^2 + d_1 x^3 + O(x^4)
f_2(x) = a_2 + b_2 x + c_2 x^2 + d_2 x^3 + O(x^4).

The expansion of the product f_1 f_2 can then be obtained up to the order 3 at most, and is

f_1(x)f_2(x) = a_1 a_2 + (a_1 b_2 + b_1 a_2)x + (a_1 c_2 + b_1 b_2 + c_1 a_2)x^2 + (a_1 d_2 + b_1 c_2 + c_1 b_2 + d_1 a_2)x^3 + O(x^4).

Example: To calculate the expansion of tan x up to the order x^5, we first expand the inverse of cos x to the order x^5:

\frac{1}{\cos x} = \left( 1 - \frac{x^2}{2} + \frac{x^4}{24} \right)^{-1} + O(x^6) = 1 + \frac{x^2}{2} + \frac{5}{24}x^4 + O(x^6),

and then multiply by the expansion of sin x to the order x^5:

\tan x = \left( x - \frac{x^3}{6} + \frac{x^5}{120} \right)\left( 1 + \frac{x^2}{2} + \frac{5}{24}x^4 \right) + O(x^7) = x + \frac{x^3}{3} + \frac{2}{15}x^5 + O(x^7).
6 Vector calculus

6.1 Vectors

A vector u has a direction, given by the unit vector û, and a modulus |u|, and can be written

u = |u|\,û.

n vectors u_1, ..., u_n are said to be linearly independent if

a_1 u_1 + \cdots + a_n u_n = 0 \;\Longrightarrow\; a_1 = \cdots = a_n = 0,

which means that these vectors point in different directions, and none of them can be obtained as a linear combination of the others.
A vector space V of dimension d is a set of vectors spanned by d independent vectors, and is a group for the addition. A set of basis vectors in V is made of d linearly independent vectors i_1, ..., i_d, and any other vector can be decomposed onto this basis:

u = a_1 i_1 + \cdots + a_d i_d,

where (a_1, ..., a_d) are the coordinates of u in this basis. A change of basis leads to a change of coordinates.

Addition of vectors: Vectors can be added according to the rule (for example in three dimensions)

u_1 + u_2 = (x_1 i + y_1 j + z_1 k) + (x_2 i + y_2 j + z_2 k) = (x_1 + x_2)i + (y_1 + y_2)j + (z_1 + z_2)k.

Example: The set of polynomials of order N is an (N + 1)-dimensional vector space.
Proof: Consider the polynomials p_n(x) = x^n, n = 0, ..., N, and a set of constants c_n such that, for any x, we have c_0 p_0(x) + c_1 p_1(x) + ... + c_N p_N(x) = 0. Then we necessarily have c_n = 0 for all n, since a polynomial of degree N has at most N zeros. As a consequence, the polynomials p_n are linearly independent, and span an (N + 1)-dimensional vector space, where each vector can be written

P = \sum_{n=0}^{N} a_n p_n,

and the a_n are the coordinates of P in the basis {p_n, n = 0, ..., N}.
6.2 Rotations in two dimensions

(i, j) form an orthonormal basis in a plane. After a rotation of angle θ, the basis has changed to (i', j') where (see fig.7)

i' = \cos\theta\, i + \sin\theta\, j
j' = -\sin\theta\, i + \cos\theta\, j.    (11)

Figure 7: Rotation of the unit vectors.

From these relations, one can easily express the vectors i, j in the basis (i', j') by making the inverse rotation (θ → −θ), which leads to

i = \cos\theta\, i' - \sin\theta\, j'
j = \sin\theta\, i' + \cos\theta\, j'.    (12)

The vector u = (a, b) is then transformed into the vector u' = (a', b') such that

u' = a\,i' + b\,j' = (a\cos\theta - b\sin\theta)\,i + (a\sin\theta + b\cos\theta)\,j = a'\,i + b'\,j,

and therefore

a' = a\cos\theta - b\sin\theta
b' = a\sin\theta + b\cos\theta.

Equivalently, we also have

a = a'\cos\theta + b'\sin\theta
b = -a'\sin\theta + b'\cos\theta.

Trigonometric formulas: One way to find the expressions for sin(α + β) and cos(α + β) in terms of sin α, cos α, sin β, cos β is to perform two consecutive rotations, of angles α and β respectively, and identify the result with a rotation of angle α + β. We have seen that a rotation of angle α of the basis vectors (i, j) gives

i' = \cos\alpha\, i + \sin\alpha\, j
j' = -\sin\alpha\, i + \cos\alpha\, j.

A second rotation, of angle β, leads to

i'' = \cos\beta\, i' + \sin\beta\, j' = (\cos\beta\cos\alpha - \sin\beta\sin\alpha)\,i + (\cos\beta\sin\alpha + \sin\beta\cos\alpha)\,j
j'' = -\sin\beta\, i' + \cos\beta\, j' = (-\sin\beta\cos\alpha - \cos\beta\sin\alpha)\,i + (-\sin\beta\sin\alpha + \cos\beta\cos\alpha)\,j.

This must be equivalent to

i'' = \cos(\alpha+\beta)\, i + \sin(\alpha+\beta)\, j
j'' = -\sin(\alpha+\beta)\, i + \cos(\alpha+\beta)\, j,

such that

\cos(\alpha+\beta) = \cos\alpha\cos\beta - \sin\alpha\sin\beta
\sin(\alpha+\beta) = \sin\alpha\cos\beta + \cos\alpha\sin\beta.

Don't learn this by heart, but rather remember how to get the result.
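The rotation formulas for the coordinates (a, b) → (a', b') can be written as a small function; the sketch below (added here, not part of the original notes) also checks that two successive rotations of angles α and β give the same result as a single rotation of angle α + β, which is the content of the addition formulas.

    import math

    def rotate(a, b, t):
        # coordinates of the rotated vector, as derived above
        return (a * math.cos(t) - b * math.sin(t),
                a * math.sin(t) + b * math.cos(t))

    a, b = 2.0, -1.0
    alpha, beta = 0.4, 1.1
    step_by_step = rotate(*rotate(a, b, alpha), beta)
    at_once = rotate(a, b, alpha + beta)
    print(step_by_step, at_once)   # identical up to rounding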
6.3 Scalar product

Let u and v be two vectors in a plane, with coordinates (a, b) and (c, d) respectively. From these two vectors, one wishes to construct a quantity which is unchanged after a rotation (= a scalar). The scalar product of u and v is defined as

u \cdot v = |u||v|\cos(u, v),

and is indeed unchanged after a simultaneous rotation of both vectors u and v. One can easily express the scalar product in terms of the coordinates of the vectors, by doing the following. Let us denote by (a', b') and (c', 0) the coordinates of u and v respectively, in the orthonormal basis (i', j') where i' is along v. In the basis (i', j'), the scalar product is obviously given by u · v = a'c', with

a' = a\cos\theta - b\sin\theta
b' = a\sin\theta + b\cos\theta
c' = c\cos\theta - d\sin\theta
0 = c\sin\theta + d\cos\theta.

Together with

\cos\theta = \frac{c}{\sqrt{c^2+d^2}}, \qquad \sin\theta = \frac{-d}{\sqrt{c^2+d^2}},

one easily obtains a'c' = ac + bd. The scalar product is then given by the expression

u \cdot v = ac + bd.

More generally, in d dimensions, the scalar product of u = (x_1, ..., x_d) and v = (y_1, ..., y_d) is

u \cdot v = \sum_{i=1}^{d} x_i y_i.

Example: Find the equation of the plane perpendicular to the vector u = (1, 2, 1), and containing the point A of coordinates (3, 4, 2).
Any point M of coordinates (x, y, z) of this plane is such that AM · u = 0, which reads

(x - 3) + 2(y - 4) + (z - 2) = 0, \quad\text{or}\quad x + 2y + z = 13.
6.4 Cross product

One often needs to define, from two vectors u, v, a third vector which is perpendicular to u and v. The cross product u × v is

u \times v = |u||v|\sin(u, v)\, n,

where n is the unit vector perpendicular to the plane spanned by u, v, which defines the anticlockwise direction. If (i, j, k) form the usual orthonormal basis, we have

i \times j = k
j \times k = i
k \times i = j.

From this, it is easy to find the coordinates of the cross product of u = (a_1, a_2, a_3) with v = (b_1, b_2, b_3), which are

u \times v = \begin{pmatrix} a_2 b_3 - a_3 b_2 \\ a_3 b_1 - a_1 b_3 \\ a_1 b_2 - a_2 b_1 \end{pmatrix}.

Note that the cross product is a vector, unlike the scalar product which is a number. Finally, the cross product is not commutative, since

u \times v = -\,v \times u.
6.5 Scalar triple product

If (u, v, w) are three vectors, one defines the scalar triple product by u · (v × w), and one can check that a cyclic permutation does not change the result:

u \cdot (v \times w) = w \cdot (u \times v) = v \cdot (w \times u).
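A short sketch (not part of the original notes) implementing the scalar, cross and scalar triple products for three-dimensional vectors, and checking the cyclic-permutation property stated above.

    def dot(u, v):
        return sum(ui * vi for ui, vi in zip(u, v))

    def cross(u, v):
        return (u[1]*v[2] - u[2]*v[1],
                u[2]*v[0] - u[0]*v[2],
                u[0]*v[1] - u[1]*v[0])

    u, v, w = (1.0, 2.0, 1.0), (0.0, -1.0, 3.0), (2.0, 0.5, -1.0)
    print(dot(u, cross(v, w)), dot(w, cross(u, v)), dot(v, cross(w, u)))
    # the three scalar triple products are equal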

Figure 8: Polar coordinates (r, θ) of the point M. The orientation of the basis vectors e_r and e_θ depends on the position of M, such that e_r is always along OM and e_θ is the image of e_r under a rotation of angle π/2.

6.6 Polar coordinates

We denote by O the origin of space and by (i, j, k) the orthogonal and unit basis vectors of Euclidean coordinates (x, y, z). Points in the plane (O, i, j) can also be labelled by the polar coordinates (r, θ) (see fig.8), such that

r = \sqrt{x^2 + y^2} \quad\text{with } 0 \le r < \infty,
\tan\theta = \frac{y}{x} \quad\text{with } 0 \le \theta < 2\pi.

The orthogonal and unit basis vectors (e_r, e_θ) in polar coordinates are defined by

e_r = \cos\theta\, i + \sin\theta\, j
e_\theta = -\sin\theta\, i + \cos\theta\, j.

Note that

\frac{de_r}{d\theta} = e_\theta, \qquad \frac{de_\theta}{d\theta} = -e_r.
7 Complex numbers

7.1 Introduction

Complex numbers can be seen as two-dimensional vectors in the complex plane, spanned by the basis (1, i), where i² = −1. In Cartesian coordinates, a complex number z can be written

z = a \times 1 + b \times i = a + ib,

where a is the real part of z and b the imaginary part. The complex conjugate z* is then defined as

z^* = a - ib.

Complex numbers can be added, or multiplied, to give a new complex number:

z_1 + z_2 = (a_1 + ib_1) + (a_2 + ib_2) = a_1 + a_2 + i(b_1 + b_2)
z_1 z_2 = (a_1 + ib_1)(a_2 + ib_2) = a_1 a_2 - b_1 b_2 + i(a_1 b_2 + a_2 b_1).

This is because the set of complex numbers C is a group for both the addition and the multiplication. Finally, the modulus of z is defined as

|z| = |z^*| = \sqrt{a^2 + b^2} = \sqrt{z z^*}.
7.2 Complex exponential

Complex numbers, seen as two-dimensional vectors, can be expressed using polar coordinates (r, θ):

z = r(\cos\theta + i\sin\theta).

Using the series expansions for cosine and sine, we find

z = r\sum_{n=0}^{\infty}\left[ (-1)^n \frac{\theta^{2n}}{(2n)!} + i(-1)^n \frac{\theta^{2n+1}}{(2n+1)!} \right]
= r\sum_{n=0}^{\infty}\left[ \frac{(i\theta)^{2n}}{(2n)!} + \frac{(i\theta)^{2n+1}}{(2n+1)!} \right]
= r\sum_{n=0}^{\infty} \frac{(i\theta)^n}{n!} = r\exp(i\theta).

r is the modulus of the complex number z, and θ is its argument, and the last result leads to Euler's formula:

\cos\theta + i\sin\theta = \exp(i\theta).    (13)

From this, it is easy to find de Moivre's formula: noting that [exp(iθ)]^m = exp(imθ), where m is any integer, we have

(\cos\theta + i\sin\theta)^m = \cos(m\theta) + i\sin(m\theta).

Example: The number −1 has modulus 1 and argument π (in the interval [0, 2π[), and can therefore be written

-1 = e^{i\pi}.

This equation relates three fundamental numbers, which are 1, e, π.
One also has i = e^{iπ/2}, such that

i^i = \exp(i\ln i) = \exp(i \times i\pi/2) = e^{-\pi/2} \approx 0.208.    (14)

Note that the logarithm of a complex number z is a multi-valued function: its definition depends on the range of angles in which the argument of z is considered. Indeed, if θ → θ + 2kπ, where k is an integer, z is invariant, but its logarithm changes as

\ln z \to \ln z + 2ik\pi.

As a result, i^i as given in eq.(14) is the value obtained when the arguments of complex numbers are defined in [0, 2π[.
7.3 Trigonometric formula

From Euler's formula (13), one can express cosine and sine with complex exponentials:

\cos\theta = \frac{e^{i\theta} + e^{-i\theta}}{2}, \qquad \sin\theta = \frac{e^{i\theta} - e^{-i\theta}}{2i},

and therefore one can also express the n-th power of cosine and sine in terms of cosines and sines of n times the argument. For example:

(\cos\theta)^2 = \frac{1}{4}\left( e^{2i\theta} + e^{-2i\theta} + 2 \right) = \frac{1}{2} + \frac{1}{2}\cos(2\theta)

(\sin\theta)^3 = \frac{i}{8}\left( e^{3i\theta} - e^{-3i\theta} - 3e^{i\theta} + 3e^{-i\theta} \right) = \frac{3}{4}\sin\theta - \frac{1}{4}\sin(3\theta).    (15)

These formulas are useful when one needs to integrate expressions involving powers of cosine or sine. Do not learn these expressions by heart, but derive them whenever you need them.
7.4 Roots of complex numbers

Consider the equation z^n = A, where A is a given complex number and z is the unknown. In order to solve this equation, one writes

A = \rho\exp(i\alpha), \qquad z = r\exp(i\theta).

The equation to solve is then r^n exp(inθ) = ρ exp(iα), which leads to, after identification of the modulus and the argument of both sides of the equation z^n = A,

r = \rho^{1/n} = \sqrt[n]{\rho}, \qquad \theta = \frac{\alpha}{n} + \frac{2k\pi}{n},

where k = 0, 1, ..., n−1. Therefore a complex number has n roots of order n.
For example, the n-th roots of unity are

z_k = \exp\left( 2i\pi\,\frac{k}{n} \right), \qquad k = 0, 1, ..., n-1.
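Python's complex type makes the n-th roots easy to illustrate (this sketch is an addition to the notes): each z_k below satisfies z_k^n ≈ 1.

    import cmath

    n = 5
    roots = [cmath.exp(2j * cmath.pi * k / n) for k in range(n)]
    for z in roots:
        print(z, z**n)   # z**n equals 1 up to rounding errors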
7.5 Relation to hyperbolic functions

We have seen that a function can usually be expanded as a series of powers of the argument. Since complex numbers can be multiplied and added, one can express a Taylor expansion for a complex variable. It is therefore possible to understand a function of a complex variable in terms of a series expansion. We give here two examples.
From the expressions of cosine and sine in terms of complex exponentials (section 7.3), we have for any real x

\sin(ix) = i\sinh x
\cos(ix) = \cosh x,

which gives a formal way to define trigonometric functions with complex arguments.
8 Linear differential equations

A differential equation gives a relation between a function f and its derivatives f', f'', .... This relation must be valid for any value of the argument x of f, which implies that f must have a specific form.

8.1 First order, homogeneous

Let us consider the homogeneous equation

f'(x) = a(x) f(x),    (16)

valid for any value of the argument x of f, and where a is a given function of x. Suppose that f_1 is a solution of eq.(16), and suppose that f_2 is another solution. We have then

\frac{f_1'(x)}{f_1(x)} = \frac{f_2'(x)}{f_2(x)},

such that, after integration,

\ln|f_2(x)| = \ln|f_1(x)| + k,

where k is a constant. Taking the exponential of this, one finds

f_2(x) = c\,f_1(x),

where c = exp(k), and therefore f_2 and f_1 are proportional: the set of solutions of the equation (16) is a one-dimensional vector space.
For the equation (16), the solution can be derived by using the separation of variables method, which consists in writing the equation in the form

\frac{df}{f(x)} = a(x)\,dx,

which, after integration, leads to

\ln\left( \frac{f(x)}{f_0} \right) = \int_{x_0}^{x} a(u)\,du,

where f_0 = f(x_0), such that

f(x) = f_0 \exp\left( \int_{x_0}^{x} a(u)\,du \right).

Example: Consider the equation

f'(x) = a f(x) + b,

where a, b are constants. If one defines g(x) = f(x) + b/a, one sees that g satisfies g'(x) = a g(x), and one can use the previous result to find

f(x) = g_0 \exp(ax) - \frac{b}{a},

where g_0 = f(0) + b/a is a constant of integration.
8.2 Variation of parameters method

We consider now the non-homogeneous equation

f'(x) = a(x) f(x) + h(x),    (17)

where h is a given function of x. If we suppose that f_1 is a specific solution of eq.(17), we have

[f(x) - f_1(x)]' = a(x)[f(x) - f_1(x)],

such that the general solution of eq.(17) can be written

f(x) = c \exp\left( \int_{x_0}^{x} a(u)\,du \right) + f_1(x),

where c = f(x_0) − f_1(x_0). In order to find a specific solution f_1, one can try

f_1(x) = \phi(x)\exp\left( \int_{x_0}^{x} a(u)\,du \right),

where φ(x) is a function to be found. Plugging this ansatz into eq.(17), one finds

\phi'(x) = h(x)\exp\left( -\int_{x_0}^{x} a(u)\,du \right),

which, after an integration, gives the function φ.

Example: Consider the equation

f'(x) = a f(x) + 2x\,e^{ax},

where a is a constant. The general solution of the homogeneous equation is A exp(ax), and the variation of parameters method consists in finding a specific solution of the form φ(x) exp(ax), which leads to

\phi'(x) = 2x.

The general solution is therefore

f(x) = (A + x^2)\exp(ax).
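As a check of this result (added here, not in the original notes), a crude Euler integration of f'(x) = a f(x) + 2x e^{ax}, started from f(0) = A, can be compared with the closed form (A + x²) e^{ax}.

    import math

    a, A = 0.5, 1.0
    h, x, f = 1e-4, 0.0, A          # step size and initial condition f(0) = A
    while x < 2.0:
        f += h * (a * f + 2 * x * math.exp(a * x))   # Euler step
        x += h
    print(f, (A + x**2) * math.exp(a * x))           # the two values are close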
8.3 Second order, homogeneous

We consider the following differential equation

f''(x) + a(x) f'(x) + b(x) f(x) = 0,    (18)

where a, b are functions of x. We will see with several examples that it is possible to find at least two linearly independent solutions f_1, f_2 of eq.(18). Suppose that f_3 is a third solution: we show now that, necessarily, f_3 is a linear combination of f_1 and f_2.
Proof: From eq.(18), we find easily

f_i(x) f_3''(x) - f_3(x) f_i''(x) + a(x)\left[ f_i(x) f_3'(x) - f_3(x) f_i'(x) \right] = 0, \qquad i = 1, 2,

which, in terms of the Wronskians W_i(x) = f_i(x) f_3'(x) − f_3(x) f_i'(x), read

W_i'(x) + a(x) W_i(x) = 0, \qquad i = 1, 2.

These equations can be integrated to give

W_i(x) = A_i \exp\left( -\int a(x)\,dx \right), \qquad i = 1, 2,

and we conclude that

A_1\left[ f_2(x) f_3'(x) - f_3(x) f_2'(x) \right] = A_2\left[ f_1(x) f_3'(x) - f_3(x) f_1'(x) \right].

This equation can be written

\frac{f_3'(x)}{f_3(x)} = \frac{A_1 f_2'(x) - A_2 f_1'(x)}{A_1 f_2(x) - A_2 f_1(x)},

and leads, after integrating and taking the exponential, to

f_3(x) = C_1 f_1(x) + C_2 f_2(x),

where the C_i are constants. This shows that f_3 is necessarily in the vector space spanned by f_1 and f_2.

Example: Consider the following differential equation

f''(x) + 2a f'(x) + b f(x) = 0,    (19)

where a, b are constants. In order to find two independent solutions of this equation, we assume the following x-dependence:

f(x) = \exp(zx),

where z is a constant, which can be complex. This assumption leads to

z^2 + 2az + b = 0,

which has the following solutions:

- if a² > b:

z_\pm = -a \pm k,

where k = \sqrt{a^2 - b}. The general solution of the differential equation (19) is then

f(x) = \exp(-ax)\left[ A\exp(kx) + B\exp(-kx) \right] = \exp(-ax)\left[ C\cosh(kx) + D\sinh(kx) \right],

where C = A + B and D = A − B are constants.

- if a² < b:

z_\pm = -a \pm ik,

where k = \sqrt{b - a^2}, and the general solution is

f(x) = \exp(-ax)\,\mathrm{Re}\left\{ \tilde A\exp(ikx) + \tilde B\exp(-ikx) \right\} = \exp(-ax)\left[ A\cos(kx) + B\sin(kx) \right],

where \tilde A, \tilde B are complex constants, and A = Re{\tilde A + \tilde B}, B = Im{\tilde B − \tilde A}. The latter expression can also be written

f(x) = f_0\, e^{-ax}\cos(kx + \varphi_0),

where f_0 = \sqrt{A^2 + B^2} and tan φ_0 = −B/A.

- if a² = b: In this case, z_+ = z_− and the assumption f(x) = exp(zx) gives one solution only, which is exp(−ax). In order to find a second linearly independent solution of the differential equation (19), we assume the form

f(x) = x\exp(wx),

where w is a constant, which leads to

2(w + a) + (w^2 + 2aw + b)x = 0.

This equation must be valid for any x, such that necessarily

w + a = 0 \quad\text{and}\quad w^2 + 2aw + b = 0,

for which the only solution is w = −a. Finally, the general solution of the differential equation (19) is

f(x) = (A + Bx)\exp(-ax),

where A, B are constants.
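A quick numerical verification of the oscillatory case (an addition to the notes): for a² < b, the claimed solution e^{-ax}(A cos kx + B sin kx) should satisfy f'' + 2a f' + b f = 0; below, the derivatives are approximated by central finite differences and the residual is seen to be tiny.

    import math

    a, b = 0.3, 2.0
    k = math.sqrt(b - a * a)
    A, B = 1.0, -0.7

    def f(x):
        return math.exp(-a * x) * (A * math.cos(k * x) + B * math.sin(k * x))

    h = 1e-4
    for x in [0.0, 1.0, 2.5]:
        f1 = (f(x + h) - f(x - h)) / (2 * h)          # first derivative
        f2 = (f(x + h) - 2 * f(x) + f(x - h)) / h**2  # second derivative
        print(x, f2 + 2 * a * f1 + b * f(x))          # residual ~ 0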
8.4 Second order, non-homogeneous

We consider now the equation

f'' + a(x) f' + b(x) f = g(x),    (20)

where g is a given function of x, and suppose that f_s is a specific solution of the equation. We have then

(f - f_s)'' + a(x)(f - f_s)' + b(x)(f - f_s) = 0,

and the results derived for a homogeneous differential equation hold for the difference f − f_s, such that the general solution of the equation (20) is

f(x) = A f_1(x) + B f_2(x) + f_s(x),

where A, B are constants, and f_1, f_2 are linearly independent.

8.5 General properties

In general, the set of solutions of a homogeneous linear differential equation of order n is an n-dimensional vector space, spanned by n linearly independent specific solutions. The n constants of integration can then be seen as the coordinates of the solutions in the basis of the n linearly independent specific solutions, and their values are given by n boundary conditions.

8.6 Separation of variables method

We finally give an example of a non-linear differential equation, solved by the separation of variables method. Consider the following equation,

f'(x) = x^3 f^2(x).

This can also be written, when f(x) ≠ 0,

\frac{df}{f^2} = x^3\,dx,

such that the left-hand side contains the variable f only and the right-hand side contains the variable x only. Both sides can then be integrated separately, which leads to

-\frac{1}{f} + \frac{1}{f_0} = \frac{x^4}{4},

where f_0 = f(0), and the solution is finally

f(x) = \frac{f_0}{1 - f_0\,x^4/4}.
9 Linear algebra

9.1 Linear function

A linear function l of a variable x satisfies, by definition,

l(ax + by) = a\,l(x) + b\,l(y),

for any constants a, b and any variables x, y. If x is a number, the only possibility is

l(x) = kx,    (21)

where k is a constant. We will now generalize this to linear functions applied to vectors.

9.2 Matrices

We have seen in section 6 that the rotation of angle θ of the vector of coordinates u = (u_1, u_2) in the plane leads to the vector u' = (u_1', u_2') with

u_1' = u_1\cos\theta - u_2\sin\theta
u_2' = u_1\sin\theta + u_2\cos\theta.

A rotation is linear, and in order to generalize eq.(21), we would like to write it in the form

u' = R\,u,

where R represents the rotation. This can be satisfied if R is a 2×2 array with components R_{ij}, i, j = 1, 2, such that

R_{11} = \cos\theta, \quad R_{12} = -\sin\theta, \quad R_{21} = \sin\theta, \quad R_{22} = \cos\theta,

where i represents the row and j the column. We have then

u_1' = R_{11} u_1 + R_{12} u_2
u_2' = R_{21} u_1 + R_{22} u_2,

which can be written

\begin{pmatrix} u_1' \\ u_2' \end{pmatrix} = \begin{pmatrix} R_{11} & R_{12} \\ R_{21} & R_{22} \end{pmatrix} \begin{pmatrix} u_1 \\ u_2 \end{pmatrix},

where the multiplication rule is

u_i' = \sum_{j=1}^{2} R_{ij} u_j.

More generally, any linear transformation of an n-dimensional vector u = (u_1, ..., u_n) can be written in the form

u_i' = \sum_{j=1}^{n} M_{ij} u_j \qquad \text{for } i = 1, ..., n,

where the M_{ij} are the components of a matrix M which represents the linear transformation. Besides rotations, other linear transformations can be: projections, scalings, ..., as well as compositions of these.
A matrix S is said to be symmetric if S_{ij} = S_{ji}, and a matrix A is said to be antisymmetric if A_{ij} = −A_{ji}. Note that the full contraction of a symmetric matrix with an antisymmetric matrix vanishes: \sum_{i,j} S_{ij} A_{ij} = 0.
9.3 Determinants

Suppose one has the following system of equations

x' = ax + by
y' = cx + dy,    (22)

which can be written

\begin{pmatrix} x' \\ y' \end{pmatrix} = M \begin{pmatrix} x \\ y \end{pmatrix}, \quad\text{with}\quad M = \begin{pmatrix} a & b \\ c & d \end{pmatrix}.

One wishes to find (x, y) in terms of (x', y'), if possible, and therefore the inverse M^{-1} of the linear transformation represented by M:

\begin{pmatrix} x \\ y \end{pmatrix} = M^{-1} \begin{pmatrix} x' \\ y' \end{pmatrix}.

The system of equations (22) is equivalent to

(ad - bc)\,x = dx' - by'
(ad - bc)\,y = ay' - cx',    (23)

and leads to the following two cases:

- if ad − bc = 0, the previous set of equations is equivalent to dx' = by', or ay' = cx', such that the two equations of the system (22) are equivalent. There is thus an infinity of solutions (x, y), corresponding to the straight line of equation ax + by = x', or equivalently cx + dy = y'. In this case, the matrix M has no inverse, since there is no one-to-one relation between (x, y) and (x', y'). A typical example of such a situation is a projection onto a given straight line, since all the points on a perpendicular straight line are projected onto the same point.

- if ad − bc ≠ 0, there is one solution only to the system (23), which is

x = \frac{dx' - by'}{ad - bc}, \qquad y = \frac{ay' - cx'}{ad - bc}.    (24)

Therefore it is essential, in order to find a unique solution to the system of equations (22), and therefore to find an inverse of the matrix M, that the determinant ad − bc of M is not zero:

\det M = ad - bc \ne 0,

or in other words: a linear function represented by the matrix M has an inverse, represented by the matrix M^{-1}, if and only if det M ≠ 0. From the solution (24), one can see that the inverse of the matrix M is then

M^{-1} = \frac{1}{\det M} \begin{pmatrix} d & -b \\ -c & a \end{pmatrix}.

More generally, an n×n matrix has an inverse if and only if its determinant is not zero. The expression for the determinant involves sums of products of n elements of the matrix.
9.4 Composition of linear functions

Given two linear functions f_1 and f_2, represented by the matrices M_1 and M_2, with

M_1 = \begin{pmatrix} a_1 & b_1 \\ c_1 & d_1 \end{pmatrix}, \qquad M_2 = \begin{pmatrix} a_2 & b_2 \\ c_2 & d_2 \end{pmatrix},

we wish to represent the composition of functions

w = f_2(v) = f_2(f_1(u)).

We have, with u = (x, y),

v = M_1 u = \begin{pmatrix} v_1 \\ v_2 \end{pmatrix} = \begin{pmatrix} a_1 x + b_1 y \\ c_1 x + d_1 y \end{pmatrix},

and therefore

w = M_2 v = \begin{pmatrix} w_1 \\ w_2 \end{pmatrix} = \begin{pmatrix} (a_1 a_2 + c_1 b_2)x + (b_1 a_2 + d_1 b_2)y \\ (a_1 c_2 + c_1 d_2)x + (b_1 c_2 + d_1 d_2)y \end{pmatrix}.

This can also be written

w = M_2 M_1 u,

where the product of matrices M = M_2 M_1 is defined by

M_{ij} = \sum_{k=1,2} M_{2\,ik}\, M_{1\,kj}, \qquad i, j = 1, 2,

such that

M = \begin{pmatrix} a_1 a_2 + c_1 b_2 & b_1 a_2 + d_1 b_2 \\ a_1 c_2 + c_1 d_2 & b_1 c_2 + d_1 d_2 \end{pmatrix}.

Remark: In general, the two operations do not commute: f_2(f_1(u)) ≠ f_1(f_2(u)), and thus M_2 M_1 ≠ M_1 M_2.

Determinant of a product: The determinant of M = M_2 M_1 is

\det M = (a_1 a_2 + c_1 b_2)(b_1 c_2 + d_1 d_2) - (a_1 c_2 + c_1 d_2)(b_1 a_2 + d_1 b_2) = (a_1 d_1 - b_1 c_1)(a_2 d_2 - b_2 c_2),

such that

\det(M_2 M_1) = \det M_2 \times \det M_1 = \det(M_1 M_2).

The previous properties are also valid for n×n matrices, and the determinant of a matrix is also noted

\det \begin{pmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & & \vdots \\ a_{n1} & \cdots & a_{nn} \end{pmatrix} = \begin{vmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & & \vdots \\ a_{n1} & \cdots & a_{nn} \end{vmatrix}.
9.5 Eigenvectors and eigenvalues

Given a matrix M, an eigenvector e of M satisfies, by definition,

M e = \lambda e, \quad\text{with } e \ne 0,    (25)

where the real number λ is the eigenvalue of M corresponding to e. Therefore the effect of the matrix M on its eigenvector e is simply a rescaling, without change of direction.
An n×n matrix, operating on an n-dimensional vector space, can have at most n linearly independent eigenvectors. In this case, these vectors can constitute a basis (e_1, ..., e_n), and the corresponding matrix, in this basis, is diagonal, with the eigenvalues as its diagonal elements:

\Lambda = \begin{pmatrix} \lambda_1 & 0 & \cdots & 0 & 0 \\ 0 & \lambda_2 & \cdots & 0 & 0 \\ & & \ddots & & \\ 0 & 0 & \cdots & \lambda_{n-1} & 0 \\ 0 & 0 & \cdots & 0 & \lambda_n \end{pmatrix}.

In this case, the determinant is simply the product of the eigenvalues:

\det\Lambda = \lambda_1 \lambda_2 \cdots \lambda_n.

In order to find the eigenvalues of a matrix, the first step is to write the system of equations (25) in the following way:

\left[ M - \lambda\,\mathbf{1} \right] e = 0,

where 1 is the unit matrix. If the corresponding matrix M − λ1 had an inverse, the only solution to this system of equations would be e = 0. But if the initial matrix M has eigenvectors, these are not zero, and as a consequence M − λ1 has no inverse. Therefore its determinant vanishes:

\det\left[ M - \lambda\,\mathbf{1} \right] = 0.

This determinant is polynomial in λ, and the solutions of this equation give the expected eigenvalues.

Example: For a 2×2 matrix, we have

\begin{vmatrix} a - \lambda & b \\ c & d - \lambda \end{vmatrix} = (a - \lambda)(d - \lambda) - bc = 0,

such that the eigenvalues λ, if there are any, satisfy a quadratic equation.
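For a concrete 2×2 case (this sketch is an addition to the notes), the quadratic (a−λ)(d−λ) − bc = 0 can be solved by hand and compared with numpy.linalg.eig, which also returns the eigenvectors.

    import numpy as np

    M = np.array([[2.0, 1.0],
                  [1.0, 3.0]])
    # characteristic equation: (2-l)(3-l) - 1 = l^2 - 5l + 5 = 0
    disc = 5**2 - 4 * 5
    by_hand = [(5 - disc**0.5) / 2, (5 + disc**0.5) / 2]

    values, vectors = np.linalg.eig(M)
    print(sorted(by_hand))   # [1.381..., 3.618...]
    print(sorted(values))    # same eigenvalues
    print(vectors)           # columns are the corresponding eigenvectors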
10 Functions of several variables

10.1 Partial differentiation

If f is a function of two variables, which associates the value z = f(x, y) to the pair (x, y), one can define the partial derivative of f with respect to x, for a fixed value of y, and the partial derivative of f with respect to y, for a fixed value of x. These partial derivatives are denoted

\frac{\partial f}{\partial x} = \lim_{\Delta x \to 0} \frac{f(x+\Delta x, y) - f(x, y)}{\Delta x}
\frac{\partial f}{\partial y} = \lim_{\Delta y \to 0} \frac{f(x, y+\Delta y) - f(x, y)}{\Delta y}.

An important property of partial derivatives concerns their commutativity:

\frac{\partial^2 f}{\partial x\,\partial y} = \frac{\partial^2 f}{\partial y\,\partial x}.

Proof: From their definition, the partial derivatives satisfy

\frac{\partial^2 f}{\partial y\,\partial x} = \lim_{\Delta y \to 0} \frac{1}{\Delta y}\left[ \frac{\partial f}{\partial x}(x, y+\Delta y) - \frac{\partial f}{\partial x}(x, y) \right]
= \lim_{\Delta y \to 0}\lim_{\Delta x \to 0} \frac{1}{\Delta y\,\Delta x}\left[ f(x+\Delta x, y+\Delta y) - f(x, y+\Delta y) - f(x+\Delta x, y) + f(x, y) \right]
= \lim_{\Delta x \to 0} \frac{1}{\Delta x}\left[ \frac{\partial f}{\partial y}(x+\Delta x, y) - \frac{\partial f}{\partial y}(x, y) \right]
= \frac{\partial^2 f}{\partial x\,\partial y}.

Example: For the function f(x, y) = x^n cos(ay), where n and a are constants, we have

\frac{\partial f}{\partial x} = n x^{n-1}\cos(ay), \qquad \frac{\partial f}{\partial y} = -a x^n \sin(ay),

and of course

\frac{\partial^2 f}{\partial y\,\partial x} = -a n x^{n-1}\sin(ay) = \frac{\partial^2 f}{\partial x\,\partial y}.

Nabla operator: One defines the differential operator ∇ as the symbolic vector of components

\nabla = \left( \frac{\partial}{\partial x}, \frac{\partial}{\partial y}, \frac{\partial}{\partial z} \right),

which has to be understood as an operator applied to a scalar quantity φ or a vector E depending on the coordinates x, y, z:

\nabla\varphi(x, y, z) = \left( \frac{\partial\varphi}{\partial x}, \frac{\partial\varphi}{\partial y}, \frac{\partial\varphi}{\partial z} \right)

\nabla\cdot E(x, y, z) = \frac{\partial E_x}{\partial x} + \frac{\partial E_y}{\partial y} + \frac{\partial E_z}{\partial z}

\nabla\times E(x, y, z) = \left( \frac{\partial E_z}{\partial y} - \frac{\partial E_y}{\partial z},\ \frac{\partial E_x}{\partial z} - \frac{\partial E_z}{\partial x},\ \frac{\partial E_y}{\partial x} - \frac{\partial E_x}{\partial y} \right).
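The commutativity of partial derivatives can be checked numerically (an addition to the notes): for f(x, y) = x^n cos(ay), the symmetric mixed finite difference below approximates ∂²f/∂x∂y = ∂²f/∂y∂x = −a n x^{n−1} sin(ay).

    import math

    n, a = 3, 2.0

    def f(x, y):
        return x**n * math.cos(a * y)

    x, y, h = 1.2, 0.4, 1e-4
    mixed = (f(x + h, y + h) - f(x + h, y) - f(x, y + h) + f(x, y)) / h**2
    exact = -a * n * x**(n - 1) * math.sin(a * y)
    print(mixed, exact)   # the same value, whichever order the differences are taken in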
10.2 Differential of a function of several variables

We consider here the example of two variables x, y, and formally use the notations dx and dy for the infinitesimal limits of the increments Δx and Δy.
If f depends on two variables x, y, the change in f(x, y) has two contributions: one from the change Δx and one from the change Δy. A Taylor expansion in both variables leads to

f(x+\Delta x, y+\Delta y) = f(x, y+\Delta y) + \Delta x\,\frac{\partial f}{\partial x}(x, y+\Delta y) + O(\Delta x)^2
= f(x, y) + \Delta y\,\frac{\partial f}{\partial y}(x, y) + \Delta x\,\frac{\partial f}{\partial x}(x, y) + O\!\left( (\Delta x)^2, (\Delta y)^2, \Delta x\Delta y \right),

such that, if we note Δf = f(x+Δx, y+Δy) − f(x, y), we have

\Delta f = \Delta x\,\frac{\partial f}{\partial x} + \Delta y\,\frac{\partial f}{\partial y} + \cdots,    (26)

where the dots represent higher orders in Δx and Δy. In the limit where Δx → 0 and Δy → 0, we obtain the definition of the differential

df = \frac{\partial f}{\partial x}\,dx + \frac{\partial f}{\partial y}\,dy,    (27)

which is an exact identity, and can be interpreted as a vector in a two-dimensional vector space spanned by dx and dy, with coordinates ∂f/∂x and ∂f/∂y.

Remark: It is important to distinguish the symbols for partial and total derivatives. Indeed, in eq.(27), if y is a function of x, one can consider the function F(x) = f(x, y(x)), which, using the chain rule, has the following derivative:

F'(x) = \frac{df}{dx} = \frac{\partial f}{\partial x} + \frac{dy}{dx}\,\frac{\partial f}{\partial y} = \frac{\partial f}{\partial x} + y'(x)\,\frac{\partial f}{\partial y}.

As a consequence,

\frac{\partial f}{\partial x} \ne \frac{df}{dx}.

Finally, if a function depends on N variables, one can define the partial derivatives with respect to any of these variables, and these partial derivatives commute among each other.
10.3 Implicit functions

If the variables x, y, z are related by an equation of the form g(x, y, z) = 0, where g is a differentiable function, one can define each variable as a function of the other two (apart from possible singular points). We can then show that

\left( \frac{\partial x}{\partial y} \right)_z = \left( \frac{\partial y}{\partial x} \right)_z^{-1},

where the variable in subscript represents the one kept constant in the differentiation.
Proof: Since g(x, y, z) = 0, we have

dg = \frac{\partial g}{\partial x}\,dx + \frac{\partial g}{\partial y}\,dy + \frac{\partial g}{\partial z}\,dz = 0,

and if we consider the case z = constant, we have dz = 0, such that

\left( \frac{\partial x}{\partial y} \right)_z = \left. \frac{dx}{dy} \right|_{dz=0} = -\frac{\partial g/\partial y}{\partial g/\partial x} = \left( \left. \frac{dy}{dx} \right|_{dz=0} \right)^{-1} = \left( \frac{\partial y}{\partial x} \right)_z^{-1}.

Another important property is

\left( \frac{\partial x}{\partial y} \right)_z \left( \frac{\partial y}{\partial z} \right)_x \left( \frac{\partial z}{\partial x} \right)_y = -1.

Proof: We have

\left( \frac{\partial x}{\partial y} \right)_z = -\frac{\partial g/\partial y}{\partial g/\partial x}, \qquad \left( \frac{\partial y}{\partial z} \right)_x = -\frac{\partial g/\partial z}{\partial g/\partial y}, \qquad \left( \frac{\partial z}{\partial x} \right)_y = -\frac{\partial g/\partial x}{\partial g/\partial z},

such that the product of these three derivatives is −1.
10.4 Double integration

If f is a function depending on two variables x, y, one can define the function F(x) as

F(x) = \int_c^d f(x, y)\,dy,

and then the integral of F over an interval [a, b]:

I_1 = \int_a^b F(x)\,dx = \int_a^b \left[ \int_c^d f(x, y)\,dy \right] dx = \int_a^b dx \int_c^d dy\, f(x, y).

The product dxdy represents an infinitesimal surface area in the plane (O, x, y), and the integral is thus the volume between the rectangular area of surface |b − a||d − c| and the surface defined by z = f(x, y).
In the simple case where f is a product f(x, y) = φ(x)ψ(y), the latter integral is just a product of integrals:

I_1 = \int_a^b dx \int_c^d dy\, \varphi(x)\psi(y) = \left[ \int_a^b \varphi(x)\,dx \right] \times \left[ \int_c^d \psi(y)\,dy \right].

More generally, one can define a double integral over any area D which is not rectangular by

I_2 = \iint_D f(x, y)\,dx\,dy.

In this case, one can perform the integrals in whichever order: first over x and then over y, or the opposite:

I_2 = \int_{x_1}^{x_2} \left[ \int_{y_1(x)}^{y_2(x)} f(x, y)\,dy \right] dx,

where the values y_1(x), y_2(x) are the boundaries of the domain D for a given value of x, or

I_2 = \int_{y_1}^{y_2} \left[ \int_{x_1(y)}^{x_2(y)} f(x, y)\,dx \right] dy,

where the values x_1(y), x_2(y) are the boundaries of the domain D for a given value of y.

Example: Calculate the volume of a pyramid whose base is an equilateral triangle of side a, and whose three other faces, of equal surface area, have edges which meet orthogonally.
For this problem, let us consider the top of the pyramid at the centre of coordinates, such that the axes (Ox), (Oy), (Oz) are along the edges which meet orthogonally. The base is then perpendicular to the vector (1, 1, 1), and intersects the previous edges at the distance a/√2 from the top. Its equation is thus x + y + z = a/√2. The volume is then

V_1 = \int_0^{a/\sqrt{2}} dy \int_0^{a/\sqrt{2}-y} dx \left( \frac{a}{\sqrt{2}} - x - y \right) = \int_0^{a/\sqrt{2}} dy\, \frac{1}{2}\left( \frac{a}{\sqrt{2}} - y \right)^2 = \frac{a^3}{12\sqrt{2}}.    (28)
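The result (28) can be cross-checked with a crude double Riemann sum (this sketch is an addition to the notes): the inner sum runs over x from 0 to a/√2 − y, exactly as in the iterated integral.

    import math

    a = 1.0
    L = a / math.sqrt(2)
    n = 400
    h = L / n
    volume = 0.0
    for i in range(n):
        y = (i + 0.5) * h
        for j in range(n):
            x = (j + 0.5) * h
            if x < L - y:                      # stay inside the triangular domain
                volume += h * h * (L - x - y)  # height of the column above the cell
    print(volume, a**3 / (12 * math.sqrt(2)))  # both ~ 0.0589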
Figure 9: The infinitesimal surface area in polar coordinates is r dr dθ.

Double integral in polar coordinates: The infinitesimal surface area in polar coordinates is dr × rdθ (see fig.(9)), and an integral over a domain D is thus

J = \iint_D f(r, \theta)\, r\,dr\,d\theta.

Example: Calculate the volume of half a solid ball of radius R.
A sphere of radius R, centred on the origin, is given by the equation x² + y² + z² = R². The volume of half the ball is then

V_2 = \iint_C z(x, y)\,dx\,dy = \iint_C \sqrt{R^2 - x^2 - y^2}\,dx\,dy,

where C is the disc of radius R, centred at the origin. A change of variables to polar coordinates gives

V_2 = \int_0^R r\,dr \int_0^{2\pi} d\theta\, \sqrt{R^2 - r^2} = 2\pi \int_0^R r\,dr\, \sqrt{R^2 - r^2} = 2\pi \left[ -\frac{1}{3}\left( R^2 - r^2 \right)^{3/2} \right]_0^R = \frac{2\pi}{3} R^3.
Figure 10: The infinitesimal volume in spherical coordinates is r² dr sin θ dθ dφ.

10.5 Triple integration

Triple integration is a straightforward generalization of double integration, and it can sometimes be useful to use spherical coordinates (r, θ, φ), if the function to integrate is expressed in terms of spherically symmetric quantities. Using these coordinates, the infinitesimal volume is rdθ × r sin θ dφ × dr = r² dr sin θ dθ dφ (see fig.10), and an integral over a three-dimensional domain D is then

\iiint_D r^2\,dr\,\sin\theta\,d\theta\,d\varphi\; f(r, \theta, \varphi).

Example: The volume of half a solid ball of radius R, which was calculated before, is easier to calculate using spherical coordinates, and is

V_2 = \int_0^R r^2\,dr \int_0^{\pi/2} d\theta\,\sin\theta \int_0^{2\pi} d\varphi = \frac{R^3}{3} \times \left[ -\cos\theta \right]_0^{\pi/2} \times 2\pi = \frac{2\pi}{3} R^3.