
Useful Mathematical Techniques for Physicists

Moataz H. Emam

Department of Physics
SUNY College at Cortland
Cortland, New York 13045, USA

Abstract

In this ever-expanding document, I will list and very briefly explain various mathematical
techniques and formulae that are particularly useful to physicists. Students in 300 level physics
classes and above should familiarize themselves with these, as they sometimes tend to pop up
in the middle of calculations; hence getting to know them ahead of time is quite useful and
can make your grasp of advanced material run more smoothly. Links to more detailed online
discussions will be provided where appropriate. The material is presented in no particular order;
however, some topics may refer to earlier ones. I hope that this document is useful to students
as a short study guide as well as a quick reference.

moataz.emam@cortland.edu. This document was last updated: May 8, 2017


Contents

1 Dimensional analysis

2 Newton's binomial series expansion

3 Taylor's series expansion

4 Euler's formula and trigonometric functions

5 Hyperbolic trigonometric functions

6 Topics in linear algebra

   6.1 Matrices
   6.2 Determinants

7 Two particularly useful differential equations

8 Solution of partial differential equations

9 Advanced topics in multi-variable calculus

   9.1 Green's theorem
   9.2 Stokes' theorem
   9.3 Gauss' divergence theorem
   9.4 Interpretation
   9.5 Applications

1 Dimensional analysis

A dimension is a measure of a physical variable (without numerical values)[1], while a unit is a
way to assign a number to that dimension. Specifically, there are three fundamental dimensions
in science: Length, which we abbreviate with L; Mass, abbreviated M; and Time, T. Units,
on the other hand, are different ways to assign numbers to these dimensions, so we say that the
meter is a unit of the length dimension, the kilogram is a unit of mass, and the second is a unit of
time. There are of course many different types of units that the student should by now be familiar
with; the most common in physics is the SI system (Le Système International d'Unités, more commonly
known as metric).
L, M, and T are known as the fundamental dimensions because they are the only ones that
cannot be written in terms of simpler dimensions. They are in fact the ones other dimensions are
made up of. As an example, consider Newton's second law

F = ma. (1)

The dimensions of the left hand side of the equation must match the dimensions of the right
hand side. So since the dimension of the mass m is M, and the dimensions of the acceleration are
L/T², we must conclude that the dimensions of force are ML/T². We use square brackets [ ] to
denote dimensions, so for this example we can write

[F] = [ma] = ML/T²,    (2)

which reads: the dimensions of force are equivalent to the dimensions of mass times acceleration
and are equal to ML/T². In SI units the unit of force is called the Newton, abbreviated
N. Hence, using (2), we conclude that N is equivalent to kg·m/s². Similarly, the unit of energy
can be deduced as follows: think of any equation that involves energy, for instance E = mc². The
dimensional analysis of this is

[E] = [mc²] = ML²/T²    (3)

hence the SI unit of energy, which is known as the Joule J, is equivalent to kg·m²/s².
Dimensional analysis is extremely useful. It can even be used to deduce the general form of
new laws of physics. Here is an example that I sometimes use in the very first lecture of the very
first introductory physics class:
[1] Not to be confused with the dimensions of space or spacetime.

Calculational example:
A simple pendulum is a mechanism where a rigid massless rod of length b is frictionlessly hinged
on one end, has a mass m attached to the other end and is allowed to swing freely in vacuum under
the influence of gravity. As a pendulum swings we notice that it completes whole cycles (i.e. swings
back to its starting location then repeats) over a specific period of time known as the period of the
pendulum P . We argue on physical grounds that the period can depend on only three quantities:
the length of the pendulum b, the mass m, and the acceleration due to gravity g. There simply are
no other quantities in the setup that it could depend on. Using dimensional analysis, estimate how
the period P depends (or not) on these three quantities.

We solve this problem by arguing that if P depends on a certain combination of b, m, and g,
then we can write

P ∝ b^x m^y g^z    (4)

where x, y, z are unknown powers that we need to find. Clearly the values of these powers must
be such that (4) is dimensionally balanced. So a random choice such as x = 2, y = 3, z = −1 leads to:

P ∝ b²m³/g    (5)

which cannot be the correct formula because the dimensions of the left hand side do not match
those of the right:

T ≠ L²M³/(L/T²)    (6)
Rather than randomly trying to find the correct values of x, y, z by trial and error, we do it
algebraically as follows:

[P] = [b]^x [m]^y [g]^z

T = L^x M^y (L/T²)^z

T¹ = L^(x+z) M^y T^(−2z)    (7)

Matching the powers of the left and right sides of this equation we conclude that:

x + z = 0

y = 0

−2z = 1    (8)

hence x = 1/2, y = 0, and z = −1/2. Our estimate of how P is related to b, m, g is then:


P ∝ √(b/g).    (9)

This result is the best we can do using only dimensional analysis, and it is in fact correct[2].
However, note that this is a proportionality estimate; we cannot change (9) into an equality. The
reason is that once we analyze the problem more precisely using the methods of classical mechanics,
we find that the exact answer is[3]:

P = 2π √(b/g).    (10)

The factor 2π could not have been found using dimensional analysis alone, since it is a dimen-
sionless quantity[4]. So although dimensional analysis is clearly a very useful technique to use, it
can only estimate relationships up to dimensionless numbers. Actual physics methods are still
needed to figure out the details. It is however a useful starting point and is often used in theoretical
physics. More on dimensions and units may be found here.
[2] An important conclusion of this analysis is that the period apparently does not, and in fact cannot, depend on
the mass m. In other words, pendula with different masses will oscillate over the same period if they have the same
length, and of course if they have the same acceleration due to gravity.
[3] For the case of a pendulum oscillating over a very small angle.
[4] The radian is not a dimension.
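The exponent-matching system (8) is just three linear equations; a quick check in Python (a sketch, with illustrative variable names) recovers the same exponents:

```python
from fractions import Fraction

# Matching powers of L, M, T in [P] = [b]^x [m]^y [g]^z gives the linear
# system (8): x + z = 0, y = 0, -2z = 1. Solve it exactly:
z = Fraction(1, -2)  # from the T equation: -2z = 1
y = Fraction(0)      # from the M equation
x = -z               # from the L equation: x + z = 0
print(x, y, z)       # 1/2 0 -1/2
```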

2 Newton's binomial series expansion

This is the statement that:

(1 + x)ⁿ = 1 + (n/1!)x + (n(n−1)/2!)x² + (n(n−1)(n−2)/3!)x³ + …
         = 1 + nx + (1/2)n(n−1)x² + (1/6)n(n−1)(n−2)x³ + …    (11)

Note that for non-negative integer values of n, the series terminates. For example:

(1 + x)² = 1 + 2x + x²

(1 + x)³ = 1 + 3x + 3x² + x³    (12)

and so on. For non-integer real values of n, the series never terminates; however, it is an important
approximation tool in physics when the quantity x happens to be very small, x ≪ 1. In this case,
the higher order terms x², x³, x⁴, and so on get smaller and smaller and can hence be ignored:

(1 + x)ⁿ ≈ 1 + nx,  for x ≪ 1.    (13)

As an example consider n = 1/2, i.e. a square root:

√(1 + x) ≈ 1 + x/2,  for x ≪ 1    (14)

or even

1/√(1 + x) ≈ 1 − x/2,  for x ≪ 1.    (15)
More detail may be found here.
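These approximations are easy to test numerically; a minimal sketch (the value of x is an arbitrary choice):

```python
import math

# Compare the exact values with the small-x approximations (14) and (15).
x = 0.01
err_sqrt = abs(math.sqrt(1 + x) - (1 + x / 2))      # (14)
err_inv = abs(1 / math.sqrt(1 + x) - (1 - x / 2))   # (15)
print(err_sqrt, err_inv)  # both of order x^2 = 1e-4 or smaller
```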

3 Taylor's series expansion

This is the statement that any sufficiently well-behaved function f(x) can be expanded around a
specific point x = a as an infinite polynomial with the derivatives of the function evaluated at the
given point as coefficients, thus:

f(x) = f(a) + f′(a)(x − a) + (1/2!)f″(a)(x − a)² + (1/3!)f‴(a)(x − a)³ + …    (16)

where a prime denotes the derivative of f with respect to x. So f″(a) is the second derivative of f
followed by the substitution x = a. A particularly useful form of the series is the expansion around
a given point plus a small change:

f(a + ε) = f(a) + f′(a)ε + (1/2!)f″(a)ε² + (1/3!)f‴(a)ε³ + …    (17)

If the quantity ε is a small value, ε ≪ 1 (in which case it is called a perturbation), then one
may drop higher order terms to write:

f(a + ε) ≈ f(a) + f′(a)ε    (18)

or in some cases include one more term

f(a + ε) ≈ f(a) + f′(a)ε + (1/2!)f″(a)ε²    (19)

when useful. More detail may be found here.
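The first-order formula (18) can be tried on a familiar function; a minimal sketch using f = sin (the point a and perturbation ε are arbitrary choices):

```python
import math

# First-order Taylor estimate (18) with f = sin, so f'(a) = cos(a).
a, eps = 1.0, 0.001
estimate = math.sin(a) + math.cos(a) * eps
exact = math.sin(a + eps)
print(abs(exact - estimate))  # of order eps^2 = 1e-6
```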

4 Euler's formula and trigonometric functions

There are many formulae in mathematics called Euler's formula. The one we mean here is:

e^(±iθ) = cos θ ± i sin θ    (20)

where i = √(−1) and θ is any given angle. Useful consequences include:

cos θ = (1/2)(e^(+iθ) + e^(−iθ))

sin θ = (1/2i)(e^(+iθ) − e^(−iθ)).    (21)

One can also use this to show that:

cos²θ + sin²θ = 1,    (22)

which pops up all the time in the unlikeliest of places. There are many useful trigonometric
identities that can be proven using Euler's formula, way too numerous to list here, such as:

sin(α ± β) = sin α cos β ± cos α sin β

cos(α ± β) = cos α cos β ∓ sin α sin β.    (23)

Many more may be found, for example here.
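Euler's formula and its consequences can be verified directly with complex arithmetic; a minimal sketch (the angle is an arbitrary choice):

```python
import cmath
import math

theta = 0.7  # arbitrary angle
# (20): e^{i theta} = cos(theta) + i sin(theta)
lhs = cmath.exp(1j * theta)
rhs = complex(math.cos(theta), math.sin(theta))
# (21): cos(theta) = (e^{+i theta} + e^{-i theta}) / 2
cos_from_exp = (cmath.exp(1j * theta) + cmath.exp(-1j * theta)) / 2
print(abs(lhs - rhs), abs(cos_from_exp - math.cos(theta)))  # both ~0
```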

5 Hyperbolic trigonometric functions

These are functions used in the study of triangles on hyperbolic surfaces, akin to the study of
triangles on plane surfaces that leads to ordinary trigonometry. They may be thought of as the
trigonometric functions related to hyperbolae in pretty much the same way the ordinary trigono-
metric functions relate to circles. The most straightforward way to define them is via an Euler-type
formula similar to (20):

e^(±φ) = cosh φ ± sinh φ    (24)

where the dimensionless positive parameter φ ranges from zero to infinity. Equation (24) leads
to the definition of the hyperbolic sine (pronounced shine or sinsh) and the hyperbolic cosine
(pronounced cosh):

cosh φ = (1/2)(e^φ + e^(−φ))

sinh φ = (1/2)(e^φ − e^(−φ))    (25)

Clearly one can easily relate the ordinary trigonometric functions to the hyperbolic ones by the
introduction of i. For example, a moment's deliberation will convince the reader of the validity of
the following relations:

cosh φ = cos iφ

sinh φ = −i sin iφ

cosh iθ = cos θ

sinh iθ = i sin θ    (26)

and many others, including the analogue to (22):

cosh²φ − sinh²φ = 1    (27)

See here for more.
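The exponential definitions (25) and the identity (27) can be checked the same way; a minimal sketch (the value of φ is arbitrary):

```python
import math

phi = 1.7  # arbitrary value of the parameter
cosh_phi = (math.exp(phi) + math.exp(-phi)) / 2  # definition (25)
sinh_phi = (math.exp(phi) - math.exp(-phi)) / 2
print(cosh_phi**2 - sinh_phi**2)  # identity (27): 1 up to rounding
```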

6 Topics in linear algebra

6.1 Matrices

A matrix (plural: matrices) is a rectangular array of numbers, symbols, or expressions, arranged in
a specific number of n rows and p columns, together known as the rank of the matrix, n × p. For
example, the rank of the matrix [M] below is 2 × 3 (read two by three), because there are two
rows and three columns:

[M] = ( 1    0      2  )
      ( 4i   sin θ  1/3 )    (28)

Textbooks use different notations to express matrices; here we use the square brackets [ ] (not
to be confused with the use of square brackets to denote dimensions in section 1). The components
of a matrix can be denoted by indices which we attach to the symbol M. So we say for example
that the components of [M] are Mij, where the index i counts the rows (i = 1, 2) and the index j
counts the columns (j = 1, 2, 3), such that M11 = 1, M12 = 0, M13 = 2, M21 = 4i, M22 = sin θ,
and M23 = 1/3.
In physics, matrices represent physical quantities. For example, it is possible to represent ordinary
three dimensional vectors by a matrix of components that has only three entries, ordered as either a
3 × 1, also called a column matrix, or a 1 × 3, a row matrix. For example, a vector V = 9i + 2j − k
can be written in either of the following forms:

      ( 9 )
[V] = ( 2 )    or    [V] = [ 9  2  −1 ]    (29)
      ( −1 )

Matrices can be added by adding their individual components, if and only if they are of the
same order. So one can add two 3 × 2 matrices as follows:

[M] + [N] = [K]    (30)

( 1   0 )   ( 1  3 )   ( 2  3 )
( −9  2 ) + ( 9  5 ) = ( 0  7 )    (31)
( −1  4 )   ( 3  2 )   ( 2  6 )

Symbolically we may also write

Mij + Nij = Kij,  where i = 1, 2, 3 and j = 1, 2.    (32)

We can multiply scalar quantities with matrices by multiplying the scalar with each component
of the matrix. Hence

3[M] = [S]    (33)

  ( 1   0 )   (  3    0 )
3 ( −9  2 ) = ( −27   6 )    (34)
  ( −1  4 )   ( −3   12 )
or 3Mij = Sij. Matrices can be multiplied together using the following rule: multiply and add the
components of the rows of the matrix on the left with the columns of the matrix on the right. This
is possible if and only if the number of columns of the left matrix is the same as the number of
rows of the right matrix. As an example one can perform the calculation

( 1   0    2   )   ( 1  −3 )
( −2  5   −1/3 ) × ( 9   5 )    (35)
                   ( 3  −2 )

but not this one

( 1   0    2   )   ( 1  0 )
( −2  5   −1/3 ) × ( 9  5 )    (36)

since here the left matrix has three columns but the right matrix has only two rows.


1 0 9
5 4
If [M] is an m × n matrix and [N] is an n × p matrix, then their matrix product [M][N] is
the m × p matrix whose entries are given by the dot product of the corresponding row of [M] and
the corresponding column of [N]. An example would be a 2 × 3 matrix [M] multiplied by a 3 × 2
matrix [N], yielding a 2 × 2 matrix [K]:

[M][N] = [K]

( 1   0    −2  )   ( 1  −3 )   ( (1)(1)+(0)(9)+(−2)(3)      (1)(−3)+(0)(5)+(−2)(−2)    )
( −2  5   −1/3 ) × ( 9   5 ) = ( (−2)(1)+(5)(9)+(−1/3)(3)   (−2)(−3)+(5)(5)+(−1/3)(−2) )
                   ( 3  −2 )

  ( −5    1   )
= ( 42  31.67 )    (37)

Using the index notation, (37) is equivalent to:

Kij = Σ (r = 1 to 3) Mir Nrj,    (38)

as the reader is encouraged to verify. Certain properties of matrices are particularly useful in
physics. One of them is finding the so-called transpose of a given matrix, which is simply the
process of interchanging the rows and columns. So a matrix [M] of rank 2 × 3 has a transpose of rank
3 × 2, which can be found by replacing the rows with the columns as follows:

If [M] = ( 1   0    −2  )
         ( −2  5   −1/3 )

              ( 1    −2  )
Then [M]^T = ( 0     5  )    (39)
              ( −2  −1/3 )

where the letter T stands for transpose. Note that the transpose of a row matrix is a column matrix
and vice versa. So the example of the vector in (29) may be rewritten:

      ( 9 )
[V] = ( 2 ),    [V]^T = [ 9  2  −1 ]    (40)
      ( −1 )

or the other way around. Now, in the standard physics curriculum, two types of matrix ranks
frequently arise: the column and row matrices used to describe vectors (ranks 3 × 1 and 1 × 3
respectively), and square matrices that have exactly the same number of rows and columns,
usually 3. For the purpose of physical applications, we are interested in multiplying vectors with
square matrices. As an example, consider the two matrices:

      ( 9 )              ( 3   2  0 )
[V] = ( 2 )    and [A] = ( 1  −1  5 )    (41)
      ( −1 )             ( 3   1  2 )

then we can multiply [A][V] in that order:

         ( 3   2  0 ) (  9 )   ( 31 )
[A][V] = ( 1  −1  5 ) (  2 ) = (  2 )    (42)
         ( 3   1  2 ) ( −1 )   ( 27 )

but we cannot do [V][A] unless we transpose [V] first:

                         ( 3   2  0 )
[V]^T [A] = [ 9  2  −1 ] ( 1  −1  5 ) = [ 26  15  8 ]    (43)
                         ( 3   1  2 )

Notice that matrix multiplication is not commutative. If, for example, two matrices [A] and [B]
are square and of the same rank, then we can certainly multiply them in either order, [A][B] or
[B][A], but we will not necessarily get the same result; i.e. generally [A][B] ≠ [B][A].
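The row-times-column rule above is straightforward to code with plain lists; a minimal sketch reproducing the products (42) and (43) (the entries are taken from (41), with minus signs assumed where the printed text is unclear):

```python
# Row-times-column multiplication with plain lists.
A = [[3, 2, 0], [1, -1, 5], [3, 1, 2]]
V = [9, 2, -1]

AV = [sum(A[i][r] * V[r] for r in range(3)) for i in range(3)]   # as in (42)
VTA = [sum(V[r] * A[r][j] for r in range(3)) for j in range(3)]  # as in (43)
print(AV)   # [31, 2, 27]
print(VTA)  # [26, 15, 8]
```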

6.2 Determinants

The determinant is an important property of square matrices only. Determinants
are needed in many applications of linear algebra and arise all the
time in physics. In fact, as early as your very first course in mechanics you may have encountered
determinants as a way of performing the cross product of vectors. Determinants of matrices are
not themselves matrices, but are just scalars (technically they are matrices of rank 1 × 1). Here is
how to calculate determinants:
Determinants of square matrices of rank 2 × 2:
Given the matrix

[A] = ( a  b )
      ( c  d )    (44)

its determinant is defined by (note the notation used):

|A| = det[A] = | a  b | = ad − bc.    (45)
               | c  d |

Determinants of square matrices of rank 3 × 3:
Given the matrix

      ( a  b  c )
[B] = ( d  e  f )    (46)
      ( g  h  k )

To find its determinant, first insert a minus sign next to the entry in the top row, middle column;
in other words replace b → −b:

                    ( a  −b  c )
Intermediate step = ( d   e  f )    (47)
                    ( g   h  k )

Now take each entry of the top row, multiply it by the 2 × 2 determinant of the entries that
don't share a row or column with it, and add the results, hence:

|B| = det[B]

    = a | e  f | − b | d  f | + c | d  e |
        | h  k |     | g  k |     | g  h |

    = a(ek − fh) − b(dk − fg) + c(dh − eg)    (48)

Let's do one more to set the pattern.

Determinants of square matrices of rank 4 × 4:
Given

      ( a  b  c  d )
[C] = ( e  f  g  h )    (49)
      ( k  l  m  n )
      ( o  p  q  r )

The intermediate step in this case is replacing every other entry of the top row with its negative,
so b → −b and d → −d:

                    ( a  −b  c  −d )
Intermediate step = ( e   f  g   h )    (50)
                    ( k   l  m   n )
                    ( o   p  q   r )

then we find the determinant:

|C| = det[C]

    = a | f  g  h | − b | e  g  h | + c | e  f  h | − d | e  f  g |
        | l  m  n |     | k  m  n |     | k  l  n |     | k  l  m |
        | p  q  r |     | o  q  r |     | o  p  r |     | o  p  q |    (51)

where the rest should be self-explanatory at this point. More on matrices and determinants here.
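The alternating top-row expansion sets a pattern that works for any size; a short recursive sketch of that textbook rule (not an efficient algorithm for large matrices):

```python
def det(m):
    """Cofactor expansion along the top row with alternating signs."""
    if len(m) == 1:
        return m[0][0]
    total = 0
    for j, entry in enumerate(m[0]):
        # The minor: delete the top row and column j.
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * entry * det(minor)
    return total

print(det([[1, 2], [3, 4]]))                   # ad - bc = -2
print(det([[2, 0, 0], [0, 3, 0], [0, 0, 4]]))  # diagonal product: 24
```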

7 Two particularly useful differential equations

A differential equation is a relationship between an unknown function y(x) and its own derivatives
y′(x), y″(x), and so on. They are generally classified as either linear or non-linear differential
equations. Both types arise naturally in physics, albeit very rarely with derivatives higher than the
second. Two equations that arise in physics quite a lot and should be recognized by students on
sight[5] are:

y″ + ω²y = 0    (53)

y″ − ω²y = 0    (54)

where ω² is a real constant, chosen to be squared so the solutions wouldn't contain a square
root (it just looks better). Equation (53) is referred to (by physicists) as the equation of Simple
Harmonic Motion, as it arises in the context of describing the oscillatory motion of pendula and
other oscillating systems. The second equation (54) arises in cases where objects tend to spiral
away from or towards a specific point. The solutions to both equations can be written down in
terms of exponentials, and consequently, via Euler's formulae (20 and 24), can also be described in
terms of trigonometric or hyperbolic trigonometric functions. Hence:

y″ + ω²y = 0 has the equivalent solutions:

y(x) = A sin(ωx) + B cos(ωx)  or
     = C sin(ωx + φ)  or
     = D cos(ωx + δ)  or
     = E e^(iωx) + F e^(−iωx)    (55)

where A, B, C, D, E, F, φ, and δ are all arbitrary integration constants. Similarly

y″ − ω²y = 0 has the equivalent solutions:

y(x) = A sinh(ωx) + B cosh(ωx)  or
     = C sinh(ωx + φ)  or
     = D cosh(ωx + δ)  or
     = E e^(ωx) + F e^(−ωx)    (56)

[5] In my upper level classes I demand that students train themselves to recognize them, even if they look different.
For example x″ ± ω²x = 0, θ″ ± mθ = 0, and so on.
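A candidate solution can be checked numerically with a centered finite-difference second derivative; a minimal sketch testing y = sin(ωx) against (53) (ω and the sample point are arbitrary choices):

```python
import math

# Centered finite-difference check that y = sin(omega x) solves (53).
omega, x, h = 2.0, 0.4, 1e-4
y = lambda t: math.sin(omega * t)
y2 = (y(x + h) - 2 * y(x) + y(x - h)) / h**2  # approximates y''
print(abs(y2 + omega**2 * y(x)))  # ~0 within discretization error
```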

8 Solution of partial differential equations

A partial differential equation is an equation that relates a multi-variable function y(x, t) to its
partial derivatives with respect to x and t. A particularly useful method of solving such equations is
known as the method of separation of variables. For simplicity, I will consider here a function
of two variables only, but the method can be easily generalized to more:

∂²y/∂x² + ∂²y/∂t² = 0.    (57)

What is the two variable function y(x, t) that satisfies (57)? One way to solve such a problem
is to break down (57) into two separate ordinary (i.e. not partial) differential equations, each in
one of the variables x and t. One can do so by assuming that the solution, whatever it is, can be
written in terms of the product of two distinct single variable functions as follows:

y(x, t) = X(x) T(t)    (58)

This assumption by itself will separate (57) into the two desired equations as follows: Plug (58)
into (57):

∂²/∂x² [X(x)T(t)] + ∂²/∂t² [X(x)T(t)] = 0,    (59)

then pull out the functions X and T from the derivatives:

T d²X/dx² + X d²T/dt² = 0.    (60)

Note that we suddenly changed the partial derivative ∂ into an ordinary derivative d, since X
and T are single variable functions and we no longer need the partial derivative. Now divide (60)
through by XT to end up with:

(1/X) d²X/dx² + (1/T) d²T/dt² = 0.    (61)
We have successfully separated the variables, since now the first term of (61) depends on
x only and the second on t only (hence the name of the method). Now comes the big step: we
declare that two functions of two different variables can cancel out if and only if both functions
are equal to a constant! In other words, if we have

f(x) + g(t) = 0    (62)

then it must be true that f(x) = b and g(t) = −b, where b is a constant; otherwise they can
never cancel, because if they weren't constant, then as f changes independently from g their values
cannot always cancel. Based on this we conclude that (61) leads to

(1/X) d²X/dx² = k²

(1/T) d²T/dt² = −k²    (63)

where we have chosen the constant to be k² rather than k for future simplicity. Now rearranging
(63) gives:

d²X/dx² − k²X = 0

d²T/dt² + k²T = 0.    (64)

Now where have we seen these before? Wait, these are exactly the equations (54) and (53)
respectively! But we already know how to solve these (recall that physics students are required to
recognize them on sight). Hence

X(x) = A e^(kx) + B e^(−kx)

T(t) = C e^(ikt) + D e^(−ikt)    (65)

or any of the other possibilities in (55) and (56). And remembering that the function we are really
trying to find is y(x, t) = X(x)T(t), we write the solution of (57) as:

y(x, t) = (A e^(kx) + B e^(−kx))(C e^(ikt) + D e^(−ikt)).    (66)

The reader is urged to plug (66) into (57) to verify that we have indeed found the correct solution[6].
Finally, note that we could have chosen

(1/X) d²X/dx² = −k²

(1/T) d²T/dt² = k²    (67)

instead of (63). This would reverse the solution to become y(x, t) = (A e^(ikx) + B e^(−ikx))(C e^(kt) + D e^(−kt)).
The decision of which solution to pick is usually a matter to be decided based on the physics of the
problem. Mathematically either choice is acceptable.

[6] It should now be clear why we chose the constant in (63) to be k². Had we chosen it to be just k, then the solution
(66) would have depended on √k instead, i.e. it would have been y(x, t) = (A e^(√k x) + B e^(−√k x))(C e^(i√k t) + D e^(−i√k t)),
which is just ugly.
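One term of the separated solution (66) can be checked against (57) the same way, by finite differences (k and the sample point are arbitrary choices):

```python
import math

# One term of (66), y(x,t) = e^{kx} e^{ikt}, checked against (57).
k, h = 1.3, 1e-3
x0, t0 = 0.2, 0.5
y = lambda x, t: math.exp(k * x) * complex(math.cos(k * t), math.sin(k * t))
y_xx = (y(x0 + h, t0) - 2 * y(x0, t0) + y(x0 - h, t0)) / h**2  # ~ +k^2 y
y_tt = (y(x0, t0 + h) - 2 * y(x0, t0) + y(x0, t0 - h)) / h**2  # ~ -k^2 y
print(abs(y_xx + y_tt))  # ~0 within discretization error
```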

9 Advanced topics in multi-variable calculus

The following theorems are normally studied towards the end of the standard calculus se-
quence. However, a one semester course in multi-variable calculus is often not enough
time to get to them, despite the fact that they are essential to understanding certain standard
physics topics, most notably Maxwell's theory of electricity and magnetism. We introduce these
theorems here without proof. It is important that the reader at least get the general gist of them
before delving into upper level physics courses. We will also discuss the pattern that seems to
exist in all of them, pointing towards a deeper and richer structure.

9.1 Green's theorem

Discovered by the mathematician George Green in the early nineteenth century, the theorem known
by this name is an integral relationship between double integrals over areas and contour integrals
over lines. Consider a flat two dimensional surface D, bounded by a closed contour ∂D, where the
symbol ∂ here means the curve that completely surrounds a given surface, as shown in the figure.

[Figure: a flat region D bounded by a closed contour ∂D, traversed counterclockwise.]

The contour ∂D is given an orientation; in other words, performing a line integral on this
contour follows a certain route. It is conventional to choose the standard route as counterclockwise,
as shown. Doing line integrals in the clockwise direction gives the same result but with a minus
sign. Now consider a vector field F(x, y) whose components are

F(x, y) = P(x, y) i + Q(x, y) j.    (68)

Green's theorem states that the contour integral of F around ∂D is always equal to the area
integral shown:

∮_∂D F · dr = ∬_D (∂Q/∂x − ∂P/∂y) da,    (69)

where da is an element of area on D. Following the rules of contour integration, the left hand side
of this formula is just (in Cartesian coordinates):

∮_∂D F · dr = ∮_∂D (P dx + Q dy)    (70)

The formula looks arbitrary, and a physical interpretation does not seem obvious. Suffice it to
note for now that it provides a relationship between a line integral of a given vector field and
a surface integral of the derivative of said vector field. This is a special case of the next theorem.
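Green's theorem is easy to illustrate numerically; a minimal sketch on the unit square with the purely illustrative field P = 0, Q = xy, for which ∂Q/∂x − ∂P/∂y = y and the area integral over the square is 1/2:

```python
# Contour side of Green's theorem for the illustrative field P = 0, Q = x*y
# on the unit square, traversed counterclockwise. Only the vertical edges
# carry Q dy, and Q vanishes on the left edge (x = 0), so the whole line
# integral comes from the right edge, where Q = y.
N = 100000
h = 1.0 / N
line = sum(((i + 0.5) * h) * h for i in range(N))  # right edge: x = 1, dy = h
print(round(line, 6))  # 0.5, matching the area integral of y over the square
```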

9.2 Stokes' theorem

Green's theorem related two integrals in a plane; now generalize the surface D such that it is
any two dimensional surface, flat or not, as shown in the figure.

[Figure: a curved surface D in three dimensions bounded by the contour ∂D.]

For example, D may now be a hemispherical surface bounded by the circle ∂D at its base.
Now define a unit vector n̂ that is orthogonal to D at any point; then for an arbitrary vector field
F(x, y, z), the following contour vs surface integration relationship is always true:

∮_∂D F · dr = ∬_D (∇ × F) · n̂ da.    (71)

This is Stokes' theorem, clearly a generalization of Green's. Once again it provides a
relationship between a line integral of a given vector field and a surface integral of the derivative of
said vector field. It can be shown that if D is made flat and living in the x-y plane, then Green's
theorem is retrieved, as one would expect.

9.3 Gauss' divergence theorem

Consider now a volume D (not a surface as in the previous cases), surrounded by a surface ∂D
(not a contour any more), and an arbitrary vector field F(x, y, z).

[Figure: a volume D enclosed by a surface ∂D, with an outward area element da.]

If Green and Stokes provided relationships between a surface and its boundary, can we find a
relationship between a volume and its boundary (are you starting to see the pattern)? It turns out
that such a relation exists, and we also emphasize that just like its previous counterparts, it relates
the vector on the boundary ∂D with the derivative of the vector in D. This is Gauss' divergence
theorem:

∬_∂D F · n̂ da = ∭_D ∇ · F dV,    (72)

where dV is an element of volume in D, da is an element of area on ∂D, and n̂ is a unit vector
orthogonal to the surface ∂D pointing outwards (once again this is a matter of conventional choice;
if n̂ is taken to be pointing inwards, this gives the same numerical values of the integrals but with
a minus sign).
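A concrete check of the divergence theorem; a minimal sketch with the illustrative field F = (x, y, z) on the unit ball:

```python
import math

# For F = (x, y, z) on the unit ball, div F = 3, so the volume side of (72)
# is 3 * (4 pi / 3) = 4 pi. On the unit sphere F.n = 1, so the flux is just
# the surface area, summed here as da = sin(theta) dtheta dphi over a grid.
n = 400
dtheta, dphi = math.pi / n, 2 * math.pi / n
flux = 0.0
for i in range(n):
    theta = (i + 0.5) * dtheta
    flux += math.sin(theta) * dtheta * (n * dphi)  # F.n = 1 on the sphere
print(abs(flux - 4 * math.pi))  # small discretization error
```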

9.4 Interpretation

Now these three theorems look complicated, and indeed they are. Their geometrical meanings are
something that might involve a lecture each. But the point I would like to make is the following: is
there a pattern here? Apparently there is. For each case we define something called D, technically
known as a manifold, which in simple terms is just a collection of adjacent points. If the points
are two dimensional, D is a surface. If the points are three dimensional, D is a volume. And in
both cases we define another manifold ∂D, whose only property is that it completely surrounds
D. It can be a contour surrounding a surface, or a surface surrounding a volume. We say that the
manifold ∂D is the boundary of the manifold D; it is always one dimension less than D. A contour is a
one dimensional manifold surrounding a two dimensional one, while a surface is a two dimensional
manifold surrounding a three dimensional one. Furthermore, in both cases there is an arbitrary
vector field F, whose ∂D integral is in all cases related to the derivative of F in D! When I say
derivative of F, I am speaking in a very general way; it can take the form of a divergence or a
curl.
Wait, there is more! What if D itself were a contour? What would its boundary ∂D look like?
If the contour D is open, then its boundary ∂D is just the two points at its ends, as you can see in
the figure.

[Figure: an open contour D running from point a to point b; its boundary ∂D consists of the two end points.]

Since points are one dimension less than a contour, the pattern seems to hold here as well.
In fact, given a scalar function F we can write:

F|_∂D = ∫_D ∇F · dr

F(b) − F(a) = ∫_a^b ∇F · dr    (73)

which you must have seen before in your multi-variable calculus class. This is known as the
fundamental theorem of vector calculus. It once again relates the derivative of a function over
a manifold (a contour in this case) to its value on the boundary (the two end points). In fact, if we
make the line perfectly straight, this immediately reduces to the fundamental theorem of calculus,
which you have known since your very first calculus class:

F|_∂D = ∫_D F′ dx

F(b) − F(a) = ∫_a^b F′ dx,    (74)

where F′ is the standard notation for the derivative of F with respect to x.


If we now stare at these equations and think about the pattern, then we can write down a
symbolic equation as follows. Given the quantity ω, its total value on a boundary is equivalent to
the total value of its own derivative dω inside that boundary:

∫_∂D ω = ∫_D dω    (75)

where the integrals can be single, double, or triple, depending on what D and ∂D are. The integral
can even be a zero integral, i.e. just the value of the function evaluated at a point, as in both
(73) and (74). Furthermore, the quantity ω can be F, F, F · n̂ da, or F · dr. The derivative dω of ω
can correspond to F′ dx, ∇F · dr, ∇ · F dV, or (∇ × F) · n̂ da respectively.
This sequence is not arbitrary or random. The quantities ω and dω have precise definitions
that yield each of their respective values in one, two, or three dimensions. In fact, ω and dω
are so precisely defined that they have specific forms in any number of dimensions, in which case
the formula (75), known as the generalized Stokes theorem, can take a precise, as well as
calculable, meaning. The quantity ω is known as a differential form, and dω is known as the
exterior derivative of ω. Saying any more would take us too far afield into a branch of mathematics
known as differential geometry. It can be thought of as that branch of mathematics that studies
the pattern that we observed earlier and generalizes it to any number of dimensions. I will stop
here; it is enough for our purposes just to acknowledge that the pattern exists.

9.5 Applications

As noted, the theorems discussed in this section are all generalizations of the fundamental theorem
of calculus (74). And just like it, these theorems can be used to relate scalar and vector functions
with their derivatives. They can also be used to transform integral equations into differential
equations. As an example, consider the theory of electricity and magnetism. In introductory
physics courses you most likely learned about the fundamental electromagnetic equations in their
integral form. Recall, for instance, Gauss' law of electric fields; it states

∬_∂D E · n̂ da = Q/ε₀    (76)
D

where E is the electric field, Q is an electric charge, ε₀ is the permittivity of free space, and the
integration on the left hand side is done over a surface ∂D that completely surrounds the charge Q.
Gauss' law in this form relates the flux of the electric field through the surface to the presence
of Q inside said surface.
The formula is, however, an integral formula. It is natural to ask if there is a differential form of
Gauss' law. In fact, differential forms of the laws of nature are more precise in the sense that they
provide us with an intuition of how a physical quantity varies from one point in space to the next,
while integral forms give us an overall average over a region of space. To derive the differential
formula, we first note that the total charge Q can be written as an integral of the charge density ρ
as such

Q = ∭_D ρ dV    (77)

where ρ is the charge density in units of Coulombs per cubic meter. Hence, Gauss' law (76) can be
written as

∬_∂D E · n̂ da = (1/ε₀) ∭_D ρ dV.    (78)

It is tempting to compare the integrands of both sides of this equation and say that E · n̂ = ρ/ε₀,
but this would be the wrong thing to do, since the left side is a double integral over a surface while
the right side is a triple integral over a volume, and their integrands cannot just be equal (an easy
way to see this is to check that the dimensions of the integrands do not match). Our task is then
to change either side to make the integrations over the same type of manifold; then we can
compare integrands. But we do have a way to make the left hand side an integral over a volume,
the very same volume D of the right hand side: by applying the divergence theorem
(72), which is also due to Gauss, such that:

∬_∂D E · n̂ da = ∭_D ∇ · E dV    (79)

which allows us to rewrite (78) as follows:

∭_D ∇ · E dV = (1/ε₀) ∭_D ρ dV.    (80)

Since both sides of (80) are volume integrals over the same volume manifold D, only now can
we compare the integrands and conclude that:

∇ · E = ρ/ε₀    (81)

which is, in fact, the required differential form of Gauss' law (76). All of the formulae of electricity
and magnetism which you studied in the introductory course can now be rewritten in differential
form. As an exercise, consider Ampère's law of magnetostatic fields:

∮_∂D B · dr = μ₀I    (82)

where B is the magnetic field surrounding a current I, μ₀ is the permeability of free space, and the
integration on the left is done over a closed contour that completely surrounds I. The reader can
use Stokes' theorem (71) to show that the differential form of (82) is

∇ × B = μ₀J    (83)

where the current density J is defined as the density of the current I inside the surface D that
is surrounded by the closed contour ∂D. Alternatively, I is the flux of the vector field J passing
through D:

I = ∬_D J · n̂ da.    (84)
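As a final numerical illustration of Gauss' law (76), here is a sketch computing the flux of a point-charge field through a sphere, in units where ε₀ = 1 (the charge and radius are illustrative values, not from the text):

```python
import math

# Flux of the point-charge field E = Q r_hat / (4 pi r^2) through a sphere of
# radius R, in units where epsilon_0 = 1; Q and R are illustrative values.
Q, R = 5.0, 2.0
n = 400
dtheta, dphi = math.pi / n, 2 * math.pi / n
flux = 0.0
for i in range(n):
    theta = (i + 0.5) * dtheta
    E_normal = Q / (4 * math.pi * R**2)  # E.n on the sphere
    flux += E_normal * (R**2 * math.sin(theta)) * dtheta * (n * dphi)
print(abs(flux - Q))  # ~0: the flux equals Q regardless of R, as (76) says
```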
D

