Physics 451
Fall 2004
Homework Assignment #1 — Solutions
Textbook problems: Ch. 1: 1.1.5, 1.3.3, 1.4.7, 1.5.5, 1.5.6 Ch. 3: 3.2.4, 3.2.19, 3.2.27
Chapter 1
1.1.5
A sailboat sails for 1 hr at 4 km/hr (relative to the water) on a steady compass heading of 40° east of north. The sailboat is simultaneously carried along by a current. At the end of the hour the boat is 6.12 km from its starting point. The line from its starting point to its location lies 60° east of north. Find the x (easterly) and y (northerly) components of the water velocity.
This is a straightforward relative velocity (vector addition) problem. Let v_bl denote the velocity of the boat with respect to land, v_bw the velocity of the boat with respect to the water, and v_wl the velocity of the water with respect to land. Then

    v_bl = v_bw + v_wl

where
    v_bw = 4 km/hr @ 50° = (2.57 x̂ + 3.06 ŷ) km/hr
    v_bl = 6.12 km/hr @ 30° = (5.30 x̂ + 3.06 ŷ) km/hr

Thus

    v_wl = v_bl − v_bw = 2.73 x̂ km/hr
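As a quick numerical sanity check (not part of the original solution; the helper and variable names are mine), the water velocity can be recovered in a few lines of Python:

```python
import math

def en_components(speed, deg_east_of_north):
    """Convert a speed and compass heading (degrees east of north)
    into (x, y) = (easterly, northerly) components."""
    ang = math.radians(90.0 - deg_east_of_north)  # angle measured from the x (east) axis
    return (speed * math.cos(ang), speed * math.sin(ang))

v_bw = en_components(4.0, 40.0)    # boat relative to water
v_bl = en_components(6.12, 60.0)   # boat relative to land
v_wl = (v_bl[0] - v_bw[0], v_bl[1] - v_bw[1])  # water relative to land
print(v_wl)  # approximately (2.73, 0.0)
```

The northerly component comes out essentially zero, as the solution found.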
1.3.3 The vector r, starting at the origin, terminates at and specifies the point in space (x, y, z). Find the surface swept out by the tip of r if
(a) (r − a) · a = 0
The vanishing of the dot product indicates that the vector r − a is perpendicular to the constant vector a. As a result, r − a must lie in a plane perpendicular to a. This means r itself must lie in a plane passing through the tip of a and perpendicular to a.
(b) (r − a) · r = 0

This time the vector r − a has to be perpendicular to the position vector r itself. It is perhaps harder to see what this is in three dimensions. However, for two dimensions, we find

    x(x − a_x) + y(y − a_y) = 0

which gives a circle. In three dimensions, this is a sphere. Note that we can also complete the square to obtain

    (r − a) · r = |r − ½a|² − |½a|²

Hence we end up with the equation for a sphere of radius |a|/2 centered at the point a/2:

    |r − ½a|² = |½a|²
1.4.7 Prove that (A × B) · (A × B) = (AB)² − (A · B)².
This can be shown just by a straightforward computation. Since
A × B = ( A _{y} B _{z} − A _{z} B _{y} )ˆx + (A _{z} B _{x} − A _{x} B _{z} )ˆy + (A _{x} B _{y} − A _{y} B _{x} )ˆz
we ﬁnd
    |A × B|² = (A_yB_z − A_zB_y)² + (A_zB_x − A_xB_z)² + (A_xB_y − A_yB_x)²
             = A_y²B_z² + A_z²B_y² + A_z²B_x² + A_x²B_z² + A_x²B_y² + A_y²B_x²
               − 2A_xB_xA_yB_y − 2A_xB_xA_zB_z − 2A_yB_yA_zB_z
             = (A_x² + A_y² + A_z²)(B_x² + B_y² + B_z²) − (A_xB_x + A_yB_y + A_zB_z)²

where we had to add and subtract A_x²B_x² + A_y²B_y² + A_z²B_z² to obtain the last line.

However, there is a more elegant approach to this problem. Recall that cross products are related to sin θ and dot products are related to cos θ. Then

    |A × B|² = (AB sin θ)² = (AB)²(1 − cos²θ) = (AB)² − (AB cos θ)² = (AB)² − (A · B)²
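The identity is easy to spot-check numerically. Here is a small Python sketch (not part of the original solution; the vectors are arbitrary sample values):

```python
def cross(a, b):
    """Cross product of two 3-vectors."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x*y for x, y in zip(a, b))

A = (1.0, -2.0, 3.0)
B = (4.0, 0.5, -1.0)
lhs = dot(cross(A, B), cross(A, B))          # |A x B|**2
rhs = dot(A, A)*dot(B, B) - dot(A, B)**2     # (AB)**2 - (A.B)**2
```

For these vectors both sides equal 241.5, and the agreement holds for any choice of A and B.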
1.5.5
The orbital angular momentum L of a particle is given by L = r × p = m r × v where p is the linear momentum. With linear and angular velocity related by v = ω × r , show that
L = mr ^{2} [ ω − rˆ(ˆr · ω )]
Here, rˆ is a unit vector in the r direction.
Using L = m r × v and v = ω × r , we ﬁnd
L = m r × ( ω × r )
Because of the double cross product, this is the perfect opportunity to use the “BAC–CAB” rule: A × ( B × C ) = B ( A · C ) − C ( A · B )
L = m[ ω ( r · r ) − r ( r · ω )] = m[ ωr ^{2} − r ( r · ω )]
Using r = r r̂ and factoring out r², we then obtain

    L = mr²[ω − r̂(r̂ · ω)]    (1)

1.5.6 The kinetic energy of a single particle is given by T = ½mv². For rotational motion this becomes T = ½m(ω × r)². Show that

    T = ½m[r²ω² − (r · ω)²]
We can use the result of problem 1.4.7:
    T = ½m(ω × r)² = ½m[(ωr)² − (ω · r)²] = ½m[r²ω² − (r · ω)²]

Note that we could have written this in terms of unit vectors:

    T = ½mr²[ω² − (r̂ · ω)²]

Comparing this with (1) above, we find that

    T = ½ L · ω

which is not a coincidence.
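The relation T = ½ L · ω can be verified numerically for an arbitrary r and ω. A minimal Python sketch (sample values and helper names are mine):

```python
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x*y for x, y in zip(a, b))

m = 2.0
r = (1.0, 2.0, -1.0)
w = (0.5, -1.0, 2.0)                   # angular velocity (sample values)

v = cross(w, r)                        # v = omega x r
T = 0.5 * m * dot(v, v)                # kinetic energy, (1/2) m v**2
L = tuple(m*c for c in cross(r, v))    # angular momentum, L = m r x v
T_from_L = 0.5 * dot(L, w)             # should reproduce T
```

The agreement is exact because (r × v) · ω = (ω × r) · v = v · v, by the cyclic property of the scalar triple product.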
Chapter 3
3.2.4 ( a) Complex numbers, a + ib , with a and b real, may be represented by (or are isomorphic with) 2 × 2 matrices:
    a + ib ↔ [ a  −b ]
             [ b   a ]
Show that this matrix representation is valid for (i) addition and (ii) multiplication.
Let us start with addition. For complex numbers, we have (straightforwardly)
(a + ib) + (c + id ) = ( a + c) + i(b + d )
whereas, if we used matrices we would get
    [ a  −b ]   [ c  −d ]   [ a + c  −(b + d) ]
    [ b   a ] + [ d   c ] = [ b + d    a + c  ]
which shows that the sum of matrices yields the proper representation of the complex number (a + c) + i(b + d ).
We now handle multiplication in the same manner. First, we have
(a + ib)(c + id ) = ( ac − bd ) + i(ad + bc)
while matrix multiplication gives
    [ a  −b ] [ c  −d ]   [ ac − bd  −(ad + bc) ]
    [ b   a ] [ d   c ] = [ ad + bc    ac − bd  ]

which is again the correct result.
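The isomorphism can be tested directly in Python, since the language has exact complex arithmetic built in (this sketch is not part of the original solution; helper names are mine):

```python
def to_mat(z):
    """The 2x2 real matrix representing the complex number z = a + ib."""
    a, b = z.real, z.imag
    return [[a, -b], [b, a]]

def mat_mul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

z, w = 1 + 2j, 3 - 1j
product_then_map = to_mat(z * w)                    # multiply, then represent
map_then_product = mat_mul(to_mat(z), to_mat(w))    # represent, then multiply
```

Both orders give the same matrix, which is exactly the statement that the map is a homomorphism for multiplication.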
(b ) Find the matrix corresponding to ( a + ib ) ^{−}^{1} .
We can ﬁnd the matrix in two ways. We ﬁrst do standard complex arithmetic
    (a + ib)⁻¹ = 1/(a + ib) = (a − ib)/((a + ib)(a − ib)) = (a − ib)/(a² + b²)

This corresponds to the 2 × 2 matrix

    (a + ib)⁻¹ ↔ (1/(a² + b²)) [ a   b ]
                               [ −b  a ]
Alternatively, we ﬁrst convert to a matrix representation, and then ﬁnd the inverse matrix
    (a + ib)⁻¹ ↔ [ a  −b ]⁻¹ = (1/(a² + b²)) [ a   b ]
                 [ b   a ]                    [ −b  a ]
Either way, we obtain the same result.
3.2.19 An operator P commutes with J _{x} and J _{y} , the x and y components of an angular momentum operator. Show that P commutes with the third component of angular momentum; that is,
[ P , J _{z} ] = 0
We begin with the statement that P commutes with J _{x} and J _{y} . This may be expressed as [ P , J _{x} ] = 0 and [ P , J _{y} ] = 0 or equivalently as P J _{x} = J _{x} P and P J _{y} = J _{y} P . We also take the hint into account and note that J _{x} and J _{y} satisfy the commutation relation
[J _{x} , J _{y} ] = iJ _{z}
or equivalently J _{z} = −i[J _{x} , J _{y} ]. Substituting this in for J _{z} , we ﬁnd the double commutator
[ P , J _{z} ] = [ P , −i[J _{x} , J _{y} ]] = −i[ P , [J _{x} , J _{y} ]]
Note that we are able to pull the −i factor out of the commutator. From here, we may expand all the commutators to ﬁnd
    [P, [J_x, J_y]] = P J_x J_y − P J_y J_x − J_x J_y P + J_y J_x P
                    = J_x P J_y − J_y P J_x − J_x P J_y + J_y P J_x
                    = 0
To get from the ﬁrst to the second line, we commuted P past either J _{x} or J _{y} as appropriate. Of course, a quicker way to do this problem is to use the Jacobi identity [A, [B, C ]] = [B, [A, C ]] − [C, [A, B ]] to obtain
[ P , [J _{x} , J _{y} ]] = [J _{x} , [ P , J _{y} ]] − [J _{y} , [ P , J _{x} ]]
The right hand side clearly vanishes, since P commutes with both J _{x} and J _{y} .
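The Jacobi identity used in the quicker argument is itself easy to spot-check. A small Python sketch (not part of the original solution) verifies [A, [B, C]] = [B, [A, C]] − [C, [A, B]] on arbitrary 2 × 2 complex matrices, using exact Gaussian-integer arithmetic:

```python
def mat_mul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_sub(M, N):
    return [[M[i][j] - N[i][j] for j in range(2)] for i in range(2)]

def comm(M, N):
    """Commutator [M, N] = MN - NM."""
    return mat_sub(mat_mul(M, N), mat_mul(N, M))

# three arbitrary 2x2 complex matrices (sample values)
A = [[1, 2j], [0, -1]]
B = [[0, 1], [1, 0]]
C = [[2, 0], [1j, 3]]

lhs = comm(A, comm(B, C))
rhs = mat_sub(comm(B, comm(A, C)), comm(C, comm(A, B)))
```

Since the entries are exact, the two sides agree identically, not just to rounding.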
3.2.27 (a) The operator Tr replaces a matrix A by its trace; that is
    Tr(A) = trace(A) = Σ_i a_ii
Show that Tr is a linear operator.
Recall that to show that Tr is linear we may prove that Tr (αA + βB ) = α Tr ( A)+ β Tr ( B ) where α and β are numbers. However, this is a simple property of arithmetic
    Tr(αA + βB) = Σ_i (α a_ii + β b_ii) = α Σ_i a_ii + β Σ_i b_ii = α Tr(A) + β Tr(B)
(b ) The operator det replaces a matrix A by its determinant; that is
det(A) = determinant of A
Show that det is not a linear operator.
In this case all we need to do is to ﬁnd a single counterexample. For example, for an n × n matrix, the properties of the determinant yields
det(αA ) = α ^{n} det(A)
This is not linear unless n = 1 (in which case A is really a single number and not a matrix). There are of course many other examples that one could come up with to show that det is not a linear operator.
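Both halves of the problem can be illustrated in a few lines of Python (a sketch with sample values, not part of the original solution):

```python
def tr(M):
    """Trace: sum of diagonal entries."""
    return sum(M[i][i] for i in range(len(M)))

def det2(M):
    """Determinant of a 2x2 matrix."""
    return M[0][0]*M[1][1] - M[0][1]*M[1][0]

A = [[1, 2], [3, 4]]
alpha = 3
scaled = [[alpha * x for x in row] for row in A]

trace_is_linear = (tr(scaled) == alpha * tr(A))    # holds: Tr(alpha A) = alpha Tr(A)
det_is_linear = (det2(scaled) == alpha * det2(A))  # fails: det(alpha A) = alpha**2 det(A)
```

The single failing case det(3A) = 9 det(A) ≠ 3 det(A) is already a counterexample to linearity, matching the n = 2 instance of det(αA) = αⁿ det(A).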
Physics 451
Fall 2004
Homework Assignment #2 — Solutions
Textbook problems: Ch. 3: 3.3.1, 3.3.12, 3.3.13, 3.5.4, 3.5.6, 3.5.9, 3.5.30
Chapter 3
3.3.1 Show that the product of two orthogonal matrices is orthogonal.
Suppose matrices A and B are orthogonal. This means that A^T A = I and B^T B = I. We now denote the product of A and B by C = AB. To show that C is orthogonal, we compute C^T C and see what happens. Recalling that the transpose of a product is the reversed product of the transposes, we have

    C^T C = (AB)^T (AB) = B^T A^T A B = B^T B = I

Hence C is orthogonal.

This is a key step in showing that the orthogonal matrices form a group, because one of the requirements of being a group is that the product of any two elements (i.e. A and B) in the group yields a result (i.e. C) that is also in the group. This is also known as closure. Along with closure, we also need to show associativity (okay for matrices), the existence of an identity element (also okay for matrices) and the existence of an inverse (okay for orthogonal matrices). Since all four conditions are satisfied, the set of n × n orthogonal matrices forms the orthogonal group, denoted O(n). While general orthogonal matrices have determinant ±1, the subgroup of matrices with determinant +1 forms the "special orthogonal" group SO(n).
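Closure under multiplication can be checked numerically. The sketch below (not part of the original solution) multiplies two 3 × 3 rotation matrices, which are orthogonal, and confirms that C^T C is the identity:

```python
import math

def rot_z(t):
    """Rotation by angle t about the z axis (orthogonal)."""
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def rot_x(t):
    """Rotation by angle t about the x axis (orthogonal)."""
    c, s = math.cos(t), math.sin(t)
    return [[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]]

def mat_mul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(M):
    return [list(col) for col in zip(*M)]

A, B = rot_z(0.7), rot_x(-1.2)    # two orthogonal matrices (sample angles)
C = mat_mul(A, B)
check = mat_mul(transpose(C), C)  # should be the 3x3 identity
```
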
3.3.12 A is 2 × 2 and orthogonal. Find the most general form of

    A = [ a  b ]
        [ c  d ]

Compare with two-dimensional rotation.
Since A is orthogonal, it must satisfy the condition A A^T = I, or

    [ a  b ] [ a  c ]   [ a² + b²  ac + bd ]   [ 1  0 ]
    [ c  d ] [ b  d ] = [ ac + bd  c² + d² ] = [ 0  1 ]

This gives three conditions

    i) a² + b² = 1,    ii) c² + d² = 1,    iii) ac + bd = 0
These are three equations for four unknowns, so there will be a free parameter left over. There are many ways to solve the equations. However, one nice way is
to notice that a ^{2} + b ^{2} = 1 is the equation for a unit circle in the a–b plane. This means we can write a and b in terms of an angle θ
a = cos θ,
b = sin θ
Similarly, c ^{2} + d ^{2} = 1 can be solved by setting
c = cos φ,
d = sin φ
Of course, we have one more equation to solve, ac + bd = 0, which becomes
cos θ cos φ + sin θ sin φ = cos( θ − φ) = 0
This means that θ − φ = π/2 or θ − φ = 3 π/2. We must consider both cases separately.
φ = θ − π/2: This gives

    c = cos(θ − π/2) = sin θ,    d = sin(θ − π/2) = −cos θ

or

    A_1 = [ cos θ   sin θ ]    (1)
          [ sin θ  −cos θ ]
This looks almost like a rotation, but not quite (since the minus sign is in the wrong place).
φ = θ − 3π/2: This gives

    c = cos(θ − 3π/2) = −sin θ,    d = sin(θ − 3π/2) = cos θ

or

    A_2 = [ cos θ  −sin θ ]    (2)
          [ sin θ   cos θ ]
which is exactly a rotation.
Note that we can tell the diﬀerence between matrices of type (1) and (2) by computing the determinant. We see that det A _{1} = −1 while det A _{2} = 1. In fact, the A _{2} type of matrices form the SO (2) group, which is exactly the group of rotations in the plane. On the other hand, the A _{1} type of matrices represent rotations followed by a mirror reﬂection y → −y . This can be seen by writing
    A_1 = [ 1   0 ] [ cos θ   sin θ ]
          [ 0  −1 ] [ −sin θ  cos θ ]
Note that the set of A _{1} matrices by themselves do not form a group (since they do not contain the identity, and since they do not close under multiplication). However the set of all orthogonal matrices { A _{1} , A _{2} } forms the O (2) group, which is the group of rotations and mirror reﬂections in two dimensions.
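The determinant criterion distinguishing the two families can be confirmed numerically (a sketch with a sample angle, not part of the original solution):

```python
import math

theta = 0.8
c, s = math.cos(theta), math.sin(theta)
A1 = [[c, s], [s, -c]]     # type (1): rotation combined with a reflection
A2 = [[c, -s], [s, c]]     # type (2): pure rotation

def det2(M):
    return M[0][0]*M[1][1] - M[0][1]*M[1][0]

det_A1, det_A2 = det2(A1), det2(A2)   # expect -1 and +1 for any theta
```

Both determinants are independent of θ, since det A_1 = −cos²θ − sin²θ = −1 and det A_2 = cos²θ + sin²θ = +1.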
3.3.13 Here |x⟩ and |y⟩ are column vectors. Under an orthogonal transformation S, |x'⟩ = S|x⟩, |y'⟩ = S|y⟩. Show that the scalar product ⟨x|y⟩ is invariant under this orthogonal transformation.
To prove the invariance of the scalar product, we compute
    ⟨x'|y'⟩ = ⟨x|S^T S|y⟩ = ⟨x|y⟩

where we used S^T S = I for an orthogonal matrix S. This demonstrates that the scalar product is invariant (the same in the primed and unprimed frames).
3.5.4 Show that a real matrix that is not symmetric cannot be diagonalized by an orthogonal similarity transformation.
We take the hint, and start by denoting the real nonsymmetric matrix by A. Assuming that A can be diagonalized by an orthogonal similarity transformation, that means there exists an orthogonal matrix S such that
    Λ = S A S^T

where Λ is diagonal. We can 'invert' this relation by multiplying both sides on the left by S^T and on the right by S. This yields

    A = S^T Λ S

Taking the transpose of A, we find

    A^T = (S^T Λ S)^T = S^T Λ^T (S^T)^T

However, the transpose of a transpose is the original matrix, (S^T)^T = S, and the transpose of a diagonal matrix is the original matrix, Λ^T = Λ. Hence

    A^T = S^T Λ S = A
Since the matrix A is equal to its transpose, A has to be a symmetric matrix.
However, recall that A is supposed to be nonsymmetric. Hence we run into a contradiction. As a result, we must conclude that A cannot be diagonalized by an orthogonal similarity transformation.
3.5.6 A has eigenvalues λ_i and corresponding eigenvectors |x_i⟩. Show that A⁻¹ has the same eigenvectors but with eigenvalues λ_i⁻¹.
If A has eigenvalues λ _{i} and eigenvectors  x _{i} , that means
    A|x_i⟩ = λ_i |x_i⟩

Multiplying both sides by A⁻¹ on the left, we find

    A⁻¹A|x_i⟩ = λ_i A⁻¹|x_i⟩

or

    |x_i⟩ = λ_i A⁻¹|x_i⟩

Rewriting this as

    A⁻¹|x_i⟩ = λ_i⁻¹|x_i⟩

it is now obvious that A⁻¹ has the same eigenvectors, but eigenvalues λ_i⁻¹.
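A concrete 2 × 2 check (a Python sketch with a sample matrix, not part of the original solution): A = [[2, 1], [1, 2]] has eigenvalue 3 with eigenvector (1, 1), so A⁻¹ should send (1, 1) to (1/3)(1, 1):

```python
A = [[2.0, 1.0], [1.0, 2.0]]       # eigenvalues 3 and 1, eigenvectors (1,1) and (1,-1)
det = A[0][0]*A[1][1] - A[0][1]*A[1][0]
Ainv = [[ A[1][1]/det, -A[0][1]/det],
        [-A[1][0]/det,  A[0][0]/det]]   # 2x2 inverse by the cofactor formula

def apply(M, v):
    return (M[0][0]*v[0] + M[0][1]*v[1], M[1][0]*v[0] + M[1][1]*v[1])

x, lam = (1.0, 1.0), 3.0
y = apply(Ainv, x)                  # should equal (1/lam) * x
```
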
3.5.9 
Two Hermitian matrices A and B have the same eigenvalues. Show that A and B are related by a unitary similarity transformation. 

Since both A and B have the same eigenvalues, they can both be diagonalized according to 

Λ = UAU ^{†} , 
Λ = V BV ^{†} 

where Λ is the same diagonal matrix of eigenvalues. This means 

UAU ^{†} = V BV ^{†} 
⇒ 
B = V ^{†} UAU ^{†} V 

If we let W = V ^{†} U , its Hermitian conjugate is W ^{†} = (V ^{†} U ) ^{†} = U ^{†} V . This means that 

B 
= W AW ^{†} 
where W = V ^{†} U 

and WW ^{†} = V ^{†} UU ^{†} V = I . Hence A and B are related by a unitary similarity transformation. 

3.5.30 
a) Determine the eigenvalues and eigenvectors of 
    [ 1  ε ]
    [ ε  1 ]

Note that the eigenvalues are degenerate for ε = 0 but the eigenvectors are orthogonal for all ε ≠ 0 and ε → 0.
We ﬁrst ﬁnd the eigenvalues through the secular equation
    | 1 − λ    ε   |
    |   ε    1 − λ | = (1 − λ)² − ε² = 0

This is easily solved

    (λ − 1)² = ε²    ⇒    λ − 1 = ±ε    (3)

Hence the two eigenvalues are λ₊ = 1 + ε and λ₋ = 1 − ε.
For the eigenvectors, we start with λ₊ = 1 + ε. Substituting this into the eigenvalue problem (A − λI)x = 0, we find

    [ −ε   ε ] [ a ]
    [  ε  −ε ] [ b ] = 0    ⇒    ε(a − b) = 0    ⇒    a = b

Since the problem did not ask to normalize the eigenvectors, we can take simply

    λ₊ = 1 + ε:    x₊ = [ 1 ]
                        [ 1 ]
For λ₋ = 1 − ε, we obtain instead

    [ ε  ε ] [ a ]
    [ ε  ε ] [ b ] = 0    ⇒    ε(a + b) = 0    ⇒    a = −b

This gives

    λ₋ = 1 − ε:    x₋ = [ −1 ]
                        [  1 ]
Note that the eigenvectors x₊ and x₋ are orthogonal and independent of ε. In a way, we are just lucky that they are independent of ε (they did not have to turn out that way). However, orthogonality is guaranteed so long as the eigenvalues are distinct (i.e. ε ≠ 0). This was something we proved in class.
b) Determine the eigenvalues and eigenvectors of

    [ 1  ε² ]
    [ 1  1  ]

Note that the eigenvalues are degenerate for ε = 0 and for this (nonsymmetric) matrix the eigenvectors (ε = 0) do not span the space.
In this nonsymmetric case, the secular equation is

    | 1 − λ    ε²  |
    |   1    1 − λ | = (1 − λ)² − ε² = 0

Interestingly enough, this equation is the same as (3), even though the matrix is different. Hence this matrix has the same eigenvalues λ₊ = 1 + ε and λ₋ = 1 − ε.
For λ₊ = 1 + ε, the eigenvector equation is

    [ −ε  ε² ] [ a ]
    [  1  −ε ] [ b ] = 0    ⇒    −εa + ε²b = 0    ⇒    a = εb

Up to normalization, this gives

    λ₊ = 1 + ε:    x₊ = [ ε ]    (4)
                        [ 1 ]

For the other eigenvalue, λ₋ = 1 − ε, we find

    [ ε  ε² ] [ a ]
    [ 1   ε ] [ b ] = 0    ⇒    εa + ε²b = 0    ⇒    a = −εb

Hence, we obtain

    λ₋ = 1 − ε:    x₋ = [ −ε ]    (5)
                        [  1 ]

In this nonsymmetric case, the eigenvectors do depend on ε. Furthermore, when ε = 0 it is easy to see that both eigenvectors degenerate into the same vector

    [ 0 ]
    [ 1 ]
c) Find the cosine of the angle between the two eigenvectors as a function of ε for 0 ≤ ε ≤ 1.

For the eigenvectors of part a), they are orthogonal, so the angle is 90°. Thus this part really refers to the eigenvectors of part b). Recalling that the angle can be defined through the inner product, we have

    ⟨x₊|x₋⟩ = |x₊| |x₋| cos θ

or

    cos θ = ⟨x₊|x₋⟩ / (⟨x₊|x₊⟩^{1/2} ⟨x₋|x₋⟩^{1/2})

Using the eigenvectors of (4) and (5), we find

    cos θ = (1 − ε²) / (√(1 + ε²) √(1 + ε²)) = (1 − ε²)/(1 + ε²)
Recall that the Cauchy-Schwarz inequality guarantees that cos θ lies between −1 and +1. When ε = 0 we find cos θ = 1, so the eigenvectors are collinear (and degenerate), while for ε = 1 we find instead cos θ = 0, so the eigenvectors are orthogonal.
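The eigenvectors (4), (5) and the cos θ formula can be verified for a sample ε in a short Python sketch (not part of the original solution):

```python
import math

eps = 0.5
M = [[1.0, eps**2], [1.0, 1.0]]       # the nonsymmetric matrix of part b)
lam_p, lam_m = 1 + eps, 1 - eps
x_p, x_m = (eps, 1.0), (-eps, 1.0)    # eigenvectors (4) and (5)

def apply(A, v):
    return (A[0][0]*v[0] + A[0][1]*v[1], A[1][0]*v[0] + A[1][1]*v[1])

res_p = apply(M, x_p)   # should equal lam_p * x_p
res_m = apply(M, x_m)   # should equal lam_m * x_m

dot = x_p[0]*x_m[0] + x_p[1]*x_m[1]
cos_theta = dot / (math.hypot(*x_p) * math.hypot(*x_m))
```

For ε = 1/2 this gives cos θ = (1 − 1/4)/(1 + 1/4) = 0.6.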
Physics 451
Fall 2004
Homework Assignment #3 — Solutions
Textbook problems: Ch. 1: 1.7.1, 1.8.11, 1.8.16, 1.9.12, 1.10.4, 1.12.9 Ch. 2: 2.4.8, 2.4.11
Chapter 1
1.7.1 For a particle moving in a circular orbit

    r = x̂ r cos ωt + ŷ r sin ωt

(a) evaluate r × ṙ

Taking a time derivative of r, we obtain

    ṙ = −x̂ rω sin ωt + ŷ rω cos ωt    (1)

Hence

    r × ṙ = (x̂ r cos ωt + ŷ r sin ωt) × (−x̂ rω sin ωt + ŷ rω cos ωt)
          = (x̂ × ŷ) r²ω cos²ωt − (ŷ × x̂) r²ω sin²ωt
          = ẑ r²ω (sin²ωt + cos²ωt) = ẑ r²ω

(b) Show that r̈ + ω²r = 0

The acceleration is the time derivative of (1)

    r̈ = −x̂ rω² cos ωt − ŷ rω² sin ωt = −ω²(x̂ r cos ωt + ŷ r sin ωt) = −ω² r

Hence r̈ + ω²r = 0. This is of course the standard kinematics of uniform circular motion.
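Both results can be checked numerically, the first with the analytic velocity and the second by a finite-difference second derivative (a sketch with sample values for r and ω, not part of the original solution):

```python
import math

r0, w = 2.0, 1.5   # orbit radius and angular frequency (sample values)

def r(t):
    return (r0*math.cos(w*t), r0*math.sin(w*t), 0.0)

def rdot(t):
    return (-r0*w*math.sin(w*t), r0*w*math.cos(w*t), 0.0)

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

t = 0.37
L = cross(r(t), rdot(t))   # expect (0, 0, r0**2 * w), independent of t

h = 1e-5                   # central second difference approximates r''
rdd = tuple((r(t+h)[i] - 2*r(t)[i] + r(t-h)[i]) / h**2 for i in range(3))
resid = tuple(rdd[i] + w**2 * r(t)[i] for i in range(3))   # should be ~0
```
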
1.8.11 Verify the vector identity
∇ × ( A × B ) = ( B · ∇ ) A − ( A · ∇ ) B − B ( ∇ · A) + A( ∇ · B )
This looks like a good time for the BAC–CAB rule. However, we have to be careful, since ∇ has both derivative and vector properties. As a derivative, it operates on both A and B. Therefore, by the product rule of differentiation, we can write

    ∇ × (A × B) = ∇_A × (A × B) + ∇_B × (A × B)

where the subscript on ∇ indicates which factor the derivative is acting on. Now that we have specified exactly where the derivative goes, we can treat ∇ as a vector. Using the BAC–CAB rule (once for each term) gives

    ∇ × (A × B) = A(∇_A · B) − B(∇_A · A) + A(∇_B · B) − B(∇_B · A)    (2)

The first and last terms on the right-hand side are 'backwards'. However, we can turn them around. For example

    A(∇_A · B) = A(B · ∇_A) = (B · ∇)A

With all the derivatives acting in the right place [after flipping the first and last terms in (2)], we find simply

    ∇ × (A × B) = (B · ∇)A − B(∇ · A) + A(∇ · B) − (A · ∇)B

which is what we set out to prove.
1.8.16 An electric dipole of moment p is located at the origin. The dipole creates an electric potential at r given by
    ψ(r) = p · r / (4πε₀ r³)
Find the electric ﬁeld, E = − ∇ ψ at r .
We first use the quotient rule to write

    E = −∇ψ = −(1/4πε₀) ∇(p · r / r³) = −(1/4πε₀) [r³ ∇(p · r) − (p · r) ∇(r³)] / r⁶

Applying the chain rule to the second term in the numerator, we obtain

    E = −(1/4πε₀) [r³ ∇(p · r) − 3r²(p · r) ∇r] / r⁶
We now evaluate the two separate gradients

    ∇(p · r) = x̂_i ∂/∂x_i (p_j x_j) = x̂_i p_j ∂x_j/∂x_i = x̂_i p_j δ_ij = x̂_i p_i = p

and

    ∇r = x̂_i ∂/∂x_i (x₁² + x₂² + x₃²)^{1/2} = x̂_i (1/2)(2x_i)/(x₁² + x₂² + x₃²)^{1/2} = x̂_i x_i / r = r/r = r̂
Hence

    E = −(1/4πε₀) [r³ p − 3r²(p · r) r̂] / r⁶ = −(1/4πε₀) [p − 3(p · r̂) r̂] / r³
Note that we have used the fact that p is a constant, although this was never stated in the problem.
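The closed-form field can be checked against a direct finite-difference gradient of ψ. This Python sketch (not part of the original solution) uses units in which 4πε₀ = 1 and a sample dipole moment and field point:

```python
import math

p = (0.0, 0.0, 1.0)   # dipole moment along z (sample value); units with 4*pi*eps0 = 1

def psi(x, y, z):
    r = math.sqrt(x*x + y*y + z*z)
    return (p[0]*x + p[1]*y + p[2]*z) / r**3

def e_field(x, y, z):
    # closed-form result: E = -[p - 3(p.rhat) rhat] / r**3
    r = math.sqrt(x*x + y*y + z*z)
    rhat = (x/r, y/r, z/r)
    p_dot_rhat = sum(pi * ri for pi, ri in zip(p, rhat))
    return tuple(-(p[i] - 3*p_dot_rhat*rhat[i]) / r**3 for i in range(3))

pt, h = (0.8, -0.3, 1.1), 1e-6
grad = []
for i in range(3):        # central-difference gradient of psi
    up = [pt[j] + (h if j == i else 0.0) for j in range(3)]
    dn = [pt[j] - (h if j == i else 0.0) for j in range(3)]
    grad.append((psi(*up) - psi(*dn)) / (2*h))
e_num = tuple(-g for g in grad)   # E = -grad(psi)
e_ana = e_field(*pt)
```
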
1.9.12 Show that any solution of the equation
∇ × ( ∇ × A) − k ^{2} A = 0
automatically satisﬁes the vector Helmholtz equation
∇ ^{2} A + k ^{2} A = 0
and the solenoidal condition
∇ · A = 0
We actually follow the hint and demonstrate the solenoidal condition ﬁrst. Taking the divergence of the ﬁrst equation, we ﬁnd
∇ · ∇ × ( ∇ × A) − k ^{2} ∇ · A = 0
However, the divergence of a curl vanishes identically. Hence the first term is automatically equal to zero, and we are left with k² ∇ · A = 0, or (upon dividing by the nonzero constant k²) ∇ · A = 0.
We now return to the ﬁrst equation and simplify the double curl using the BAC– CAB rule (taking into account the fact that all derivatives must act on A)
∇ × ( ∇ × A) = ∇ ( ∇ · A) − ∇ ^{2} A
(3)
As a result, the ﬁrst equation becomes
∇ ( ∇ · A) − ∇ ^{2} A − k ^{2} A = 0
However, we have shown above that ∇· A = 0 for this problem. Thus (3) reduces to
∇ ^{2} A + k ^{2} A = 0
which is what we wanted to show.
1.10.4
Evaluate ∮ r · dr
We have evaluated this integral in class. For a line integral from point 1 to point 2, we have
    ∫₁² r · dr = ½ ∫₁² d(r²) = ½ r² |₁² = ½ r₂² − ½ r₁²
However for a closed path, point 1 and point 2 are the same. Thus the integral along a closed loop vanishes, ^{} r · d r = 0. Note that this vanishing of the line integral around a closed loop is the sign of a conservative force.
Alternatively, we can apply Stokes' theorem

    ∮ r · dr = ∫_S (∇ × r) · dσ
It is easy to see that r is curl-free. Hence the surface integral on the right-hand side vanishes.
1.12.9 Prove that
    ∮ u ∇v · dλ = −∮ v ∇u · dλ
This is an application of Stokes' theorem. Let us write

    ∮ (u ∇v + v ∇u) · dλ = ∫_S ∇ × (u ∇v + v ∇u) · dσ    (4)

We now expand the curl using
∇ × (u ∇ v ) = ( ∇ u ) × ( ∇ v ) + u ∇ × ∇ v = ( ∇ u ) × ( ∇ v )
where we have also used the fact that the curl of a gradient vanishes. Returning to (4), this indicates that
    ∮ (u ∇v + v ∇u) · dλ = ∫_S [(∇u) × (∇v) + (∇v) × (∇u)] · dσ = 0

where the vanishing of the right-hand side is guaranteed by the antisymmetry of the cross product, A × B = −B × A.
Chapter 2
2.4.8 Find the circular cylindrical components of the velocity and acceleration of a moving particle
We ﬁrst explore the time derivatives of the cylindrical coordinate basis vectors. Since
ρˆ = (cos ϕ, sin ϕ, 0),
ϕˆ = (− sin ϕ, cos ϕ, 0),
zˆ = (0 , 0 , 1)
their derivatives are

    ∂ρ̂/∂ϕ = (−sin ϕ, cos ϕ, 0) = ϕ̂,    ∂ϕ̂/∂ϕ = (−cos ϕ, −sin ϕ, 0) = −ρ̂

Using the chain rule, this indicates that

    dρ̂/dt = (∂ρ̂/∂ϕ) ϕ̇ = ϕ̂ ϕ̇,    dϕ̂/dt = (∂ϕ̂/∂ϕ) ϕ̇ = −ρ̂ ϕ̇    (5)
Now, we note that the position vector is given by
r = ρρˆ + zzˆ
So all we have to do to find the velocity is to take a time derivative

    v = ṙ = ρ̂ ρ̇ + ẑ ż + ρ (dρ̂/dt) + z (dẑ/dt) = ρ̂ ρ̇ + ẑ ż + ϕ̂ ρ ϕ̇

Note that we have used the expression for dρ̂/dt in (5). Taking one more time derivative yields the acceleration

    a = v̇ = ρ̂ ρ̈ + ẑ z̈ + ϕ̂ (ρ ϕ̈ + ρ̇ ϕ̇) + (dρ̂/dt) ρ̇ + (dϕ̂/dt) ρ ϕ̇
      = ρ̂ ρ̈ + ẑ z̈ + ϕ̂ (ρ ϕ̈ + ρ̇ ϕ̇) + ϕ̂ ρ̇ ϕ̇ − ρ̂ ρ ϕ̇²
      = ρ̂ (ρ̈ − ρ ϕ̇²) + ẑ z̈ + ϕ̂ (ρ ϕ̈ + 2 ρ̇ ϕ̇)
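The final acceleration formula can be checked against a direct Cartesian second derivative for a sample trajectory (a Python sketch, not part of the original solution; the trajectory and step size are my choices):

```python
import math

# sample trajectory: rho = 1 + 0.1 t**2, phi = 0.5 t, z = 0.3 t
def pos(t):
    rho, phi = 1 + 0.1*t*t, 0.5*t
    return (rho*math.cos(phi), rho*math.sin(phi), 0.3*t)

t0, h = 1.0, 1e-5
# central second difference approximates the Cartesian acceleration
acc_num = tuple((pos(t0+h)[i] - 2*pos(t0)[i] + pos(t0-h)[i]) / h**2
                for i in range(3))

# derivatives of the trajectory at t0 = 1, computed by hand
rho, rhod, rhodd = 1.1, 0.2, 0.2
phi, phid, phidd = 0.5, 0.5, 0.0

a_rho = rhodd - rho*phid**2          # rho-hat component of the formula
a_phi = rho*phidd + 2*rhod*phid      # phi-hat component of the formula
acc_formula = (a_rho*math.cos(phi) - a_phi*math.sin(phi),
               a_rho*math.sin(phi) + a_phi*math.cos(phi),
               0.0)                  # z-double-dot = 0 for this trajectory
```
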
2.4.11 For the ﬂow of an incompressible viscous ﬂuid the NavierStokes equations lead to
− ∇ × ( v × ( ∇ ×