
Physics 451

Fall 2004

Homework Assignment #1 — Solutions

Textbook problems: Ch. 1: 1.1.5, 1.3.3, 1.4.7, 1.5.5, 1.5.6 Ch. 3: 3.2.4, 3.2.19, 3.2.27

Chapter 1

1.1.5

A sailboat sails for 1 hr at 4 km/hr (relative to the water) on a steady compass heading of 40° east of north. The sailboat is simultaneously carried along by a current. At the end of the hour the boat is 6.12 km from its starting point. The line from its starting point to its location lies 60° east of north. Find the x (easterly) and y (northerly) components of the water's velocity.

This is a straightforward relative velocity (vector addition) problem. Let v_bl denote the velocity of the boat with respect to land, v_bw the velocity of the boat with respect to the water, and v_wl the velocity of the water with respect to land. Then

v_bl = v_bw + v_wl

where

v_bw = 4 km/hr @ 50° = (2.57 x̂ + 3.06 ŷ) km/hr
v_bl = 6.12 km/hr @ 30° = (5.30 x̂ + 3.06 ŷ) km/hr

Thus

v_wl = v_bl − v_bw = 2.73 x̂ km/hr
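As a quick numerical check of this vector subtraction (a Python sketch; the angles 50° and 30° are measured from the easterly x axis, as above):

```python
import math

def polar(mag, deg):
    """Vector components from a magnitude and an angle measured from the +x (east) axis."""
    rad = math.radians(deg)
    return (mag * math.cos(rad), mag * math.sin(rad))

v_bw = polar(4.0, 50.0)    # boat w.r.t. water: 40 deg east of north = 50 deg from east
v_bl = polar(6.12, 30.0)   # boat w.r.t. land: 60 deg east of north = 30 deg from east

# water w.r.t. land: v_wl = v_bl - v_bw
v_wl = (v_bl[0] - v_bw[0], v_bl[1] - v_bw[1])
print(v_wl)   # x (easterly) component about 2.73 km/hr, y (northerly) about 0
```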

1.3.3 The vector r, starting at the origin, terminates at and specifies the point in space (x, y, z). Find the surface swept out by the tip of r if

(a) (r − a) · a = 0

The vanishing of the dot product indicates that the vector r − a is perpendicular to the constant vector a. As a result, r − a must lie in a plane perpendicular to a. This means r itself must lie in a plane passing through the tip of a and perpendicular to a.

[Figure: the vectors a, r, and r − a, with r − a lying in the plane perpendicular to a]

(b) (r − a) · r = 0

This time the vector r − a has to be perpendicular to the position vector r itself. It is perhaps harder to see what this is in three dimensions. However, for two dimensions, we find

[Figure: the vectors a, r, and r − a for points on a circle through the origin and the tip of a]

which gives a circle. In three dimensions, this is a sphere. Note that we can also complete the square to obtain

(r − a) · r = |r − ½a|² − |½a|²

Hence we end up with the equation for a sphere of radius |a|/2 centered at the point a/2

|r − ½a|² = |½a|²
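The completed square can be spot-checked numerically. The sketch below (in Python, with an arbitrarily chosen vector a) generates points on the sphere of radius |a|/2 centered at a/2 and verifies that each satisfies (r − a) · r = 0:

```python
import math
import random

random.seed(0)
a = (1.0, 2.0, 2.0)   # an arbitrary constant vector (assumption for this check)
amag = math.sqrt(sum(x * x for x in a))

max_dev = 0.0
for _ in range(5):
    # random point on the sphere |r - a/2| = |a|/2
    u = [random.gauss(0.0, 1.0) for _ in range(3)]
    norm = math.sqrt(sum(x * x for x in u))
    r = [a[i] / 2 + (amag / 2) * u[i] / norm for i in range(3)]
    # it should satisfy (r - a) . r = 0
    dev = abs(sum((r[i] - a[i]) * r[i] for i in range(3)))
    max_dev = max(max_dev, dev)

print(max_dev)   # numerically zero
```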

1.4.7 Prove that (A × B) · (A × B) = (AB)² − (A · B)².

This can be shown just by a straightforward computation. Since

A × B = (A_y B_z − A_z B_y) x̂ + (A_z B_x − A_x B_z) ŷ + (A_x B_y − A_y B_x) ẑ

we find

|A × B|² = (A_y B_z − A_z B_y)² + (A_z B_x − A_x B_z)² + (A_x B_y − A_y B_x)²
         = A_x²B_y² + A_x²B_z² + A_y²B_x² + A_y²B_z² + A_z²B_x² + A_z²B_y²
           − 2A_x B_x A_y B_y − 2A_x B_x A_z B_z − 2A_y B_y A_z B_z
         = (A_x² + A_y² + A_z²)(B_x² + B_y² + B_z²) − (A_x B_x + A_y B_y + A_z B_z)²

where we had to add and subtract A_x²B_x² + A_y²B_y² + A_z²B_z² and do some factorization to obtain the last line.

However, there is a more elegant approach to this problem. Recall that cross products are related to sin θ and dot products are related to cos θ. Then

|A × B|² = (AB sin θ)² = (AB)²(1 − cos² θ) = (AB)² − (AB cos θ)² = (AB)² − (A · B)²
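Both derivations can be sanity-checked numerically. A minimal Python sketch, using arbitrary test vectors (not from the text):

```python
def dot(A, B):
    return sum(x * y for x, y in zip(A, B))

def cross(A, B):
    return (A[1]*B[2] - A[2]*B[1],
            A[2]*B[0] - A[0]*B[2],
            A[0]*B[1] - A[1]*B[0])

A = (1.0, -2.0, 3.0)   # arbitrary test vectors (assumptions)
B = (4.0, 0.5, -1.0)

lhs = dot(cross(A, B), cross(A, B))           # (A x B) . (A x B)
rhs = dot(A, A) * dot(B, B) - dot(A, B)**2    # (AB)^2 - (A . B)^2
assert abs(lhs - rhs) < 1e-9
print("Lagrange identity verified")
```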

1.5.5

The orbital angular momentum L of a particle is given by L = r × p = m r × v where p is the linear momentum. With linear and angular velocity related by v = ω × r , show that

L = mr²[ω − r̂(r̂ · ω)]

Here, r̂ is a unit vector in the r direction.

Using L = m r × v and v = ω × r , we find

L = m r × ( ω × r )

Because of the double cross product, this is the perfect opportunity to use the “BAC–CAB” rule: A × (B × C) = B(A · C) − C(A · B)

L = m[ω(r · r) − r(r · ω)] = m[ωr² − r(r · ω)]

Using r = r r̂ and factoring out r², we then obtain

L = mr²[ω − r̂(r̂ · ω)]    (1)

1.5.6 The kinetic energy of a single particle is given by T = ½mv². For rotational motion this becomes ½m(ω × r)². Show that

T = ½m[r²ω² − (r · ω)²]

We can use the result of problem 1.4.7:

T = ½m(ω × r)² = ½m[(ωr)² − (ω · r)²] = ½m[r²ω² − (r · ω)²]

Note that we could have written this in terms of unit vectors

T = ½mr²[ω² − (r̂ · ω)²]

Comparing this with (1) above, we find that

T = ½ L · ω

which is not a coincidence.
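The identities of problems 1.5.5 and 1.5.6 are easy to verify numerically. A Python sketch with arbitrarily chosen r and ω (the sample values are assumptions, not from the text):

```python
def dot(A, B):
    return sum(x * y for x, y in zip(A, B))

def cross(A, B):
    return (A[1]*B[2] - A[2]*B[1],
            A[2]*B[0] - A[0]*B[2],
            A[0]*B[1] - A[1]*B[0])

m = 2.0
r = (1.0, 2.0, 2.0)      # sample position (assumption)
w = (0.5, -1.0, 0.25)    # sample angular velocity omega (assumption)

v = cross(w, r)                        # v = omega x r
L = tuple(m * c for c in cross(r, v))  # L = m r x v
T = 0.5 * m * dot(v, v)                # T = (1/2) m v^2

r2 = dot(r, r)
# L = m r^2 [omega - rhat (rhat . omega)] = m [r^2 omega - r (r . omega)]
L_closed = tuple(m * (r2 * w[i] - r[i] * dot(r, w)) for i in range(3))
# T = (1/2) m [r^2 w^2 - (r . omega)^2]
T_closed = 0.5 * m * (r2 * dot(w, w) - dot(r, w)**2)

assert all(abs(L[i] - L_closed[i]) < 1e-9 for i in range(3))
assert abs(T - T_closed) < 1e-9
assert abs(T - 0.5 * dot(L, w)) < 1e-9   # T = (1/2) L . omega
print("1.5.5 and 1.5.6 identities verified")
```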

Chapter 3

3.2.4 (a) Complex numbers, a + ib, with a and b real, may be represented by (or are isomorphic with) 2 × 2 matrices:

a + ib ↔ [ a  −b ]
         [ b   a ]

Show that this matrix representation is valid for (i) addition and (ii) multiplication.

Let us start with addition. For complex numbers, we have (straightforwardly)

(a + ib) + (c + id ) = ( a + c) + i(b + d )

whereas, if we used matrices we would get

[ a  −b ]   [ c  −d ]   [ a + c  −(b + d) ]
[ b   a ] + [ d   c ] = [ b + d    a + c  ]

which shows that the sum of matrices yields the proper representation of the complex number (a + c) + i(b + d ).

We now handle multiplication in the same manner. First, we have

(a + ib)(c + id) = (ac − bd) + i(ad + bc)

while matrix multiplication gives

[ a  −b ] [ c  −d ]   [ ac − bd  −(ad + bc) ]
[ b   a ] [ d   c ] = [ ad + bc    ac − bd  ]

which is again the correct result.

(b) Find the matrix corresponding to (a + ib)⁻¹.

We can find the matrix in two ways. We first do standard complex arithmetic

(a + ib)⁻¹ = 1/(a + ib) = (a − ib)/((a + ib)(a − ib)) = (a − ib)/(a² + b²)

This corresponds to the 2 × 2 matrix

(a + ib)⁻¹ ↔ (1/(a² + b²)) [  a  b ]
                           [ −b  a ]

Alternatively, we first convert to a matrix representation, and then find the inverse matrix

(a + ib)⁻¹ ↔ [ a  −b ]⁻¹ = (1/(a² + b²)) [  a  b ]
             [ b   a ]                   [ −b  a ]

Either way, we obtain the same result.
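The whole representation can be checked in a few lines. A Python sketch using exact rational arithmetic (the sample values a = 3, b = 4, c = 1, d = 2 are arbitrary assumptions):

```python
from fractions import Fraction

def rep(x, y):
    """2x2 matrix representing x + iy."""
    return [[x, -y], [y, x]]

def madd(M, N):
    return [[M[i][j] + N[i][j] for j in range(2)] for i in range(2)]

def mmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

a, b, c, d = Fraction(3), Fraction(4), Fraction(1), Fraction(2)

# (i) addition and (ii) multiplication match complex arithmetic
assert madd(rep(a, b), rep(c, d)) == rep(a + c, b + d)
assert mmul(rep(a, b), rep(c, d)) == rep(a * c - b * d, a * d + b * c)

# (b) the inverse matrix represents (a + ib)^(-1) = (a - ib)/(a^2 + b^2)
n = a * a + b * b
inv = rep(a / n, -b / n)
assert mmul(rep(a, b), inv) == rep(Fraction(1), Fraction(0))
print("matrix representation verified")
```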

3.2.19 An operator P commutes with J x and J y , the x and y components of an angular momentum operator. Show that P commutes with the third component of angular momentum; that is,

[ P , J z ] = 0

We begin with the statement that P commutes with J x and J y . This may be expressed as [ P , J x ] = 0 and [ P , J y ] = 0 or equivalently as P J x = J x P and P J y = J y P . We also take the hint into account and note that J x and J y satisfy the commutation relation

[J x , J y ] = iJ z

or equivalently J_z = −i[J_x, J_y]. Substituting this in for J_z, we find the double commutator

[P, J_z] = [P, −i[J_x, J_y]] = −i[P, [J_x, J_y]]

Note that we are able to pull the −i factor out of the commutator. From here, we may expand all the commutators to find

[P, [J_x, J_y]] = P J_x J_y − P J_y J_x − J_x J_y P + J_y J_x P
               = J_x P J_y − J_y P J_x − J_x P J_y + J_y P J_x
               = 0

To get from the first to the second line, we commuted P past either J_x or J_y as appropriate. Of course, a quicker way to do this problem is to use the Jacobi identity [A, [B, C]] = [B, [A, C]] − [C, [A, B]] to obtain

[P, [J_x, J_y]] = [J_x, [P, J_y]] − [J_y, [P, J_x]]

The right hand side clearly vanishes, since P commutes with both J x and J y .
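As a concrete (representation-dependent) illustration, the statement can be checked with the spin-½ matrices J = σ/2; taking P to be a multiple of the identity is an assumption for the check only, since in two dimensions only multiples of the identity commute with both J_x and J_y:

```python
def mmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def msub(M, N):
    return [[M[i][j] - N[i][j] for j in range(2)] for i in range(2)]

def comm(M, N):   # commutator [M, N]
    return msub(mmul(M, N), mmul(N, M))

def iszero(M, tol=1e-12):
    return all(abs(M[i][j]) < tol for i in range(2) for j in range(2))

# spin-1/2 matrices (hbar = 1), which satisfy [Jx, Jy] = i Jz
Jx = [[0, 0.5], [0.5, 0]]
Jy = [[0, -0.5j], [0.5j, 0]]
Jz = [[0.5, 0], [0, -0.5]]
P  = [[3, 0], [0, 3]]   # sample P commuting with Jx and Jy (assumption)

iJz = [[1j * Jz[i][j] for j in range(2)] for i in range(2)]
assert iszero(msub(comm(Jx, Jy), iJz))             # [Jx, Jy] = i Jz
assert iszero(comm(P, Jx)) and iszero(comm(P, Jy))
assert iszero(comm(P, Jz))                         # hence [P, Jz] = 0
print("[P, Jz] = 0 verified")
```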

3.2.27 (a) The operator Tr replaces a matrix A by its trace; that is

Tr(A) = trace(A) = Σ_i a_ii

Show that Tr is a linear operator.

Recall that to show that Tr is linear we may prove that Tr (αA + βB ) = α Tr ( A)+ β Tr ( B ) where α and β are numbers. However, this is a simple property of arithmetic

Tr(αA + βB) = Σ_i (α a_ii + β b_ii) = α Σ_i a_ii + β Σ_i b_ii = α Tr(A) + β Tr(B)

(b) The operator det replaces a matrix A by its determinant; that is

det(A) = determinant of A

Show that det is not a linear operator.

In this case all we need to do is find a single counterexample. For example, for an n × n matrix, the properties of the determinant yield

det(αA) = αⁿ det(A)

This is not linear unless n = 1 (in which case A is really a single number and not a matrix). There are of course many other examples that one could come up with to show that det is not a linear operator.
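Both parts can be illustrated numerically. A Python sketch with arbitrary 2 × 2 test matrices (assumptions):

```python
def trace(M):
    return sum(M[i][i] for i in range(len(M)))

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

A = [[1, 2], [3, 4]]   # arbitrary test matrices (assumptions)
B = [[5, 6], [7, 8]]
alpha, beta = 2, -3

# Tr is linear
S = [[alpha * A[i][j] + beta * B[i][j] for j in range(2)] for i in range(2)]
assert trace(S) == alpha * trace(A) + beta * trace(B)

# det is not: det(alpha A) = alpha^n det(A), with n = 2 here
aA = [[alpha * A[i][j] for j in range(2)] for i in range(2)]
assert det2(aA) == alpha**2 * det2(A)
assert det2(aA) != alpha * det2(A)   # so det fails the linearity test
print("Tr is linear; det is not")
```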

Physics 451

Fall 2004

Homework Assignment #2 — Solutions

Textbook problems: Ch. 3: 3.3.1, 3.3.12, 3.3.13, 3.5.4, 3.5.6, 3.5.9, 3.5.30

Chapter 3

3.3.1 Show that the product of two orthogonal matrices is orthogonal.

Suppose matrices A and B are orthogonal. This means that AᵀA = I and BᵀB = I. We now denote the product of A and B by C = AB. To show that C is orthogonal, we compute CᵀC and see what happens. Recalling that the transpose of a product is the reversed product of the transposes, we have

CᵀC = (AB)ᵀ(AB) = BᵀAᵀAB = BᵀB = I

Hence C is orthogonal.

This is a key step in showing that the orthogonal matrices form a group, because one of the requirements of being a group is that the product of any two elements (i.e. A and B) in the group yields a result (i.e. C) that is also in the group. This is also known as closure. Along with closure, we also need to show associativity (okay for matrices), the existence of an identity element (also okay for matrices), and the existence of an inverse (okay for orthogonal matrices). Since all four conditions are satisfied, the set of n × n orthogonal matrices forms the orthogonal group, denoted O(n). While general orthogonal matrices have determinant ±1, the subgroup of matrices with determinant +1 forms the “special orthogonal” group SO(n).
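A quick numerical illustration of closure (a Python sketch using two rotation matrices as the sample orthogonal matrices; the angles are arbitrary):

```python
import math

def mmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def is_orthogonal(M, tol=1e-12):
    # check M^T M = I
    P = mmul([[M[j][i] for j in range(2)] for i in range(2)], M)
    return all(abs(P[i][j] - (1.0 if i == j else 0.0)) < tol
               for i in range(2) for j in range(2))

def rotation(t):
    return [[math.cos(t), -math.sin(t)], [math.sin(t), math.cos(t)]]

A, B = rotation(0.3), rotation(1.1)   # sample orthogonal matrices (assumptions)
C = mmul(A, B)
assert is_orthogonal(A) and is_orthogonal(B)
assert is_orthogonal(C)   # closure: the product is again orthogonal
print("C^T C = I verified")
```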

3.3.12 A is 2 × 2 and orthogonal. Find the most general form of

A = [ a  b ]
    [ c  d ]

Compare with two-dimensional rotation.

Since A is orthogonal, it must satisfy the condition AAᵀ = I, or

[ a  b ] [ a  c ]   [ a² + b²   ac + bd ]   [ 1  0 ]
[ c  d ] [ b  d ] = [ ac + bd   c² + d² ] = [ 0  1 ]

This gives three conditions

i) a² + b² = 1,    ii) c² + d² = 1,    iii) ac + bd = 0

These are three equations for four unknowns, so there will be a free parameter left over. There are many ways to solve the equations. However, one nice way is

to notice that a 2 + b 2 = 1 is the equation for a unit circle in the ab plane. This means we can write a and b in terms of an angle θ

a = cos θ,

b = sin θ

Similarly, c 2 + d 2 = 1 can be solved by setting

c = cos φ,

d = sin φ

Of course, we have one more equation to solve, ac + bd = 0, which becomes

cos θ cos φ + sin θ sin φ = cos(θ − φ) = 0

This means that θ − φ = π/2 or θ − φ = 3π/2. We must consider both cases separately.

φ = θ − π/2: This gives

c = cos(θ − π/2) = sin θ,    d = sin(θ − π/2) = −cos θ

or

A₁ = [ cos θ   sin θ ]
     [ sin θ  −cos θ ]    (1)

This looks almost like a rotation, but not quite (since the minus sign is in the wrong place).

φ = θ − 3π/2: This gives

c = cos(θ − 3π/2) = −sin θ,    d = sin(θ − 3π/2) = cos θ

or

A₂ = [ cos θ  −sin θ ]
     [ sin θ   cos θ ]    (2)

which is exactly a rotation.

Note that we can tell the difference between matrices of type (1) and (2) by computing the determinant. We see that det A₁ = −1 while det A₂ = +1. In fact, the A₂ type of matrices form the SO(2) group, which is exactly the group of rotations in the plane. On the other hand, the A₁ type of matrices represent rotations followed by a mirror reflection y → −y. This can be seen by writing

A₁ = [ 1   0 ] [  cos θ  sin θ ]
     [ 0  −1 ] [ −sin θ  cos θ ]

Note that the set of A 1 matrices by themselves do not form a group (since they do not contain the identity, and since they do not close under multiplication). However the set of all orthogonal matrices { A 1 , A 2 } forms the O (2) group, which is the group of rotations and mirror reflections in two dimensions.
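The two families (1) and (2) are easy to probe numerically. A Python sketch (the angle θ = 0.7 is an arbitrary sample):

```python
import math

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def A1(t):   # type (1): rotation combined with the reflection y -> -y
    return [[math.cos(t), math.sin(t)], [math.sin(t), -math.cos(t)]]

def A2(t):   # type (2): pure rotation
    return [[math.cos(t), -math.sin(t)], [math.sin(t), math.cos(t)]]

t = 0.7   # arbitrary sample angle
for M in (A1(t), A2(t)):
    # check orthogonality: M^T M = I
    MT_M = [[sum(M[k][i] * M[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]
    assert all(abs(MT_M[i][j] - (1.0 if i == j else 0.0)) < 1e-12
               for i in range(2) for j in range(2))

d1, d2 = det2(A1(t)), det2(A2(t))
print(d1, d2)   # the determinant distinguishes reflection (-1) from rotation (+1)
```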

3.3.13 Here |x⟩ and |y⟩ are column vectors. Under an orthogonal transformation S, |x′⟩ = S|x⟩, |y′⟩ = S|y⟩. Show that the scalar product ⟨x|y⟩ is invariant under this orthogonal transformation.

To prove the invariance of the scalar product, we compute

⟨x′|y′⟩ = ⟨x|SᵀS|y⟩ = ⟨x|y⟩

where we used SᵀS = I for an orthogonal matrix S. This demonstrates that the scalar product is invariant (the same in the primed and unprimed frames).

3.5.4 Show that a real matrix that is not symmetric cannot be diagonalized by an orthogonal similarity transformation.

We take the hint, and start by denoting the real non-symmetric matrix by A. Assuming that A can be diagonalized by an orthogonal similarity transformation, that means there exists an orthogonal matrix S such that

Λ = S A Sᵀ

where Λ is diagonal. We can ‘invert’ this relation by multiplying both sides on the left by Sᵀ and on the right by S. This yields

A = Sᵀ Λ S

Taking the transpose of A, we find

Aᵀ = (Sᵀ Λ S)ᵀ = Sᵀ Λᵀ (Sᵀ)ᵀ

However, the transpose of a transpose is the original matrix, (Sᵀ)ᵀ = S, and the transpose of a diagonal matrix is the original matrix, Λᵀ = Λ. Hence

Aᵀ = Sᵀ Λ S = A

Since the matrix A is equal to its transpose, A has to be a symmetric matrix.

However, recall that A is supposed to be non-symmetric. Hence we run into a contradiction. As a result, we must conclude that A cannot be diagonalized by an orthogonal similarity transformation.

3.5.6 A has eigenvalues λ_i and corresponding eigenvectors |x_i⟩. Show that A⁻¹ has the same eigenvectors but with eigenvalues λ_i⁻¹.

If A has eigenvalues λ_i and eigenvectors |x_i⟩, that means

A|x_i⟩ = λ_i |x_i⟩

Multiplying both sides by A⁻¹ on the left, we find

A⁻¹A|x_i⟩ = λ_i A⁻¹|x_i⟩

or

|x_i⟩ = λ_i A⁻¹|x_i⟩

Rewriting this as

A⁻¹|x_i⟩ = λ_i⁻¹ |x_i⟩

it is now obvious that A⁻¹ has the same eigenvectors, but eigenvalues λ_i⁻¹.
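A numerical illustration with a small symmetric test matrix (the matrix and its eigenpairs below are assumptions chosen for the check, not from the text):

```python
from fractions import Fraction as F

A = [[F(2), F(1)], [F(1), F(2)]]   # symmetric sample matrix (assumption)

# its inverse, via the 2x2 cofactor formula
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
Ainv = [[ A[1][1] / det, -A[0][1] / det],
        [-A[1][0] / det,  A[0][0] / det]]

def matvec(M, v):
    return tuple(M[i][0] * v[0] + M[i][1] * v[1] for i in range(2))

# eigenpairs of A: lambda = 3 with (1, 1), lambda = 1 with (1, -1)
for lam, v in ((F(3), (F(1), F(1))), (F(1), (F(1), F(-1)))):
    assert matvec(A, v) == (lam * v[0], lam * v[1])       # A|x> = lambda |x>
    assert matvec(Ainv, v) == (v[0] / lam, v[1] / lam)    # A^-1|x> = (1/lambda)|x>
print("A^-1 has the same eigenvectors, with eigenvalues 1/lambda")
```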

3.5.9

Two Hermitian matrices A and B have the same eigenvalues. Show that A and B are related by a unitary similarity transformation.

Since both A and B have the same eigenvalues, they can both be diagonalized according to

Λ = U A U†,    Λ = V B V†

where Λ is the same diagonal matrix of eigenvalues. This means

U A U† = V B V†    ⇒    B = V† U A U† V

If we let W = U†V, its Hermitian conjugate is W† = (U†V)† = V†U. This means that

B = W† A W    where W = U†V

and W†W = V†UU†V = I. Hence A and B are related by a unitary similarity transformation.

3.5.30

a) Determine the eigenvalues and eigenvectors of

[ 1  ε ]
[ ε  1 ]

Note that the eigenvalues are degenerate for ε = 0 but the eigenvectors are orthogonal for all ε ≠ 0 and ε → 0.

We first find the eigenvalues through the secular equation

| 1 − λ     ε   |
|   ε     1 − λ | = (1 − λ)² − ε² = 0

This is easily solved

(λ − 1)² = ε²    ⇒    λ − 1 = ±ε    (3)

Hence the two eigenvalues are λ₊ = 1 + ε and λ₋ = 1 − ε.

For the eigenvectors, we start with λ₊ = 1 + ε. Substituting this into the eigenvalue problem (A − λI)|x⟩ = 0, we find

[ −ε   ε ] [ a ]
[  ε  −ε ] [ b ] = 0    ⇒    ε(a − b) = 0    ⇒    a = b

Since the problem did not ask us to normalize the eigenvectors, we can simply take

λ₊ = 1 + ε:    |x₊⟩ = [ 1 ]
                      [ 1 ]

For λ₋ = 1 − ε, we obtain instead

[ ε  ε ] [ a ]
[ ε  ε ] [ b ] = 0    ⇒    ε(a + b) = 0    ⇒    a = −b

This gives

λ₋ = 1 − ε:    |x₋⟩ = [  1 ]
                      [ −1 ]

Note that the eigenvectors |x₊⟩ and |x₋⟩ are orthogonal and independent of ε. In a way, we are just lucky that they are independent of ε (they did not have to turn out that way). However, orthogonality is guaranteed so long as the eigenvalues are distinct (i.e. ε ≠ 0). This was something we proved in class.

b) Determine the eigenvalues and eigenvectors of

[ 1  ε² ]
[ 1  1  ]

Note that the eigenvalues are degenerate for ε = 0 and for this (nonsymmetric) matrix the eigenvectors (ε → 0) do not span the space.

In this nonsymmetric case, the secular equation is

| 1 − λ    ε²   |
|   1     1 − λ | = (1 − λ)² − ε² = 0

Interestingly enough, this equation is the same as (3), even though the matrix is different. Hence this matrix has the same eigenvalues λ₊ = 1 + ε and λ₋ = 1 − ε.

For λ₊ = 1 + ε, the eigenvector equation is

[ −ε  ε² ] [ a ]
[  1  −ε ] [ b ] = 0    ⇒    a − εb = 0    ⇒    a = εb

Up to normalization, this gives

λ₊ = 1 + ε:    |x₊⟩ = [ ε ]
                      [ 1 ]    (4)

For the other eigenvalue, λ₋ = 1 − ε, we find

[ ε  ε² ] [ a ]
[ 1   ε ] [ b ] = 0    ⇒    a + εb = 0    ⇒    a = −εb

Hence, we obtain

λ₋ = 1 − ε:    |x₋⟩ = [ −ε ]
                      [  1 ]    (5)

In this nonsymmetric case, the eigenvectors do depend on ε. Furthermore, when ε = 0 both eigenvectors degenerate into the same vector (0, 1)ᵀ, and hence do not span the two-dimensional space.

c) Find the cosine of the angle between the two eigenvectors as a function of ε for 0 ≤ ε ≤ 1.

The eigenvectors of part a) are orthogonal, so the angle is 90°. Thus this part really refers to the eigenvectors of part b). Recalling that the angle can be defined through the inner product, we have

⟨x₊|x₋⟩ = |x₊| |x₋| cos θ

or

cos θ = ⟨x₊|x₋⟩ / (⟨x₊|x₊⟩^(1/2) ⟨x₋|x₋⟩^(1/2))

Using the eigenvectors of (4) and (5), we find

cos θ = (1 − ε²) / (√(1 + ε²) √(1 + ε²)) = (1 − ε²)/(1 + ε²)

Recall that the Cauchy-Schwarz inequality guarantees that cos θ lies between −1 and +1. When ε = 0 we find cos θ = 1, so the eigenvectors are collinear (and degenerate), while for ε = 1, we find instead cos θ = 0, so the eigenvectors are orthogonal.
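All three parts can be verified numerically. A Python sketch using the sample value ε = 0.5 (an arbitrary choice with 0 < ε < 1):

```python
import math

def matvec(M, v):
    return tuple(sum(M[i][j] * v[j] for j in range(2)) for i in range(2))

eps = 0.5   # sample value with 0 < eps < 1 (assumption)

# part (a): symmetric matrix; eigenvectors are independent of eps
A = [[1.0, eps], [eps, 1.0]]
for lam, v in ((1 + eps, (1.0, 1.0)), (1 - eps, (1.0, -1.0))):
    Av = matvec(A, v)
    assert all(abs(Av[i] - lam * v[i]) < 1e-12 for i in range(2))

# part (b): nonsymmetric matrix; eigenvectors (eps, 1) and (-eps, 1)
B = [[1.0, eps**2], [1.0, 1.0]]
xp, xm = (eps, 1.0), (-eps, 1.0)
for lam, v in ((1 + eps, xp), (1 - eps, xm)):
    Bv = matvec(B, v)
    assert all(abs(Bv[i] - lam * v[i]) < 1e-12 for i in range(2))

# part (c): cos(theta) = (1 - eps^2)/(1 + eps^2)
cos_theta = (xp[0] * xm[0] + xp[1] * xm[1]) / (math.hypot(*xp) * math.hypot(*xm))
assert abs(cos_theta - (1 - eps**2) / (1 + eps**2)) < 1e-12
print(cos_theta)   # approximately 0.6 for eps = 0.5
```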

Physics 451

Fall 2004

Homework Assignment #3 — Solutions

Textbook problems: Ch. 1: 1.7.1, 1.8.11, 1.8.16, 1.9.12, 1.10.4, 1.12.9 Ch. 2: 2.4.8, 2.4.11

Chapter 1

1.7.1 For a particle moving in a circular orbit r = x̂ r cos ωt + ŷ r sin ωt

(a) evaluate r × ṙ

Taking a time derivative of r, we obtain

ṙ = −x̂ rω sin ωt + ŷ rω cos ωt    (1)

Hence

r × ṙ = (x̂ r cos ωt + ŷ r sin ωt) × (−x̂ rω sin ωt + ŷ rω cos ωt)
      = (x̂ × ŷ) r²ω cos² ωt − (ŷ × x̂) r²ω sin² ωt
      = ẑ r²ω (sin² ωt + cos² ωt) = ẑ r²ω

(b) Show that r̈ + ω²r = 0

The acceleration is the time derivative of (1)

r̈ = −x̂ rω² cos ωt − ŷ rω² sin ωt = −ω² (x̂ r cos ωt + ŷ r sin ωt) = −ω² r

Hence r̈ + ω²r = 0. This is of course the standard kinematics of uniform circular motion.
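Both results can be confirmed numerically. A Python sketch with sample values r = 2 and ω = 3 (arbitrary assumptions):

```python
import math

r0, w = 2.0, 3.0   # sample radius and angular frequency (assumptions)

def pos(t):
    return (r0 * math.cos(w * t), r0 * math.sin(w * t), 0.0)

def vel(t):   # analytic time derivative of pos
    return (-r0 * w * math.sin(w * t), r0 * w * math.cos(w * t), 0.0)

def acc(t):   # analytic time derivative of vel
    return (-r0 * w**2 * math.cos(w * t), -r0 * w**2 * math.sin(w * t), 0.0)

t = 0.37
r, v, a = pos(t), vel(t), acc(t)

# (a) the z component of r x rdot equals r^2 w
cross_z = r[0] * v[1] - r[1] * v[0]
assert abs(cross_z - r0**2 * w) < 1e-9

# (b) rddot + w^2 r = 0
assert all(abs(a[i] + w**2 * r[i]) < 1e-9 for i in range(3))
print("circular-orbit identities verified")
```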

1.8.11 Verify the vector identity

∇ × (A × B) = (B · ∇)A − (A · ∇)B − B(∇ · A) + A(∇ · B)

This looks like a good time for the BAC–CAB rule. However, we have to be careful, since ∇ has both derivative and vector properties. As a derivative, it operates on both A and B. Therefore, by the product rule of differentiation, we can write

∇ × (A × B) = ∇_A × (A × B) + ∇_B × (A × B)

where the subscript indicates which vector the derivative is acting on. Now that we have specified exactly where the derivative goes, we can treat ∇ as a vector. Using the BAC–CAB rule (once for each term) gives

∇ × (A × B) = A(∇_A · B) − B(∇_A · A) + A(∇_B · B) − B(∇_B · A)    (2)

The first and last terms on the right hand side are ‘backwards’. However, we can turn them around. For example

A(∇_A · B) = A(B · ∇_A) = (B · ∇)A

With all the derivatives acting in the right place [after flipping the first and last terms in (2)], we find simply

∇ × (A × B) = (B · ∇)A − B(∇ · A) + A(∇ · B) − (A · ∇)B

which is what we set out to prove.

1.8.16 An electric dipole of moment p is located at the origin. The dipole creates an electric potential at r given by

ψ(r) = (p · r)/(4πε₀ r³)

Find the electric field, E = −∇ψ, at r.

We first use the quotient rule to write

E = −∇ψ = −(1/(4πε₀)) ∇[(p · r)/r³] = −(1/(4πε₀)) [r³ ∇(p · r) − (p · r) ∇(r³)] / r⁶

Applying the chain rule to the second term in the numerator, we obtain

E = −(1/(4πε₀)) [r³ ∇(p · r) − 3r²(p · r) ∇r] / r⁶

We now evaluate the two separate gradients

∇(p · r) = x̂_i (∂/∂x_i)(p_j x_j) = x̂_i p_j (∂x_j/∂x_i) = x̂_i p_j δ_ij = x̂_i p_i = p

and

∇r = x̂_i (∂/∂x_i) √(x_1² + x_2² + x_3²) = x̂_i x_i / √(x_1² + x_2² + x_3²) = x̂_i x_i / r = r/r = r̂

Hence

E = −(1/(4πε₀)) [r³ p − 3r²(p · r̂) r̂] / r⁶ = (1/(4πε₀)) [3(p · r̂)r̂ − p] / r³

Note that we have used the fact that p is a constant, although this was never stated in the problem.
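The closed form can be checked against a direct numerical gradient of ψ. A Python sketch in units where 1/(4πε₀) = 1, with p along ẑ (both are assumptions for the check):

```python
import math

p = (0.0, 0.0, 1.0)   # dipole moment along z (assumption); units with 1/(4 pi eps0) = 1

def psi(r):
    rm = math.sqrt(sum(x * x for x in r))
    return sum(pc * xc for pc, xc in zip(p, r)) / rm**3

def E_formula(r):
    rm = math.sqrt(sum(x * x for x in r))
    rhat = tuple(x / rm for x in r)
    p_dot_rhat = sum(pc * hc for pc, hc in zip(p, rhat))
    return tuple((3 * p_dot_rhat * rhat[i] - p[i]) / rm**3 for i in range(3))

def E_numeric(r, h=1e-6):
    # E = -grad(psi), by central differences
    out = []
    for i in range(3):
        rp, rm_ = list(r), list(r)
        rp[i] += h
        rm_[i] -= h
        out.append(-(psi(rp) - psi(rm_)) / (2 * h))
    return tuple(out)

r = (1.0, 0.5, 2.0)   # arbitrary field point (assumption)
Ef, En = E_formula(r), E_numeric(r)
assert all(abs(Ef[i] - En[i]) < 1e-6 for i in range(3))
print("E = -grad(psi) matches the closed form")
```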

1.9.12 Show that any solution of the equation

∇ × (∇ × A) − k²A = 0

automatically satisfies the vector Helmholtz equation

∇²A + k²A = 0

and the solenoidal condition

∇ · A = 0

We actually follow the hint and demonstrate the solenoidal condition first. Taking the divergence of the first equation, we find

∇ · [∇ × (∇ × A)] − k² ∇ · A = 0

However, the divergence of a curl vanishes identically. Hence the first term is automatically equal to zero, and we are left with −k² ∇ · A = 0 or (upon dividing by the constant −k²) ∇ · A = 0.

We now return to the first equation and simplify the double curl using the BAC– CAB rule (taking into account the fact that all derivatives must act on A)

∇ × (∇ × A) = ∇(∇ · A) − ∇²A    (3)

As a result, the first equation becomes

∇(∇ · A) − ∇²A − k²A = 0

However, we have shown above that ∇· A = 0 for this problem. Thus (3) reduces to

∇²A + k²A = 0

which is what we wanted to show.

1.10.4 Evaluate ∮ r · dr

We have evaluated this integral in class. For a line integral from point 1 to point 2, we have

∫₁² r · dr = ½ ∫₁² d(r²) = ½ (r₂² − r₁²)

However, for a closed path, point 1 and point 2 are the same. Thus the integral along a closed loop vanishes, ∮ r · dr = 0. Note that this vanishing of the line integral around a closed loop is the sign of a conservative force.

Alternatively, we can apply Stokes’ theorem

∮ r · dr = ∫_S (∇ × r) · dσ

It is easy to see that r is curl-free. Hence the surface integral on the right hand side vanishes.
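The closed-loop result can also be seen discretely: approximating the loop by a polygon and using midpoint values, each segment contributes r_mid · dr = ½(|r₁|² − |r₀|²), which telescopes to zero around any closed polygon. A Python sketch on an ellipse (an arbitrary sample loop):

```python
import math

# closed loop: an ellipse, discretized as a polygon
N = 1000
total = 0.0
for k in range(N):
    t0 = 2 * math.pi * k / N
    t1 = 2 * math.pi * (k + 1) / N
    r0 = (2 * math.cos(t0), math.sin(t0))
    r1 = (2 * math.cos(t1), math.sin(t1))
    rmid = ((r0[0] + r1[0]) / 2, (r0[1] + r1[1]) / 2)
    dr = (r1[0] - r0[0], r1[1] - r0[1])
    # r_mid . dr = (|r1|^2 - |r0|^2)/2, which telescopes around the loop
    total += rmid[0] * dr[0] + rmid[1] * dr[1]

print(abs(total) < 1e-9)   # True
```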

1.12.9 Prove that

∮ u∇v · dλ = −∮ v∇u · dλ

This is an application of Stokes’ theorem. Let us write

∮ (u∇v + v∇u) · dλ = ∫_S ∇ × (u∇v + v∇u) · dσ    (4)

We now expand the curl using

∇ × (u∇v) = (∇u) × (∇v) + u ∇ × ∇v = (∇u) × (∇v)

where we have also used the fact that the curl of a gradient vanishes. Returning to (4), this indicates that

∮ (u∇v + v∇u) · dλ = ∫_S [(∇u) × (∇v) + (∇v) × (∇u)] · dσ = 0

where the vanishing of the right hand side is guaranteed by the antisymmetry of the cross product, A × B = −B × A.

Chapter 2

2.4.8 Find the circular cylindrical components of the velocity and acceleration of a moving particle

We first explore the time derivatives of the cylindrical coordinate basis vectors. Since

ρ̂ = (cos ϕ, sin ϕ, 0),    ϕ̂ = (−sin ϕ, cos ϕ, 0),    ẑ = (0, 0, 1)

their derivatives are

∂ρ̂/∂ϕ = (−sin ϕ, cos ϕ, 0) = ϕ̂,    ∂ϕ̂/∂ϕ = (−cos ϕ, −sin ϕ, 0) = −ρ̂

Using the chain rule, this indicates that

dρ̂/dt = (∂ρ̂/∂ϕ) ϕ̇ = ϕ̇ ϕ̂,    dϕ̂/dt = (∂ϕ̂/∂ϕ) ϕ̇ = −ϕ̇ ρ̂    (5)

Now, we note that the position vector is given by

r = ρ ρ̂ + z ẑ

So all we have to do to find the velocity is to take a time derivative

v = ṙ = ρ̇ ρ̂ + ż ẑ + ρ (dρ̂/dt) + z (dẑ/dt) = ρ̇ ρ̂ + ż ẑ + ρ ϕ̇ ϕ̂

Note that we have used the expression for dρ̂/dt in (5). Taking one more time derivative yields the acceleration

a = v̇ = ρ̈ ρ̂ + z̈ ẑ + (ρ ϕ̈ + ρ̇ ϕ̇) ϕ̂ + ρ̇ (dρ̂/dt) + ż (dẑ/dt) + ρ ϕ̇ (dϕ̂/dt)
  = ρ̈ ρ̂ + z̈ ẑ + (ρ ϕ̈ + ρ̇ ϕ̇) ϕ̂ + ρ̇ ϕ̇ ϕ̂ − ρ ϕ̇² ρ̂
  = (ρ̈ − ρ ϕ̇²) ρ̂ + z̈ ẑ + (ρ ϕ̈ + 2 ρ̇ ϕ̇) ϕ̂
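The final expressions for v and a can be checked against numerical derivatives of the Cartesian position for a sample trajectory (the trajectory below is an arbitrary assumption):

```python
import math

# sample trajectory in cylindrical coordinates (assumptions)
rho = lambda t: 1.0 + 0.3 * t * t
phi = lambda t: 2.0 * t
z   = lambda t: t ** 3

def cart(t):
    return (rho(t) * math.cos(phi(t)), rho(t) * math.sin(phi(t)), z(t))

def deriv(f, t, h=1e-5):
    # central finite difference of a vector-valued function
    return tuple((a - b) / (2 * h) for a, b in zip(f(t + h), f(t - h)))

t = 0.8
v_num = deriv(cart, t)                    # numerical velocity
a_num = deriv(lambda s: deriv(cart, s), t)  # numerical acceleration

# analytic coordinate derivatives for this trajectory
rd, pd, zd = 0.6 * t, 2.0, 3 * t * t   # rho_dot, phi_dot, z_dot
rdd, pdd, zdd = 0.6, 0.0, 6 * t        # second derivatives

c, s_ = math.cos(phi(t)), math.sin(phi(t))
rho_hat, phi_hat, z_hat = (c, s_, 0.0), (-s_, c, 0.0), (0.0, 0.0, 1.0)

# v = rho_dot rho_hat + rho phi_dot phi_hat + z_dot z_hat
v = tuple(rd * rho_hat[i] + rho(t) * pd * phi_hat[i] + zd * z_hat[i]
          for i in range(3))
# a = (rho_ddot - rho phi_dot^2) rho_hat + (rho phi_ddot + 2 rho_dot phi_dot) phi_hat + z_ddot z_hat
a = tuple((rdd - rho(t) * pd**2) * rho_hat[i]
          + (rho(t) * pdd + 2 * rd * pd) * phi_hat[i]
          + zdd * z_hat[i] for i in range(3))

assert all(abs(v[i] - v_num[i]) < 1e-5 for i in range(3))
assert all(abs(a[i] - a_num[i]) < 1e-3 for i in range(3))
print("cylindrical velocity and acceleration verified")
```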

2.4.11 For the flow of an incompressible viscous fluid the Navier-Stokes equations lead to

− ∇ × ( v × ( ×