
EPGY Summer Institute

Quantum Mechanics: 2009


Quantum Mechanics: Problems Lecture 4
SOLUTIONS

Physics Problems

4.1: Polarization of EM waves


The following figure shows an electromagnetic wave (E-field only) traveling in the z direction that is linearly polarized in a
direction that is at an angle θ to the x-axis.
[Figure: two views of the wave. Left: the x-y-z view with $\vec{E}_0$ at angle $\theta$ to the x axis and the wave traveling in the z direction. Right: the same wave viewed head-on, traveling out of the page.]
Consider an electromagnetic wave that is traveling in the z direction, so that the polarization (direction of $\vec{E}_0$) is in the x-y plane. For light that is linearly polarized in the x direction we can write the electric component of the EM wave as,
$$\vec{E} = \hat{i}\,E_0 \cos(kz - \omega t).$$

a. Write the electric component of an EM wave that is polarized $45^\circ$ to the x axis (see figure above).

$$\vec{E} = (\hat{i} + \hat{j})\,E_0' \cos(kz - \omega t) \quad\text{or}\quad \frac{\hat{i} + \hat{j}}{\sqrt{2}}\,E_0 \cos(kz - \omega t)$$

b. Write the electric component of an EM wave for which the polarization rotates about the x-y plane in a circular manner, the direction being found by the right-hand rule (right thumb in the direction of travel of the EM wave, fingers curling in the direction of motion of $\vec{E}$). It may help to draw it out first and examine the components as they change. This type of light is called right circularly polarized. [Hint: A first thought might be to write the amplitude $E_0$ as a time-dependent quantity. This does not work, as the result no longer satisfies Maxwell's wave equation. Also, consider the time development of $\vec{E}$ at the location z = 0; this will simplify the analysis (e.g. the linearly polarized solution is then $E_0\cos(-\omega t)$, and you can easily extend it to all z by reinserting the kz term).] To get started, fill in the diagrams below, indicating the direction of the vector $\vec{E}$ at the position z = 0 over one period of time; take $\vec{E}$ to point in the x direction at time t = 0.
Solution:
At some time, say t = 0, the polarization will point in the $\hat{i}$ direction. One quarter of the period later it will point in the $-\hat{j}$ direction. Another quarter gives $-\hat{i}$, yet another quarter gives $\hat{j}$, and after a full period it must return to its starting point. If you draw this out, the x component is $90^\circ$ (or $\pi/2$) ahead of the y component. That is, at time t = 0, the x component is maximum, which leads to a cosine, while the y component is 0, which implies it is a sine function. Notice that the y component always "chases" the x component around (it is always $\pi/2$ behind).
Note that this view looks at the wave coming at you; thus the vector rotates in the clockwise direction from this viewpoint. This stems from the fact that the sine component at z = 0 is $\sin(-\omega t) = -\sin\omega t$.
Thus, we can write,
$$\vec{E}\big|_{z=0} = \hat{i}\,E_0\cos(-\omega t) + \hat{j}\,E_0\sin(-\omega t) = \hat{i}\,E_0\cos(-\omega t) + \hat{j}\,E_0\cos\!\left(-\omega t - \frac{\pi}{2}\right)$$
$$\vec{E} = \hat{i}\,E_0\cos(kz - \omega t) + \hat{j}\,E_0\sin(kz - \omega t) = \hat{i}\,E_0\cos(kz - \omega t) + \hat{j}\,E_0\cos\!\left(kz - \omega t - \frac{\pi}{2}\right)$$
[Diagram: four x-y snapshots, viewed with the wave traveling out of the page, at t = 0, T/4, T/2, 3T/4; $\vec{E}$ points along $\hat{i}$, then $-\hat{j}$, then $-\hat{i}$, then $\hat{j}$.]
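As a quick numerical sanity check of the quarter-period snapshots (a minimal sketch in Python, assuming numpy, with $E_0 = 1$ and the period scaled to T = 1):

```python
import numpy as np

# Right circularly polarized light at z = 0 with E0 = 1:
# E_x = cos(-w t), E_y = sin(-w t). Sample one period at quarter steps.
omega = 2 * np.pi  # period T = 1
for t in (0.0, 0.25, 0.5, 0.75):
    Ex = np.cos(-omega * t)
    Ey = np.sin(-omega * t)
    print(f"t = {t:.2f} T:  E = ({Ex:+.0f}, {Ey:+.0f})")

# Prints (+1, 0), (0, -1), (-1, 0), (0, +1): the vector steps through
# i, -j, -i, j, i.e. clockwise as seen with the wave coming toward you.
```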

c. Write the corresponding expression for left circularly polarized light, which rotates in the opposite direction.

$$\vec{E} = \hat{i}\,E_0\cos(kz - \omega t) - \hat{j}\,E_0\sin(kz - \omega t) = \hat{i}\,E_0\cos(kz - \omega t) - \hat{j}\,E_0\cos\!\left(kz - \omega t - \frac{\pi}{2}\right)$$

This problem will be continued on the next assignment (expressing the result in terms of complex exponentials). We will be discussing circularly polarized light (although from a quantum point of view) next week.

Problem 4.2: Independence, correlation and covariance.


a) For any number of random variables, show that if they are independent then the covariance and correlation of any pair must vanish. You may concentrate on two-state random variables (with +1, −1 outcomes) for simplicity.
Independence requires that the joint probability factorizes, i.e. $p(X, Y, Z) = p(X)p(Y)p(Z)$. The second moment is found,
$$\langle XY\rangle = \sum_i\sum_j x_i y_j\, p(X = x_i, Y = y_j) = \sum_i\sum_j x_i y_j\, p(X = x_i)\,p(Y = y_j) = \langle X\rangle\langle Y\rangle$$
$$\mathrm{cov}(XY) = \langle XY\rangle - \langle X\rangle\langle Y\rangle = \langle X\rangle\langle Y\rangle - \langle X\rangle\langle Y\rangle = 0$$
$$\mathrm{corr}(XY) = \frac{\mathrm{cov}(XY)}{\sqrt{\mathrm{var}(X)\,\mathrm{var}(Y)}} = 0 \qquad (1)$$
Independence $\Longrightarrow$ cov = corr = 0.
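A quick numerical illustration of this result (a sketch, not part of the original solution; numpy assumed): sample three independent ±1 variables and check that each pairwise sample covariance vanishes up to statistical noise.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Three independent two-state (+1/-1) random variables
X, Y, Z = (rng.choice([-1, 1], size=n) for _ in range(3))

# Sample covariances <AB> - <A><B>: vanish up to O(1/sqrt(n)) noise
for a, b, name in [(X, Y, "XY"), (X, Z, "XZ"), (Y, Z, "YZ")]:
    print(name, (a * b).mean() - a.mean() * b.mean())  # each ~ 0 (order 1e-3)
```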
b) Does the converse hold? That is, if a number of random variables have vanishing covariance and correlation, are they necessarily independent? Demonstrate why or why not with any choice of random variables; three ±1 random variables X, Y, Z can be used to demonstrate this easily.
Independence $\overset{?}{\Longleftarrow}$ cov = corr = 0

There is more than one solution to this problem. To approach it, simplify the problem by restricting the random variables. Here we will choose 3 random variables X, Y, Z.
1. All three have outcomes that are either +1 or −1.
2. The mean, or expectation, for each is zero, i.e. $\langle X\rangle = \langle Y\rangle = \langle Z\rangle = 0$.
3. Consider a subset of all 8 outcomes and take them to have equal probability (try 4, where each has probability $\frac{1}{4}$).
4. Examine the covariance instead of the correlation since it is easier to work with. If cov = 0 then corr = 0.
5. Remember that the covariance (and correlation) relates only 2 random variables. That means there are 3 covariances we want to vanish: cov(XY), cov(XZ), cov(YZ). We will first just try to find a case where the first one vanishes.
6. We will be concerned with the marginal probabilities. That is, we will be concerned with probabilities for only two of the three variables at a time. Thus we must sum over the third variable not under consideration, e.g. $P(X = +1, Y = -1) = P(X = +1, Y = -1, Z = +1) + P(X = +1, Y = -1, Z = -1)$.
Now we can write the general form for the covariance.
The following notation will be used: $p_{+-+} = P(X = +1 \ \&\ Y = -1 \ \&\ Z = +1)$. The goal is to find a joint probability distribution (a set of $p_{\cdots}$'s) that cannot be factored into a form $P(X)P(Y)P(Z)$ for every outcome.
The covariance, which we want to vanish, is then,
$$\begin{aligned}
\mathrm{cov}(XY) = 0 &= (+1 \times +1)(p_{+++} + p_{++-}) + (-1 \times -1)(p_{--+} + p_{---})\\
&\quad + (+1 \times -1)(p_{+-+} + p_{+--}) + (-1 \times +1)(p_{-++} + p_{-+-})\\
&= \left[p_{+++} + p_{++-} + p_{--+} + p_{---}\right] - \left[p_{+-+} + p_{+--} + p_{-++} + p_{-+-}\right]
\end{aligned}$$

Written this way we see that we just need to make the two terms in brackets equal. Thus, we will choose 2 probabilities from each bracket and have the rest vanish. We want to choose probabilities that cannot be factored. It is actually quite easy to find a set of 4 if you assume that the distribution does factor and then examine the vanishing and non-vanishing terms. E.g. if you assign probability to the two outcomes where the variables are all +1 and all −1, then, if the distribution factors, none of the individual probabilities can vanish. I.e.,
$$\text{If } p_{+++} = \tfrac{1}{4} \ \&\ p_{---} = \tfrac{1}{4}, \text{ then stat. indep.} \Longrightarrow p_+(X)p_+(Y)p_+(Z) = p_-(X)p_-(Y)p_-(Z) = \tfrac{1}{4} \Longrightarrow \text{all } p_+ \ \&\ p_- \neq 0$$
Now we just have to set 4 of the probabilities to zero to contradict this previous statement and obtain a statistically dependent joint probability distribution. It is easy to choose probabilities that make this covariance vanish. E.g.
$$p_{+++} = p_{---} = p_{-+-} = p_{-++} = \tfrac{1}{4}, \quad \text{all other probabilities vanish} \qquad (2)$$
The only problem with this solution is that the other two covariances do not vanish. For example, reading off the non-vanishing $p$'s of (2),
$$\mathrm{cov}(XZ) = p_{+++} + p_{---} + p_{-+-} - p_{-++} = \tfrac{1}{4} + \tfrac{1}{4} + \tfrac{1}{4} - \tfrac{1}{4} = \tfrac{1}{2} \neq 0.$$
It takes just a little exploring and a refined argument of statistical dependence to come up with a set for which all three vanish. Thus consider,
$$p_{+++} = p_{--+} = p_{+--} = p_{-+-} = \tfrac{1}{4}, \quad \text{all other probabilities vanish} \qquad (3)$$
To check statistical dependence, note that if (3) were to factor, then a similar argument shows that all $p_+ \neq 0$; the second term says that $p_-(X), p_-(Y) \neq 0$; and the third term implies that $p_-(Z) \neq 0$. Thus, as before, every single-variable probability would have to be non-zero, which would make all eight joint probabilities non-zero if the distribution factored; however, four of them vanish, so (3) is not statistically independent.
You can also quickly check that the other two covariances vanish.

$$\mathrm{cov}(XZ) = +p_{+++} - p_{--+} - p_{+--} + p_{-+-} = 0$$
$$\mathrm{cov}(YZ) = +p_{+++} - p_{--+} + p_{+--} - p_{-+-} = 0$$

Though all individual expectation values, covariances, and correlations vanish, it turns out that the third moment does not vanish:
$$\langle XYZ\rangle = (+1)(+1)(+1)\,p_{+++} + (-1)(+1)(-1)\,p_{-+-} + (+1)(-1)(-1)\,p_{+--} + (-1)(-1)(+1)\,p_{--+} = 1 \qquad (4)$$
This indicates that there is still some dependence among these random variables. (Note, a proper definition of the third moment would include other terms involving the expectation values of the individual random variables; however, these all vanish here, so only the bare three-variable product remains.)
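The whole construction is easy to verify exactly by enumerating the eight outcomes; a minimal Python sketch, using distribution (3) from above:

```python
from itertools import product

# Joint distribution (3): p_{+++} = p_{--+} = p_{+--} = p_{-+-} = 1/4
p = {s: 0.0 for s in product([+1, -1], repeat=3)}
for s in [(+1, +1, +1), (-1, -1, +1), (+1, -1, -1), (-1, +1, -1)]:
    p[s] = 0.25

# Expectation of an arbitrary function of (X, Y, Z) under p
ev = lambda f: sum(f(x, y, z) * q for (x, y, z), q in p.items())

print(ev(lambda x, y, z: x * y))      # 0 -> cov(XY) = 0 (the means vanish)
print(ev(lambda x, y, z: x * z))      # 0 -> cov(XZ) = 0
print(ev(lambda x, y, z: y * z))      # 0 -> cov(YZ) = 0
print(ev(lambda x, y, z: x * y * z))  # 1 -> <XYZ> = 1: dependence remains
```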

Problem 4.3: Three spin $\frac{1}{2}$ objects
Considering the case of three two-state variables X, Y, and Z (with outcomes ±1; in physics parlance they are called spin-$\frac{1}{2}$ objects), determine whether a joint probability distribution exists for each of the following two cases. Give an explicit state (list out the jpd) if one exists, and state clearly why if one does not exist.

System 1) corr(XY ) = corr(XZ) = corr(Y Z) = +1.


System 2) corr(XY ) = corr(XZ) = +1, corr(Y Z) = −1.
You can do this problem by inspection, that is, write down the jpd that must satisfy the three conditions. It is highly recommended to use your own two-state variables, coins, to see what outcomes satisfy the conditions above for the two systems. If you can make a brief, cogent argument, do so.
By just using three coins and calling heads = +1 and tails = −1, you can quickly see that the first system is easy to satisfy (either all three are heads or all three are tails; e.g. the jpd $p_{+++} = p_{---} = \frac{1}{2}$, all others zero).
For the second system there is no way to satisfy the three correlations. Since corr(XY) = +1 forces Y = X on every outcome with non-zero probability, and corr(XZ) = +1 likewise forces Z = X, we must have Y = Z always, giving corr(YZ) = +1. This contradicts the requirement corr(YZ) = −1, so no joint probability distribution exists.
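The impossibility argument can also be checked by brute force; a short sketch enumerating the eight sign patterns:

```python
from itertools import product

# Outcomes compatible with System 2's first two conditions: for zero-mean
# +/-1 variables, corr = +1 forces XY = +1 and XZ = +1 outcome by outcome.
compatible = [(x, y, z) for x, y, z in product([+1, -1], repeat=3)
              if x * y == 1 and x * z == 1]
print(compatible)                         # [(1, 1, 1), (-1, -1, -1)]
print({y * z for _, y, z in compatible})  # {1}: YZ is forced to +1
# No mixture of these outcomes can yield corr(YZ) = -1, so no jpd exists.
```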
Problem 4.4: Practice with Vectors
A vector $\vec{A}$ has the following components in the xyz basis (with unit vectors $\hat{x}$, $\hat{y}$, and $\hat{z}$).
$$\vec{A} = (3, 4, 5) = 3\hat{x} + 4\hat{y} + 5\hat{z}$$
a. What is the length of this vector? $\vec{A}\cdot\vec{A} = 9 + 16 + 25 = 50$. This is the squared length. The actual length is $|\vec{A}| = \sqrt{50} = 5\sqrt{2}$.

Consider another basis, the $x'y'z'$ basis, where $z' = z$ and the axes are rotated counterclockwise about the z axis by $45^\circ$.
b. What are the components of $\vec{A}$ in this basis? What is its length?
A second vector $\vec{B}$ is defined by its components in the $S'$ basis, i.e. $\vec{B} = \hat{x}' + 2\hat{y}' + 3\hat{z}'$.
c. What is the product $\vec{A}\cdot\vec{B}$?
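Parts b and c reduce to the standard (passive) rotation of components; a numerical sketch, assuming numpy and the usual convention that a counterclockwise rotation of the axes by $\theta$ maps components via $A'_x = A_x\cos\theta + A_y\sin\theta$, $A'_y = -A_x\sin\theta + A_y\cos\theta$:

```python
import numpy as np

theta = np.pi / 4  # axes rotated 45 degrees counterclockwise about z

# Passive rotation: old components -> components in the primed basis
R = np.array([[ np.cos(theta), np.sin(theta), 0.0],
              [-np.sin(theta), np.cos(theta), 0.0],
              [ 0.0,           0.0,           1.0]])

A = np.array([3.0, 4.0, 5.0])        # A in the xyz basis
A_prime = R @ A                      # A in the x'y'z' basis
B_prime = np.array([1.0, 2.0, 3.0])  # B is given in the primed basis

print(A_prime)                  # [4.9497, 0.7071, 5.0] = (7/sqrt2, 1/sqrt2, 5)
print(np.linalg.norm(A_prime))  # 7.0711 = sqrt(50): length is basis-independent
print(A_prime @ B_prime)        # ~ 21.364: A.B evaluated in the common basis
```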

Problem 4.5: Orthonormal vectors


We define the term orthonormal for pairs of vectors that are orthogonal (perpendicular) and normalized (unit length). We begin with the orthonormal set of unit vectors in flat, 3-dimensional space: $\hat{i}$, $\hat{j}$, and $\hat{k}$. Now consider the following set of vectors:
a. $\vec{v}_a = \frac{1}{\sqrt{3}}(-\hat{i} + \hat{j})$

b. $\vec{v}_b = \frac{1}{2}(\hat{i} - \hat{j} + 2\hat{k})$

c. $\vec{v}_c = \frac{1}{\sqrt{3}}(\hat{i} - \hat{j} - \hat{k})$

d. $\vec{v}_d = \frac{1}{4}(\hat{i} + \hat{j} + 2\hat{k})$

e. $\vec{v}_e = \frac{1}{\sqrt{2}}(\hat{i} + \hat{j})$

f. $\vec{v}_f = \frac{1}{\sqrt{3}}(\hat{i} + \hat{j} + \hat{k})$

g. $\vec{v}_g = \frac{1}{2}(\hat{i} - \hat{j} - 2\hat{k})$

h. $\vec{v}_h = \frac{1}{4}(\hat{i} + \hat{j} - 2\hat{k})$

Of the previous set of vectors, can you find a set of three that form an orthonormal basis? That is, three unit vectors that are mutually perpendicular? If we have such a set of orthonormal vectors which spans the space (we need a number of vectors equal to the dimension of the space), then any vector can be expressed as a linear combination of them.
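A brute-force search makes the exercise mechanical; a sketch (numpy assumed, coefficients transcribed exactly as listed above) that prints any mutually orthonormal triple it finds:

```python
import itertools
import numpy as np

# The eight vectors, with coefficients as listed above
vecs = {
    "a": np.array([-1,  1,  0]) / np.sqrt(3),
    "b": np.array([ 1, -1,  2]) / 2,
    "c": np.array([ 1, -1, -1]) / np.sqrt(3),
    "d": np.array([ 1,  1,  2]) / 4,
    "e": np.array([ 1,  1,  0]) / np.sqrt(2),
    "f": np.array([ 1,  1,  1]) / np.sqrt(3),
    "g": np.array([ 1, -1, -2]) / 2,
    "h": np.array([ 1,  1, -2]) / 4,
}

# An orthonormal triple: each vector has unit length and all pairwise
# dot products vanish.
for names in itertools.combinations(vecs, 3):
    triple = [vecs[n] for n in names]
    unit = all(np.isclose(v @ v, 1.0) for v in triple)
    orthogonal = all(np.isclose(u @ v, 0.0)
                     for u, v in itertools.combinations(triple, 2))
    if unit and orthogonal:
        print("orthonormal basis:", names)
```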