
PHYS 102, Fall 2016

Homework #2 Solutions
Bennett Marsh
September 29, 2016

Boas Chapter 3
p.130, #13.

(a) Let $f(x)$, $g(x)$ be twice-differentiable functions, $k$ be a real number, and $O = D^2 + 2D + 1$ be the operator in question. We check the conditions for linearity:
\begin{align*}
O(f + g) &= (f + g)'' + 2(f + g)' + (f + g) \\
&= f'' + g'' + 2f' + 2g' + f + g \\
&= (f'' + 2f' + f) + (g'' + 2g' + g) \\
&= O(f) + O(g) \\
O(kf) &= (kf)'' + 2(kf)' + (kf) \\
&= k(f'' + 2f' + f) \\
&= kO(f)
\end{align*}
Hence, $D^2 + 2D + 1$ is linear.
(b) This time letting $O = x^2 D^2 - 2xD + 7$, we check again:
\begin{align*}
O(f + g) &= x^2(f + g)'' - 2x(f + g)' + 7(f + g) \\
&= x^2 f'' + x^2 g'' - 2x f' - 2x g' + 7f + 7g \\
&= (x^2 f'' - 2x f' + 7f) + (x^2 g'' - 2x g' + 7g) \\
&= O(f) + O(g) \\
O(kf) &= x^2(kf)'' - 2x(kf)' + 7(kf) \\
&= k(x^2 f'' - 2x f' + 7f) \\
&= kO(f)
\end{align*}
Hence, $x^2 D^2 - 2xD + 7$ is linear.
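
As a quick sanity check of the two computations above, the same linearity conditions can be verified symbolically. The sketch below uses sympy with generic functions $f$, $g$ and a symbolic constant $k$; it is an optional verification, not part of the original solution.

```python
# Symbolic spot-check of the linearity of D^2 + 2D + 1 and x^2 D^2 - 2x D + 7.
import sympy as sp

x, k = sp.symbols('x k')
f = sp.Function('f')(x)
g = sp.Function('g')(x)

def O1(h):
    """Apply D^2 + 2D + 1 to h(x)."""
    return sp.diff(h, x, 2) + 2*sp.diff(h, x) + h

def O2(h):
    """Apply x^2 D^2 - 2x D + 7 to h(x)."""
    return x**2*sp.diff(h, x, 2) - 2*x*sp.diff(h, x) + 7*h

for O in (O1, O2):
    assert sp.simplify(O(f + g) - (O(f) + O(g))) == 0  # additivity
    assert sp.simplify(O(k*f) - k*O(f)) == 0            # homogeneity
print("Both operators pass the linearity checks.")
```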

p.130, #14. The operator in question is $O(f) = \max f$ (i.e. it returns the maximum of the function over the real line). We will show that this is not a linear operator by giving a counterexample to the property that, for any linear operator, $O(-f) = -O(f)$.
Consider the unit step function $\theta(x)$, with $\theta(x) = 1$ for $x \ge 0$ and $\theta(x) = 0$ for $x < 0$. Clearly, $\max\theta = 1$. But $\max(-\theta) = 0$, so
$$O(-\theta) = 0 \ne -1 = -O(\theta).$$
This proves that taking the maximum is not a linear operation.
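
The counterexample can also be seen numerically. The sketch below samples the step function on a finite grid with numpy; it only illustrates the failure of $O(-f) = -O(f)$ and is not a substitute for the argument above.

```python
# Numerical illustration: the "maximum over the real line" map is not linear.
import numpy as np

xs = np.linspace(-5, 5, 1001)
theta = np.where(xs >= 0, 1.0, 0.0)  # unit step function on a finite grid

O_theta = theta.max()           # max(theta) = 1
O_minus_theta = (-theta).max()  # max(-theta) = 0, not -1

print(O_theta, O_minus_theta)     # 1.0 0.0
assert O_minus_theta != -O_theta  # so O(-theta) != -O(theta)
```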
p.184, #7.

The set in question is a subset of the set of all polynomials of degree at most 7, which we know to be a vector space (see Example 1 in Boas). To check whether this smaller set is also a vector space, we need only check that it is closed under addition and scalar multiplication (it inherits the rest of the properties from the larger space).

First, we write down two general elements of the set:
\begin{align*}
f(x) &= a_0(1 + x^2 + x^4 + x^6) + a_1(x + x^3 + x^5 + x^7) \\
g(x) &= b_0(1 + x^2 + x^4 + x^6) + b_1(x + x^3 + x^5 + x^7)
\end{align*}
Their sum
$$(f + g)(x) = (a_0 + b_0)(1 + x^2 + x^4 + x^6) + (a_1 + b_1)(x + x^3 + x^5 + x^7)$$
is also in the set, so the set is closed under addition. The product of $f$ with a scalar $k$,
$$(kf)(x) = ka_0(1 + x^2 + x^4 + x^6) + ka_1(x + x^3 + x^5 + x^7),$$
is also in the set, so the set is closed under scalar multiplication.
This shows that the set is in fact a vector space. It is clear that the following two vectors span the space and are independent:
$$\{1 + x^2 + x^4 + x^6,\; x + x^3 + x^5 + x^7\}$$
Hence, this is a basis and the space is 2-dimensional.
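
Both the closure computation and the independence claim can be verified symbolically if desired. The sketch below uses sympy; the names $c_0$, $c_1$ are extra coefficients introduced here only for the independence check.

```python
# Sympy check of closure and independence for problem 7.
import sympy as sp

x, a0, a1, b0, b1, k = sp.symbols('x a0 a1 b0 b1 k')
e0 = 1 + x**2 + x**4 + x**6
e1 = x + x**3 + x**5 + x**7

f = a0*e0 + a1*e1
g = b0*e0 + b1*e1

# Closure: the sum and the scalar multiple are again combinations of e0, e1.
assert sp.expand((f + g) - ((a0 + b0)*e0 + (a1 + b1)*e1)) == 0
assert sp.expand(k*f - (k*a0*e0 + k*a1*e1)) == 0

# Independence: c0*e0 + c1*e1 = 0 for all x forces c0 = c1 = 0.
c0, c1 = sp.symbols('c0 c1')
print(sp.solve(sp.Poly(c0*e0 + c1*e1, x).coeffs(), [c0, c1]))  # {c0: 0, c1: 0}
```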
p.184, #8.

Similar to above, this is a subset of a larger vector space, so we need only check closure under addition and scalar multiplication. We first write down two general elements of the set:
\begin{align*}
f(x) &= a_0 + a_2 x^2 + a_4 x^4 + a_6 x^6 \\
g(x) &= b_0 + b_2 x^2 + b_4 x^4 + b_6 x^6
\end{align*}
Their sum
$$(f + g)(x) = (a_0 + b_0) + (a_2 + b_2)x^2 + (a_4 + b_4)x^4 + (a_6 + b_6)x^6$$
is also in the set, so the set is closed under addition. The product of $f$ with a scalar $k$,
$$(kf)(x) = ka_0 + ka_2 x^2 + ka_4 x^4 + ka_6 x^6,$$
is also in the set, so the set is closed under scalar multiplication.
This shows that the set is in fact a vector space. It is clear that the following four vectors span the space and are independent:
$$\{1,\; x^2,\; x^4,\; x^6\}$$
Hence, this is a basis and the space is 4-dimensional.
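
Closure can be checked exactly as in problem 7; the short sympy sketch below just confirms the independence of the four basis vectors (the coefficient names $c_0,\dots,c_3$ are introduced here for the check).

```python
# Sympy check that {1, x^2, x^4, x^6} is linearly independent.
import sympy as sp

x = sp.symbols('x')
c = sp.symbols('c0:4')
combo = sum(ci*bi for ci, bi in zip(c, [1, x**2, x**4, x**6]))

# A polynomial that vanishes identically has all coefficients zero.
print(sp.solve(sp.Poly(combo, x).coeffs(), c))  # {c0: 0, c1: 0, c2: 0, c3: 0}
```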

Shankar QM Chapter 1
1.1.1

(a) Assume that $|0\rangle$ and $|0'\rangle$ both have the properties of an additive identity. That is,
\begin{align*}
|V\rangle + |0\rangle &= |V\rangle \\
|W\rangle + |0'\rangle &= |W\rangle
\end{align*}
for any vectors $|V\rangle$, $|W\rangle$. If we can show that $|0\rangle = |0'\rangle$, then we will have proved that the additive identity is unique.

Since $|V\rangle$, $|W\rangle$ are arbitrary in the above equations, we are free to plug in $|V\rangle = |0'\rangle$ in the top equation, and $|W\rangle = |0\rangle$ in the bottom equation. This gives
\begin{align*}
|0'\rangle + |0\rangle &= |0'\rangle \\
|0\rangle + |0'\rangle &= |0\rangle.
\end{align*}
Now, since we are in a vector space, addition is commutative, so the left-hand sides are equal. This necessarily implies that the right-hand sides are equal, so $|0\rangle = |0'\rangle$. This proves that there is exactly one additive identity in any vector space.

Note that if the zero vector were not unique, it would not even make sense to talk about additive inverses. Does $|V\rangle + |-V\rangle$ equal $|0\rangle$, or $|0'\rangle$? This means that the proof for this part shouldn't make use of additive inverses. We can't set $|W\rangle = -|V\rangle$ and subtract the equations, as some people suggested. We also haven't yet proved that $|-V\rangle = -|V\rangle$, or even that $|V\rangle - |V\rangle = |0\rangle$ (see (b) and (c) below).
(b) Using the properties of vector spaces, we have
\begin{align*}
|0\rangle &= |V\rangle + |-V\rangle && \text{(existence of additive inverse)} \\
&= (0 + 1)|V\rangle + |-V\rangle && (1 = 0 + 1) \\
&= 0|V\rangle + (|V\rangle + |-V\rangle) && \text{(distributive property + associative property)} \\
&= 0|V\rangle + |0\rangle && \text{(property of additive inverse)} \\
&= 0|V\rangle && \text{(property of additive identity)}
\end{align*}

(c) We have
\begin{align*}
|V\rangle + (-|V\rangle) &= (1 - 1)|V\rangle && \text{(distributive)} \\
&= 0|V\rangle \\
&= |0\rangle && \text{(result of part (b))}
\end{align*}
Now add $|-V\rangle$ to each side:
\begin{align*}
|-V\rangle + |V\rangle + (-|V\rangle) &= |-V\rangle + |0\rangle \\
|0\rangle + (-|V\rangle) &= |0\rangle + |-V\rangle && \text{(definition of additive inverse)} \\
-|V\rangle &= |-V\rangle && \text{(definition of additive identity)}
\end{align*}

(d) Suppose that $|W\rangle$ is also an additive inverse to $|V\rangle$. Since $|0\rangle$ is unique,
\begin{align*}
|V\rangle + |W\rangle &= |0\rangle = |V\rangle + |-V\rangle \\
|-V\rangle + |V\rangle + |W\rangle &= |-V\rangle + |V\rangle + |-V\rangle && \text{(add $|-V\rangle$ to each side)} \\
|0\rangle + |W\rangle &= |0\rangle + |-V\rangle && \text{(def. of additive inverse)} \\
|W\rangle &= |-V\rangle && \text{(def. of additive identity)}
\end{align*}
This proves that the additive inverse is unique.

Alternatively, we could have just used part (c) to say that $|W\rangle = -|V\rangle = |-V\rangle$.
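
The arguments above are purely axiomatic, but as an optional concrete sanity check the same facts can be watched in the familiar vector space $\mathbb{C}^3$ (my choice of example, not part of the problem):

```python
# Concrete illustration of parts (b) and (c) in C^3 with numpy.
import numpy as np

V = np.array([1 + 2j, -3.0, 0.5j])
zero = np.zeros(3, dtype=complex)

assert np.allclose(V + zero, V)          # |0> is an additive identity
assert np.allclose(0 * V, zero)          # part (b): 0|V> = |0>
assert np.allclose(V + (-1) * V, zero)   # part (c): -|V> is the additive inverse of |V>
print("Concrete checks in C^3 pass.")
```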

1.3.4 Starting from the definition of the norm,
\begin{align*}
\|V + W\|^2 &\equiv \langle V + W|V + W\rangle \\
&= (\langle V| + \langle W|)(|V\rangle + |W\rangle) \\
&= \langle V|V\rangle + \langle V|W\rangle + \langle W|V\rangle + \langle W|W\rangle \\
&= \|V\|^2 + \|W\|^2 + \langle V|W\rangle + \langle V|W\rangle^* && \text{(skew-symmetry)} \\
&= \|V\|^2 + \|W\|^2 + 2\,\mathrm{Re}\,\langle V|W\rangle && (z + z^* = 2\,\mathrm{Re}\,z) \\
&\le \|V\|^2 + \|W\|^2 + 2|\langle V|W\rangle| && (\mathrm{Re}\,z \le |z| \text{ for any complex number } z) \\
&\le \|V\|^2 + \|W\|^2 + 2\|V\|\|W\| && \text{(Cauchy-Schwarz inequality)} \\
&= (\|V\| + \|W\|)^2
\end{align*}
Taking the square root of each side, we arrive at the triangle inequality
$$\|V + W\| \le \|V\| + \|W\|.$$
The Cauchy-Schwarz inequality requires $|W\rangle = \lambda|V\rangle$, where $\lambda$ is any complex number, to satisfy the equality. The other inequality we made use of, $\mathrm{Re}\,\langle V|W\rangle \le |\langle V|W\rangle|$, requires $\langle V|W\rangle$ to be a positive real number to satisfy the equality. Putting these conditions together, we see that $\langle V|W\rangle = \lambda\|V\|^2$ must be real and positive, so $\lambda$ must be real and positive (unless of course $|V\rangle = |0\rangle$).
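
As an optional numerical illustration, the sketch below tests the triangle inequality on random complex vectors with numpy and confirms that equality holds when $|W\rangle = \lambda|V\rangle$ with $\lambda$ real and positive (the dimension, sample count, and $\lambda = 2.5$ are arbitrary choices):

```python
# Numerical check of the triangle inequality and its equality condition.
import numpy as np

rng = np.random.default_rng(0)
for _ in range(1000):
    V = rng.normal(size=4) + 1j*rng.normal(size=4)
    W = rng.normal(size=4) + 1j*rng.normal(size=4)
    assert np.linalg.norm(V + W) <= np.linalg.norm(V) + np.linalg.norm(W) + 1e-12

# Equality when W = lambda * V with lambda real and positive:
V = rng.normal(size=4) + 1j*rng.normal(size=4)
W = 2.5 * V
print(np.isclose(np.linalg.norm(V + W), np.linalg.norm(V) + np.linalg.norm(W)))  # True
```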

Additional Problems
A1.

(a) We check the 3 axioms:
\begin{align*}
\langle f|g\rangle &= \int_{-1}^{1} f^*(x)\,g(x)\,dx = \left(\int_{-1}^{1} f(x)\,g^*(x)\,dx\right)^* = \langle g|f\rangle^* \\
\langle f|f\rangle &= \int_{-1}^{1} f^*(x)\,f(x)\,dx = \int_{-1}^{1} |f(x)|^2\,dx \ge 0 \qquad (= 0 \text{ iff } f \equiv 0) \\
\langle f|ag + bh\rangle &= \int_{-1}^{1} f^*(x)\,(ag + bh)(x)\,dx = a\int_{-1}^{1} f^*(x)\,g(x)\,dx + b\int_{-1}^{1} f^*(x)\,h(x)\,dx = a\langle f|g\rangle + b\langle f|h\rangle
\end{align*}
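
These axioms can also be spot-checked symbolically for particular functions. The sketch below uses sympy with the sample choices $f = 1 + x$, $g = ix^2$, $h = x$ (my own examples, not taken from the problem):

```python
# Sympy spot-check of the three inner-product axioms on [-1, 1].
import sympy as sp

x = sp.symbols('x', real=True)
a, b = sp.symbols('a b')

def inner(u, v):
    """<u|v> = integral_{-1}^{1} conj(u(x)) v(x) dx."""
    return sp.integrate(sp.conjugate(u)*v, (x, -1, 1))

f, g, h = 1 + x, sp.I*x**2, x

assert sp.simplify(inner(f, g) - sp.conjugate(inner(g, f))) == 0                # skew-symmetry
assert inner(f, f) >= 0                                                         # positivity
assert sp.simplify(inner(f, a*g + b*h) - (a*inner(f, g) + b*inner(f, h))) == 0  # linearity
print("Axioms hold for these sample functions.")
```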


(b) We compute
\begin{align*}
\|f\|^2 &\equiv \langle f|f\rangle = \int_{-1}^{1} 1\,dx = 2 \\
\|g\|^2 &\equiv \langle g|g\rangle = \int_{-1}^{1} 4x^4\,dx = \frac{8}{5} \\
\|f + g\|^2 &\equiv \langle f + g|f + g\rangle = \int_{-1}^{1} (1 + 2ix^2)(1 - 2ix^2)\,dx = \int_{-1}^{1} (1 + 4x^4)\,dx = \frac{18}{5} \\
\langle f|g\rangle &= \int_{-1}^{1} (1)(2ix^2)\,dx = \frac{4}{3}i
\end{align*}
We check Cauchy-Schwarz:
$$|\langle f|g\rangle| = \frac{4}{3} \approx 1.33 \;\le\; 1.79 \approx \sqrt{2\cdot\tfrac{8}{5}} = \|f\|\,\|g\|$$
and triangle:
$$\|f + g\| = \sqrt{\tfrac{18}{5}} \approx 1.90 \;\le\; 2.68 \approx \sqrt{2} + \sqrt{\tfrac{8}{5}} = \|f\| + \|g\|.$$
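
The integrals and the two inequality checks can be reproduced with sympy, reading $f(x) = 1$ and $g(x) = 2ix^2$ off the integrands above:

```python
# Sympy verification of the part (b) computations.
import sympy as sp

x = sp.symbols('x', real=True)
f, g = sp.Integer(1), 2*sp.I*x**2

def inner(u, v):
    return sp.integrate(sp.conjugate(u)*v, (x, -1, 1))

norm = lambda u: sp.sqrt(inner(u, u))

print(inner(f, f), inner(g, g), inner(f + g, f + g), inner(f, g))  # 2 8/5 18/5 4*I/3

# Cauchy-Schwarz and triangle inequalities:
print(float(sp.Abs(inner(f, g))), float(norm(f)*norm(g)))  # 1.333... <= 1.788...
print(float(norm(f + g)), float(norm(f) + norm(g)))        # 1.897... <= 2.679...
```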
A2.

(a) First, we can square the relation $2iC = [A, B] = AB - BA$:
\begin{align*}
-4C^2 &= (AB - BA)(AB - BA) \\
&= ABAB - ABBA - BAAB + BABA
\end{align*}
We can rearrange each term, but each time we swap an $A$ and a $B$ we get an additional minus sign, since $AB = -BA$. So
\begin{align*}
-4C^2 &= ABAB - ABBA - BAAB + BABA \\
&= -AABB - AABB - AABB - AABB \\
&= -4A^2B^2 \\
&= -4I
\end{align*}
Dividing by $-4$, we get the desired $C^2 = I$.
Similarly,
\begin{align*}
2i[B, C] &= [B, AB - BA] \\
&= B(AB - BA) - (AB - BA)B \\
&= BAB - BBA - ABB + BAB \\
&= -B^2A - B^2A - B^2A - B^2A \\
&= -4A.
\end{align*}
Dividing by $2i$, we get $[B, C] = 2iA$. In an identical manner we also have $[C, A] = 2iB$.
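
These identities can be checked in a concrete realization. The Pauli matrices $\sigma_x$, $\sigma_y$, $\sigma_z$ satisfy the assumed algebra ($A^2 = B^2 = I$, $AB = -BA$, $[A, B] = 2iC$), so they make a convenient test case; this is just one possible realization chosen here for verification, not necessarily the setting of the problem.

```python
# Check the commutator identities with the Pauli matrices as A, B, C.
import numpy as np

A = np.array([[0, 1], [1, 0]], dtype=complex)      # sigma_x
B = np.array([[0, -1j], [1j, 0]], dtype=complex)   # sigma_y
C = np.array([[1, 0], [0, -1]], dtype=complex)     # sigma_z
I = np.eye(2, dtype=complex)

comm = lambda X, Y: X @ Y - Y @ X

assert np.allclose(comm(A, B), 2j*C)  # the defining relation 2iC = [A, B]
assert np.allclose(C @ C, I)          # C^2 = I
assert np.allclose(comm(B, C), 2j*A)  # [B, C] = 2iA
assert np.allclose(comm(C, A), 2j*B)  # [C, A] = 2iB
print("Pauli-matrix realization checks pass.")
```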
(b) Using the results from above,
\begin{align*}
[[[A, B], [B, C]], [A, B]] &= [[2iC, 2iA], 2iC] \\
&= -8i[[C, A], C] \\
&= -8i[2iB, C] \\
&= 16[B, C] \\
&= 32iA.
\end{align*}
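
The nested commutator can be verified in the same Pauli-matrix realization (again, just one concrete choice used for checking):

```python
# Verify [[[A,B],[B,C]],[A,B]] = 32iA with the Pauli matrices.
import numpy as np

A = np.array([[0, 1], [1, 0]], dtype=complex)
B = np.array([[0, -1j], [1j, 0]], dtype=complex)
C = np.array([[1, 0], [0, -1]], dtype=complex)
comm = lambda X, Y: X @ Y - Y @ X

lhs = comm(comm(comm(A, B), comm(B, C)), comm(A, B))
assert np.allclose(lhs, 32j*A)
print("Nested commutator equals 32iA.")
```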
