
Music and Sound 12/29/2012 6:35:00 AM

Sound waves originate from vibrations and travel through a medium


Longitudinal – series of compressions and rarefactions – high/low
pressure
Distance between successive compressions = wavelength
Frequency = vibrations/second
Time period = 1 / frequency
c = f * lambda (see the sketch below)
Higher the number of vibrations the higher the frequency
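A minimal sketch of c = f * lambda and the reciprocal relation between period and frequency, assuming a speed of sound in air of about 340 m/s (a typical value, not quoted above):

```python
# Sketch of c = f * lambda, assuming a speed of sound in air of ~340 m/s.
SPEED_OF_SOUND = 340.0  # m/s (approximate, varies with temperature)

def wavelength(frequency_hz: float, speed: float = SPEED_OF_SOUND) -> float:
    """Return the wavelength in metres for a wave of the given frequency."""
    return speed / frequency_hz

def period(frequency_hz: float) -> float:
    """The time period is the reciprocal of the frequency."""
    return 1.0 / frequency_hz

# Concert A (440 Hz): wavelength ~0.77 m, period ~2.3 ms.
print(wavelength(440.0), period(440.0))
```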

Human range of hearing – 20 to 20000 Hz


Below 20Hz – infrasound
Above 20kHz – ultrasound
Bats hear up to 110kHz
Humpback whales produce songs – 40 to 5000Hz – territorial and mating calls
Frequency can be measured with an oscilloscope after being converted into an electrical signal by a transducer
Can measure period from the time base setting (a circuit which controls how long it takes for the trace to cross the screen horizontally, in s/cm)
Tip: find mean to reduce uncertainty = more accuracy
Pitch = fundamental frequency + harmonics
Can detect same pitch from different instruments
Some pitches together create pleasant sensation = consonant/harmonic =
musical intervals
Frequency ratio 2:1 = octave, 3:2 = P5th, 4:3 = P4th, 5:4 = major third
The unique character of a sound is quality/timbre
Two instruments produce different wave forms
Timbre= harmonics, attack, decay, vibrato, tremolo
Harmonics are the included multiples of the fundamental frequency
Attack = how quickly it reaches its peak amp.
Decay = how quickly it dies away
Vibrato = periodic change in pitch
Tremolo = periodic change in amp – both give expression/variation
Harmonics can be analyzed by Fourier analysis – shows the different amplitudes of the harmonics
Harmonics can be synthesized by Fourier synthesis – adding regular alternating (sinusoidal) voltages to give a complex sound (see the sketch below)
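A minimal Fourier-synthesis sketch of the idea above: summing sinusoids at whole-number multiples of a fundamental gives a complex tone. The harmonic amplitudes and sample rate are illustrative assumptions, not values from the notes:

```python
# Minimal Fourier-synthesis sketch: a complex tone is built by summing
# sinusoids at multiples of the fundamental frequency.  The harmonic
# amplitudes below are illustrative, not measured from any instrument.
import numpy as np

fundamental = 220.0                            # Hz
harmonic_amplitudes = [1.0, 0.5, 0.25, 0.125]  # relative amplitudes of f, 2f, 3f, 4f
sample_rate = 44100                            # samples per second
t = np.arange(0, 0.01, 1.0 / sample_rate)      # 10 ms of signal

signal = np.zeros_like(t)
for n, amp in enumerate(harmonic_amplitudes, start=1):
    signal += amp * np.sin(2 * np.pi * n * fundamental * t)

print(signal[:5])  # first few samples of the synthesised waveform
```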
Listening to Sounds 12/29/2012 6:35:00 AM

Peak amplitude = maximum displacement from rest position of one of the particles of the medium
Amplitude is related to the air pressure
Can be represented as a graph showing displacement of all particles
against distance or how one particle moves as time changes (the second
is better as you can work out the frequency from the time period)
Light waves and sound waves are progressive waves that carry energy
away from the source without transferring any material
Sound is a mechanical wave. Light is an EM wave (varying magnetic and
electric field perpendicular to each other and direction of propagation -
transverse)
Both sound and light, reflect, refract, diffract and superpose but light can
be polarized
That restricts the direction of oscillation to one plane perpendicular to the
direction of propagation
Longitudinal waves can’t be polarized; the particles of the medium oscillate parallel to the direction of travel.

Experiment:
Use two microphones connected to a fast timer
A hammer strikes a metal plate generating sound
The near microphone picks up the sound first followed by the far one after
the sound has travelled an extra meter
The electrical pulses trigger the timer to start and stop
The speed is the extra distance/time taken
Remember to repeat, find mean, discard anomalous for accuracy
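A small sketch of the calculation in this experiment (speed = extra distance / time delay), with made-up timer readings to show the repeat/discard/mean step; none of the numbers are real data:

```python
# Sketch of the two-microphone calculation: speed = extra distance / time delay.
# The timer readings below are made-up illustrative values, not real data.
extra_distance_m = 1.0                     # far microphone is 1 m further away
timer_readings_s = [0.0029, 0.0030, 0.0029, 0.0041, 0.0030]  # 0.0041 looks anomalous

# Discard obvious anomalies, then average the rest to reduce uncertainty.
typical = sorted(timer_readings_s)[len(timer_readings_s) // 2]   # median as reference
kept = [t for t in timer_readings_s if abs(t - typical) < 0.0005]
mean_time = sum(kept) / len(kept)

speed_of_sound = extra_distance_m / mean_time
print(f"speed = {speed_of_sound:.0f} m/s")   # ~340 m/s with these values
```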

Surround sound of home theater assumes the listener is equidistant from all speakers; if not then there is a different delay from each speaker and the effect is spoiled
It can be corrected by balancing the system
Large orchestras and choirs have to watch the conductor instead of
listening
Echoes can add or subtract from the initial sounds = unpleasant noise
Concert halls use hanging clouds to improve acoustics
They reduce the time taken for the first echo. This reduces the feeling of
isolation.
Listening to sounds 12/29/2012 6:35:00 AM

Musical instruments use stationary waves although the eventual sound is a travelling wave
So when a travelling wave is restricted for example on a string with fixed
ends, it reflects at each end = standing wave, no energy transmitted
When two waves meet their total displacement is the sum of their individual
displacements
After having met, the waves continue as if they
had never met
Can be demonstrated by putting a driving
oscillator at one end of the string and finding
harmonics

This causes the surrounding air to vibrate at the same frequency as the
device
String instruments have a stretched string
You pluck or bow (tension overcomes the friction between the hair and the
string) the string making it vibrate and setting up a standing wave
In a guitar the string transfers its vibration to the body which vibrates as a whole
The air inside the hollow body resonates at the same frequency as the strings
The top plate is thin and can vibrate easily
The fundamental frequency of a vibrating string depends on three things:
The mass – more mass/unit length vibrates more slowly – lower frequency
Tension – adjusted by tuning pegs – tighter = faster = higher pitch
The length of the string that can vibrate. Can be controlled between bridge
and fret board. Shortening the string reduces the wavelength giving a higher
frequency.

On strings, standing waves are formed by the superposition of two identical waves travelling in opposite directions
Travelling waves travel from the point where the string is plucked to the ends of the string, where they reflect and undergo a phase change of 180°
Nodes are points where the total displacement always remains 0
Anti-nodes are places where the displacement varies between opposite
maxima.
Strings have harmonics; these are multiples of the fundamental frequency. In a musical instrument the rich sound comes from several harmonics occurring at the same time, and it is the balance of the amplitudes that determines the timbre. At the fundamental frequency there is one loop on the string; when the fundamental frequency doubles you reach the second harmonic, where there are two loops on the string and the length of the string is equal to the wavelength.
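The notes describe the dependence on mass, tension and length qualitatively; the sketch below uses the standard stretched-string formula f1 = (1/2L) * sqrt(T/mu) with illustrative values (all assumptions):

```python
# Sketch of the standard stretched-string formula the notes describe
# qualitatively: f1 = (1 / 2L) * sqrt(T / mu).  Heavier string -> lower pitch,
# tighter or shorter string -> higher pitch.  Values below are illustrative.
from math import sqrt

def fundamental_frequency(length_m: float, tension_n: float, mass_per_length: float) -> float:
    return (1.0 / (2.0 * length_m)) * sqrt(tension_n / mass_per_length)

def harmonic(n: int, f1: float) -> float:
    """Harmonics are whole-number multiples of the fundamental."""
    return n * f1

f1 = fundamental_frequency(length_m=0.65, tension_n=70.0, mass_per_length=4e-4)
print(f1, harmonic(2, f1), harmonic(3, f1))
```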
In wind instruments air is blown into a pipe and leaves at the other end. When blowing through a reed it opens and closes the pipe; the pulses of air make the air column vibrate in sympathy, setting up a standing wave. In flutes air is blown at a sharp edge and the air above and below the edge vibrates. In brass the lips vibrate against the rim.
In a closed pipe there is a displacement node at the seal and antinode at the
opening. There is a pressure node at the opening and antinode at the seal.
The displacement antinode is slightly beyond the end strictly speaking.

When two musical notes play together they will superpose. If they have
slightly different frequencies then they periodically go in and out of phase
leading to constructive and destructive interference. You get a regular rising
and falling of amplitude. The beat frequency is equal to the difference in the
frequencies. You can use a tuning fork and adjust the frequency with the
tuning pegs until the beats disappear and the instrument is in tune.
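A one-line sketch of the beat rule above; the 440 Hz fork and slightly flat string are illustrative:

```python
# Beat-frequency sketch: the beat frequency equals the difference between
# the two frequencies.  Values are illustrative (a slightly flat string
# against a 440 Hz tuning fork).
def beat_frequency(f1_hz: float, f2_hz: float) -> float:
    return abs(f1_hz - f2_hz)

print(beat_frequency(440.0, 437.0))  # 3 beats per second; tune until this falls to 0
```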

Graphically when two waves meet the resultant displacement is the vector
sum of the individual displacements.
Two waves are in phase if they are both at the same point in their wave
cycle. They have the same displacement and velocity. If waves come from the
same oscillator then they will be in phase.

You can reflect microwaves off a metal plate to set up a standing wave. Then
you can find the nodes and antinodes by moving a probe between the
transmitter and the plate. Connect the probe to a loudspeaker/meter
Loud and Soft 12/29/2012 6:35:00 AM

Loudness is the energy transferred to the surroundings by the vibration.


Energy and therefore intensity is proportional to amplitude squared.
Loudness is the listener’s perception of the intensity of sound
From a point source, I = P / (4πr²), where P = E / t

So intensity has an inverse square relationship with distance from the point source
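A small check of the inverse-square relation I = P / (4πr²); the 1 W source power is an arbitrary illustrative value:

```python
# Inverse-square sketch for a point source: I = P / (4 * pi * r^2).
from math import pi

def intensity(power_w: float, distance_m: float) -> float:
    return power_w / (4.0 * pi * distance_m ** 2)

# Doubling the distance quarters the intensity.
print(intensity(1.0, 2.0) / intensity(1.0, 1.0))  # 0.25
```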

The threshold of hearing is 1x10^-12 W/m^2


A pressure change of 2x10^-5 Pa (N/m^2)
A logarithmic (decibel) scale is used to measure intensity
The threshold of hearing is 0 dB
A sound 10x more intense is 10dB and 100x more intense is 20dB (see the sketch at the end of this section)
3dB increase is double, 3dB decrease is half the intensity
Sensitivity is the smallest change that can be detected
Intensity is objective and can be measured
Loudness depends on the ear and is subjective
Age reduces the effectiveness of the ears, as does listening to loud music
for extended periods of time
The ear is most sensitive to sounds between 1 to 4kHz
It tends to amplify sounds of this frequency as it is associated with the
resonance of the auditory canal
Noise is random and persistent variation in a signal
It can be reduced by reducing it at the source, absorbing it or masking it
Noise cancellation generates sound in anti-phase to the original sound
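The sketch referred to above: the decibel level is 10 * log10(I / I0) with I0 = 1x10^-12 W/m^2, which reproduces the 10 dB, 20 dB and ~3 dB figures quoted earlier:

```python
# Sketch of the decibel scale: level = 10 * log10(I / I0), where
# I0 = 1e-12 W/m^2 is the threshold of hearing (0 dB).
from math import log10

I0 = 1e-12  # W/m^2, threshold of hearing

def level_db(intensity_w_m2: float) -> float:
    return 10.0 * log10(intensity_w_m2 / I0)

print(level_db(1e-11))   # 10 dB: ten times the threshold intensity
print(level_db(1e-10))   # 20 dB: one hundred times the threshold intensity
print(level_db(2e-12))   # ~3 dB: doubling the intensity adds about 3 dB
```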
Recording and Playback 12/29/2012 6:35:00 AM

Recording – Microphones (an input transducer, sound makes the membrane vibrate producing weak electrical signals whose voltage
mimics the displacement of air particles, this is amplified – signal strength
increased using current from amplifier’s power supply) – Mixer (Balances
the relative strengths of signals from multiple microphones/guitar
pickups) – Recorder (record head is an electromagnet, intensity varies as
current does, permanently magnetises ferric oxide on the recording tape)

Playback – Playback head (tape’s magnetic field induces a current in the head) – Amplification (the current is amplified) – Loudspeaker (electrical signal
causes mechanical vibrations in the speaker cone)

Digital Signal – has only two values; information is coded in binary in pulses of voltage or light
Analogue Signal – varies continuously with time e.g. voltage/light intensity

Audio signals (analogue) can be converted into digital signals by sampling the signal voltage at regular intervals; the measurements are converted into a series of binary numbers (quantisation into pulse code). This is done by an ADC (PCM). The signal is sent to a DSP which analyses the signal and encodes it using a compression algorithm (codec – compression-decompression – MP3, WMA). Upon playback it is decompressed and converted into an analogue signal by a DAC (pulse code demodulator). The minimum sampling frequency must be at least twice the maximum frequency of the original (baseband) signal. If not then false sounds can be produced – aliasing
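A minimal sketch of the sampling condition described above; the helper that predicts where an out-of-range tone ends up is an illustrative construction, not part of any real codec:

```python
# Sampling sketch: the sampling frequency must be at least twice the highest
# frequency in the analogue (baseband) signal, or aliasing produces false tones.
def minimum_sample_rate(max_signal_hz: float) -> float:
    return 2.0 * max_signal_hz

def apparent_frequency(signal_hz: float, sample_rate_hz: float) -> float:
    """Frequency a single tone appears at after sampling (folded into 0..fs/2)."""
    f = signal_hz % sample_rate_hz
    return min(f, sample_rate_hz - f)

print(minimum_sample_rate(20000))        # 40 kHz needed for audio up to 20 kHz
print(apparent_frequency(30000, 44100))  # a 30 kHz tone sampled at 44.1 kHz aliases to 14.1 kHz
```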

Quantisation error is due to the fact that a range of analogue values is represented by one digital value. This can be reduced by increasing the number of quantisation levels by using more bits. The most significant bit indicates the sign (polarity): 0 is negative voltage, 1 is positive voltage. To increase the fidelity, increase the sampling rate and the number of quantisation levels – a higher bit rate
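A small illustrative sketch of quantisation error shrinking as the number of bits increases; the value being quantised and the voltage range are arbitrary:

```python
# Quantisation sketch: an analogue value in a fixed range is mapped to one of
# 2**bits levels; more bits means smaller quantisation error (higher fidelity).
def quantise(value: float, v_min: float, v_max: float, bits: int) -> int:
    levels = 2 ** bits
    step = (v_max - v_min) / levels
    index = int((value - v_min) / step)
    return min(index, levels - 1)          # clamp the top of the range

def reconstruct(index: int, v_min: float, v_max: float, bits: int) -> float:
    step = (v_max - v_min) / (2 ** bits)
    return v_min + (index + 0.5) * step    # centre of the quantisation interval

x = 0.3037
for bits in (4, 8, 16):
    i = quantise(x, -1.0, 1.0, bits)
    print(bits, abs(x - reconstruct(i, -1.0, 1.0, bits)))   # error shrinks as bits increase
```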

Digital data can be compressed. This is done by eliminating redundant data. Lossy compression removes small discrepancies and masked quiet
frequencies especially if masked by a frequency that the ear is more
sensitive to. Can also use predictive coding where only the difference
between samples is transmitted to produce the second sample. This is
often used in TV where most of the picture remains unchanged frame to
frame. The advantages of this are that data can be transmitted much
more quickly and less space is needed to store it. However it may result
in a notable decrease in sound quality.

Transmitted signals are also susceptible to random noise (which irritates listeners and makes communication difficult) that can be removed easily from a digital signal using a filter. A computer can be programmed to regenerate the original signal by performing calculations on the sampled values. High pass filters remove low frequencies and feed the rest to a tweeter (treble speaker). Low pass filters remove high frequencies and feed the rest to subwoofers (bass speakers).

Advantages of digital signals: noise can be removed easily, they can be copied without loss of quality, stored easily, encrypted, compressed and shared between devices.
Disadvantages of digital signals: the pulse code needs to be converted to analogue otherwise it can’t be seen on a monitor or heard from a speaker. A low bit rate can also give a poor quality signal – aliasing

To obtain a pulse code modulation from an analogue waveform the signal is sampled at regular time intervals and then quantized. The output is a series of binary numbers which are transmitted in sequence. At the receiver a pulse code demodulator converts the binary numbers back into pulses having the same quantum levels as those in the modulator. These are further processed to restore the original analogue waveform.
CD and DVD 12/29/2012 6:35:00 AM

CDs use laser light of wavelength 500nm. They have a spiral track 0.5
microns wide with a spacing of 1.6 microns. The track has a series of
bumps and lands which represent binary data. The bumps are ¼ wavelength high, so light hitting the land travels ½ wavelength further than that reflected off the bump; the path difference makes it 180° out of phase so there is destructive interference and the sensor doesn’t detect anything = binary 0. There is no destructive interference from the land so that is sensed as a binary 1.
The laser is kept on track by a tracking mechanism. The diffraction
grating produces 3 beams. The central beam reads the data, the two first
orders are side beams that reflect from the CD surface, they should have
equal intensities on average, if not then an error signal is generated to
correct the tracking. The disc is rotated by a motor, faster in the center
and slower at the edges so data is read at the same rate.
DVDs are similar but use tighter spirals and shorter bumps due to shorter
wavelength light. They also have a more efficient tracking system. They
can also have two layers so their capacity is much greater. HD DVDs use two layers of dye and different coloured lasers, and are read from the top and bottom of the same disc.
During writing a laser burns the dye to create a non reflecting layer, this
is then read by a less powerful laser. Rewritable discs have a special
compound that can be remelted so pits can be filled in again.
In Young’s double slit experiment light from a coherent source (same f and lambda and a fixed phase difference) diffracts at the two slits and produces an interference pattern on a screen with a bright central maximum surrounded by alternating bright and dark fringes. The narrower the slit separation the wider the pattern. The fringes are formed by superposition. Where there is a path difference of a whole number of wavelengths the waves are in phase and there is constructive interference; if there is a path difference of an odd number of half wavelengths then the waves are 180° out of phase so there is destructive interference. Fringe spacing W = λD / s

Coherent sources are necessary for an interference pattern; this is done in this experiment as a double slit is used to diffract light from the same source into two overlapping beams.
When using a diffraction grating, nλ = d sin θ.
Grating spacing is the inverse of lines/metre
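A short numerical sketch of the two formulas above (W = λD / s and nλ = d sin θ); the wavelength, slit separation, screen distance and grating are illustrative values:

```python
# Sketch of the two interference formulas: fringe spacing W = lambda * D / s
# for Young's double slit, and n * lambda = d * sin(theta) for a grating.
from math import asin, degrees

def fringe_spacing(wavelength_m: float, screen_distance_m: float, slit_separation_m: float) -> float:
    return wavelength_m * screen_distance_m / slit_separation_m

def grating_angle(order: int, wavelength_m: float, lines_per_metre: float) -> float:
    d = 1.0 / lines_per_metre              # grating spacing is the inverse of lines per metre
    return degrees(asin(order * wavelength_m / d))

print(fringe_spacing(500e-9, 1.0, 0.5e-3))  # ~1 mm fringes
print(grating_angle(1, 500e-9, 300e3))      # first-order angle for a 300 lines/mm grating
```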
EM Communication 12/29/2012 6:35:00 AM

Infrared of 1 to 20mm is absorbed by water O2 and CO2


Ultraviolet is absorbed by the ozone layer
X ray and gamma are absorbed by gas molecules
Only some radio, visible light, near ultraviolet, and near and far infrared can pass through the atmosphere. The atmosphere is effectively opaque to all other wavelengths.
Additionally the atmosphere absorbs wavelengths less than 2 cm and the
ionosphere reflects wavelengths more than a few metres so only
microwaves can get through

Wireless communications rely on carrier waves; these are high frequency electromagnetic waves. The higher the frequency, the more (low frequency) information it can hold.
Information is encoded onto a signal by a process called modulation.
Once received it is demodulated by removing the carrier wave.

Transmission – input transducer – modulator – amplifier – transmitter


Amplitude modulation is done for LW and MW. When the information is a
loud sound then the output signal has greater amplitude variation, and
when it is a quiet sound then it has smaller amplitude variation.
Frequency modulation is where the frequency of the carrier is changed
slightly. The amplitude is represented by the overall change in frequency
above and below the carrier and the number of times it changes between
these limits every second represents the frequency
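A minimal numerical sketch of amplitude modulation as described above; the carrier frequency, audio frequency and modulation depth are illustrative assumptions:

```python
# Minimal amplitude-modulation sketch: the carrier's amplitude is varied in
# step with the (lower-frequency) information signal.  Frequencies are
# illustrative, not taken from any particular broadcast.
import numpy as np

carrier_hz = 1.0e6          # MW-band carrier
audio_hz = 1.0e3            # information signal
depth = 0.5                 # modulation depth
t = np.arange(0, 2e-3, 1e-7)

information = np.sin(2 * np.pi * audio_hz * t)
am_signal = (1.0 + depth * information) * np.sin(2 * np.pi * carrier_hz * t)

print(am_signal.max(), am_signal.min())   # envelope swings between roughly 1 ± depth
```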

Time division multiplexing is used to send several digital signals along the
same transmission path. The signal from each device is split up into a
packet of bits by a multiplexer which sends them in sequence. Upon
reception, they are reassembled by a demultiplexer and then passed on
to the appropriate device.
Radio Transmission 12/29/2012 6:35:00 AM

There are several transmission paths which allow radio to be transmitted to a receiver. The transmission path used depends on the
frequency/wavelength of the radio wave. At VHF you need line-of-sight
transmission whereas for HF ground waves and sky waves are used.
Remember, the higher the frequency the more information they can
carry.

Ground waves are of two types – surface waves and space waves.
Surface waves are used with waves of wavelength typically around 1 km; these reach the receiver by diffracting around the surface of the earth and, due to their large wavelength, they can also diffract around objects smaller than their wavelength, such as hills and buildings.
is a slight shadow behind these. The fact that it diffracts gives it a long
range and one transmitter can cover a whole country. However the
strength of the surface wave attenuates by inducing a voltage in the
earth’s surface, this is minimized by a vertical dipole aerial – so the wave
is vertically polarized – electric vertical and magnetic horizontal – this
reduces the contact of the electric field with the surface of the earth.
Space waves are used for TV/FM (VHF) radio. They have wavelengths of a
few metres and can’t diffract around hills or large buildings. These only
travel through line of sight or – due to the density and therefore
refractive index variation of the atmosphere – can refract giving reception
15% beyond the visual horizon (radio horizon)

Sky waves can also be used for transmission. When EM waves are sent
towards the ionosphere (ionized atoms 90-300km above Earth) then it is
refracted by the ionosphere. Longer wavelengths (HF and MW) travel
slower and are refracted more so they are totally internally reflected, this
leaves an area where nothing can be received – a skip zone; the waves can be reflected back and forth leaving several skip zones. Whether it is
reflected depends on the frequency, the power of the transmitter, the
angle of incidence and the level of ionization, which varies from time to
time. On the other hand shorter wavelengths have more energy and so
are faster, these are refracted less and can pass through the ionosphere
and so can be used for satellite communications.
AM is broadcast on LW (200kHz), MW (1000kHz) and SW (10MHz). LW
and MW have a channel bandwidth of 9kHz and it is 5kHz for SW. The
channel bandwidth is twice the baseband bandwidth. AM is quite
susceptible to EM interference which is difficult to filter so it is used for
speech where fidelity doesn’t need to be too high. Additionally at night
the surface waves behave as sky waves and travel much further so the
transmitters must reduce their power or use directional aerials to avoid
interference.
FM is much less susceptible to noise which affects the amplitude of
waves. This gives far better fidelity. It is usually used on VHF. It has a greater bandwidth (200kHz radio/6MHz TV) so can carry much more information, allowing for stereo transmission. However it can be
diffracted by small objects and reflected causing multipath interference.
Radio transmission uses frequency division multiplexing so there is a limit
to the number of transmissions that can coexist as the bandwidth is
limited. This is controlled by the government. Copper cables/optical fibres
don’t have the same bandwidth problems as bandwidth can be expanded
by installing more cable.

One thing to note is that radio signals are generated by accelerating electrons along a transmission aerial which is half a wavelength long. The AC sets up a standing wave which is polarized. The receiver must be aligned with the electric field to receive the signal.

Recently many people have begun to use DAB which superimposes a digital signal onto the carrier wave. A multiplexer also adds additional
data such as text and images. They use a multiplex (TDM) with multiple
DAB services on the same carrier frequency so more stations can transmit
on less bandwidth. This is reassembled at the receiver giving CD quality
output. The interference can also be filtered easily so interference isn’t as
much of a problem as FM/AM
Satellites 12/29/2012 6:35:00 AM

Sky waves can also be used for satellite communications acting as a relay
with a UHF (4-20GHz) uplink and lower frequency downlink so that there
is less rain and atmospheric attenuation. Geosynchronous/stationary orbits are used, 35800km high with a 24 hour period, so the satellites remain fixed above a certain point of the Earth and dishes don’t need to change direction. They have to be positioned carefully so that they don’t interfere with neighboring satellites.

The power of EM waves diminishes according to the inverse square law so the transmitted beam must be narrow rather than omnidirectional. This is done by putting the aerial dipole at the focus of a parabolic reflector dish. This dish doesn’t need to be solid and can be a mesh as long as the holes are smaller than the wavelength. At the receiving end, using a similar dish gives a high gain – a stronger signal output. This can also be done by increasing the diameter, as the energy collected is proportional to area.

The satellite dish makes radiation spread out as when passing through a single slit, with the dish diameter as the aperture. sin θ = λ / a gives the angle from the normal to the first minimum and so can be used to calculate the satellite footprint, which is the portion of the Earth over which a satellite delivers a specified signal power. Using a small dish will give a large footprint but with low intensity.
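A rough sketch of the footprint estimate implied above, treating the dish as a single slit (sin θ = λ / a) and using simplified flat geometry from geostationary height; the downlink wavelength and dish size are assumptions:

```python
# Rough sketch of beam spread from a dish: sin(theta) = lambda / a gives the
# angle to the first minimum, and the footprint radius on Earth is roughly
# orbit_height * tan(theta) for a geostationary satellite (simplified geometry).
from math import asin, tan

def beam_half_angle(wavelength_m: float, dish_diameter_m: float) -> float:
    return asin(wavelength_m / dish_diameter_m)       # radians

def footprint_radius(wavelength_m: float, dish_diameter_m: float,
                     orbit_height_m: float = 35.8e6) -> float:
    return orbit_height_m * tan(beam_half_angle(wavelength_m, dish_diameter_m))

# 4 GHz downlink (wavelength ~7.5 cm) from a 2 m dish: footprint ~1300 km radius.
print(footprint_radius(0.075, 2.0) / 1000)
```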

Loudspeakers diffract sound in a similar way and can be modeled by the same equation. Thus stereo systems have loudspeakers with different
cone diameters that match with the wavelength they are producing so
that the minima are in similar places.

There are also more than 10000 pieces of space debris such as ejected rockets and disused/broken up satellites in orbit. The UNOOSA has said that states are liable for any damage caused by their space debris and that contamination of space should be avoided.

Satellites are expensive to set up and maintain however they can relay
many messages at once, they are reliable and cover a large area with one
transmission.
Optics 12/29/2012 6:35:00 AM

Light can be thought of as a ray, a very narrow beam demonstrating the direction of travel; it is perpendicular to the wavefront.
Rays incident on a material boundary are partially reflected, partially
refracted and partially absorbed, the total energy is conserved.
The angle of incidence = the angle of reflection
The ratio of the sine of the angle of incidence to the sine of the angle of refraction is constant for the same two materials; this is equal to the absolute refractive index of the second material divided by the absolute refractive index of the first material and is written as 1n2. The refractive index from substance 2 to substance 1 is the inverse of 1n2.

Waves entering an optically denser substance slow down and their wavelength decreases as the frequency stays constant. The absolute refractive index is related to this quantity: n_s = c / c_s. The refractive index from the substance to a vacuum is the inverse of the absolute refractive index.

If a ray travels from an optically denser substance towards a less dense one at an angle then it refracts away from the normal. If the angle of incidence is great enough then the angle of refraction will be 90°; this angle of incidence is the critical angle. When the angle of incidence is greater than the critical angle the ray is totally internally reflected so there is no refracted ray. With sin θ2 = 1, sin θc = n2 / n1. Air has an absolute refractive index of approximately 1, so when a ray travels towards an air boundary sin θc = 1 / ns
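A small check of the critical-angle relation above; the refractive indices (1.5 for glass, 1.48/1.46 for a fibre core and cladding) are typical illustrative values, not taken from the notes:

```python
# Critical-angle sketch: sin(theta_c) = n2 / n1, or 1 / n for a boundary with air.
from math import asin, degrees

def critical_angle(n_dense: float, n_less_dense: float = 1.0) -> float:
    return degrees(asin(n_less_dense / n_dense))

print(critical_angle(1.5))        # glass to air: ~41.8 degrees
print(critical_angle(1.48, 1.46)) # core to cladding in a typical fibre: ~80.6 degrees
```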

Optical fibres are threads of glass/plastic that carry light or infrared with
minimal attenuation using TIR. A pulse of light is transmitted at one end
and received at the other. The core is surrounded by a transparent
cladding of a lower refractive index. Any rays hitting the cladding at an angle greater than the critical angle will be totally internally reflected.
Around that there is a plastic sheath that strengthens and prevents
scratches that could leak light. It also ensures that light doesn’t pass from
one fibre to another if there is contact as that would lead to very insecure
communication. Monochromatic light is used to avoid dispersion as
different wavelengths travel at different speeds when not in a vacuum.
There is also the problem of multipath dispersion as rays travelling in the
centre travel shorter distances than those reflected so the
bitrate/bandwidth per km (pulses recognizable as separate per second) is
reduced. This can be compensated for by making the cores thinner or by
using a graded index fibre in which the refractive index is gradually
reduced from the centre to the edges. The light near the edges therefore
travels faster over a longer distance so the pulse is not as spread out; this allows a bandwidth of 1 GHz per km. The rays also curve due to refraction. There is also interference as only certain angles of incidence
called modes allow rays to be transmitted by constructive interference,
other angles cause destructive interference.

As the signal travels it weakens due to absorption by the glass molecules as they match the natural frequencies, and by scattering, where the beam is scattered in all directions by certain molecules so there is less energy in the forward direction. This can be compensated for by regular repeaters which convert the signal into an electrical signal which is fed to a transmitter which repeats it at a higher intensity.

Optical fibres are usually used when high bandwidth or long distance communication is required; they are also much lighter. They have low losses so there can be large distances between repeaters, and they can’t induce signals in one another so there is no cross talk (as in copper cables); they are also safe near high voltage equipment as glass has a high electrical resistance, and they are very secure. However they are expensive and not easy to join together. Copper cables are used for short distances as they are very cheap and many are already in place. They are easier to join together and can carry electrical power as well.
Atoms 12/29/2012 6:35:00 AM

Models are simplified pictures of what is physically happening. They are created to explain observations and phenomena. They help to predict the
outcome of other experiments.

There have been many models in the past to try to explain what matter is
made from. Democritus proposed the existence of small indivisible particles
called atoms. He postulated that these discrete particles gave matter its
properties. On the other hand, alchemists were keen to assert that matter
was made up of four key elements – earth, fire, air & water.

In the 17th Century Newton suggested that new particles arise when particles
reform, and John Dalton showed that the mass of products is the same as
the mass of reactants. Proust found the law of constant composition which
said that the elements in a compound are in the same ratio of mass,
regardless of quantity. Dalton said, in his law of multiple proportions, that
when elements combine to form multiple compounds then the mass of one
that combines with a fixed mass of the others to produce the different
compounds are in simple whole number ratios. These two laws were the
experimental evidence for atoms as they suggested that at a very small level
there were discrete entities that could not be broken into parts.
Other evidence also pointed at the existence of atoms and molecules:
When bromine vapour from a phial is released into an evacuated gas
diffusion tube then it quickly fills up. If there is air then it diffuses slowly as
collisions with air molecules hinder its progress. Diffusion can’t be explained
if air is a homogenous mass however moving particles allow for diffusion.
The speed shows how fast atoms move and this also shows that there are
spaces between gas molecules for other molecules to move past one
another.

Robert Brown also noticed random movement of pollen suspended in water, similar to smoke particles in air. This can be explained using the atomic model. The particles bombard the smoke and they exert a force due to their momentum. These are random collisions so there is a resultant force at times when one side is hit more than the other. Thus the smoke continually changes direction, producing a jerky movement.
To produce this force imbalance the atoms need to be much smaller than the smoke particles; their radii range from 0.12nm to 0.74nm. This can also be determined from an X-ray diffraction pattern.
Atomic Structure 12/29/2012 6:35:00 AM

Evidence for the electron came from the gas discharge tube with 500V between the electrodes at both ends. As the pressure was lowered from 100000Pa to 1Pa the tube became dark and there was a glow at the anode. This was thought to be because of radiation from the cathode so they were called cathode rays. Crookes thought that these were particles but Hertz showed that they could pass through gold sheets so it seemed unlikely that they were particles. J.J. Thomson used an electric and magnetic field to show that these were charged particles with about 2000x the charge to mass ratio of protons and were called electrons. This can also be shown using a heated cathode in what’s known as thermionic emission. In the Maltese cross experiment a metal cross was placed
between the cathode and phosphorescent screen. It cast a shadow that
could be deflected by magnets a principle used in CRT TVs. These
particles had come from the cathode and so must have come from the
atoms so atoms have substructure.
Rutherford then conducted (with Geiger and Marsden) an alpha scattering experiment. He fired a narrow beam of alpha particles at a thin sheet of gold foil. If Thomson’s plum pudding model (electrons embedded in a cloud of positive charge) were true then the particles should all pass through with very little deviation. This was seen: most particles were deviated by very small angles. However what was surprising was that 1 in 8000 alpha particles was deflected by more than 90°. This led him to believe that most of the atom is empty space and that most of the mass of the atom is concentrated in a very small space at the nucleus. He also concluded that the nucleus has a positive charge (the atom is neutral) and so repels alpha particles resulting in the deviation. Bohr suggested that electrons
orbit around this nucleus. This showed the radius of a gold nucleus to be
7fm whereas that of Hydrogen is 1.2fm. It must be noted that if the alpha
particles have very high energy then they are captured as the strong
force dominates.
However Rutherford wasn’t finished yet, he found that when nitrogen was
exposed to an alpha source, a new even more penetrating particle (the
proton) was emitted that produced a flash on a Zinc Sulphide screen. As a
result the Nitrogen changed into Oxygen-17 by transmutation. Alpha
particles were found to be Helium nuclei and the tracks of a proton in a
cloud chamber showed its charge and its mass. However if the nucleus
had solely protons then there would be unaccounted mass in the heavier
elements as the Helium nucleus is four times as heavy as the Hydrogen-1
nucleus but only has twice the charge.
The initial explanation was that there were also electrons in the nucleus
which cancelled out some of the positive charge, so a helium-4 nucleus
would have 4 protons and 2 electrons. However this was soon revised.
Chadwick and Rutherford believed in a neutral particle in the nucleus
accounting for the extra mass. He conducted an experiment where alpha
particles bombarded Beryllium which resulted in transmutation and a
weakly ionizing radiation. When this was incident on proton rich paraffin
wax (Hydrogen nuclei) then protons were emitted. One theory was that
these were extremely energetic gamma rays. Chadwick disagreed and
showed that these were neutral particles with a similar mass to protons.
He acknowledged that it could still be gamma radiation if the well
established laws of conservation of energy and momentum were flawed.

Becquerel also noticed that heavier nuclei have the propensity to decay spontaneously (without external intervention, unlike induced fission). He noted that in any decay, charge and nucleon number are conserved.
When an alpha particle is emitted Z decreases by 2 and A decreases by 4.
When a neutron decays into a proton (weak interaction) A remains the
same, Z increases by 1 and a beta- particle (-1e) (electron) and electron
antineutrino are emitted too. If there is a lot of energy then a proton can
decay into a neutron then A stays the same, Z decreases by 1 and a
beta+ particle (positron) and electron neutrino are emitted.
Scientists are trying to find substructure in what we believe are
fundamental particles (not made of any smaller particles). This can be
done by electron collisions. These can be elastic (KE conserved) or
inelastic (KE lost in excitation or liberation of electrons). Inelastic
collisions provide evidence for the processes and constituents of atoms. In
the same way an inelastic collision of an electron with a nucleon could
provide evidence of substructure or may knock something out. To do this
electrons needed to be accelerated much more to give a better resolution.
At low energies the electron scattered as predicted if the proton were the
fundamental particle. However at high energies there appeared to be
charged point like particles within protons. These were named quarks with
charge +2/3 and -1/3.
Waves or Particles? 12/29/2012 6:35:00 AM

Black bodies are perfect emitters for their temperature and have a peak
wavelength depending on that temperature. Classical Physics predicted that
at short wavelengths the intensity of radiation would tend to infinity (there
were infinitely many wavelengths that could fit into the cavity) which was
absurd and didn’t match observations. This failure to predict what happens
was known as the ultraviolet catastrophe. Planck suggested that the electron
oscillations in hot bodies could only take discrete values and nothing in
between, these were multiples of hf where f is the fundamental frequency of
that black body (or nhf). If the discrete value was more than what should be
emitted at that temperature then it would not be emitted. This resolved a
problem and produced a prediction that matched the practical curve.
However there was no evidence for this and it wasn’t accepted until the idea of lumpy, quantized energy was used to explain the photoelectric effect.
When EM radiation is incident on the surface of a metal then electrons are
liberated. This can be seen on a charged gold leaf electroscope where loss of
electrons results in the gold leaf falling. Free (delocalized) electrons absorb
the energy from the EM radiation making them vibrate, if this is enough
energy then the electron overcomes the attraction to the metal (potential
well) and is liberated. If ordinary light is used then there is no photoelectron
emission and if a glass plate (absorbs UV leaving only visible light) is placed
in the way then there is also no emission. This demonstrates that there must
be a threshold frequency. Additionally the electrons are emitted with a
variety of kinetic energies and this is proportional to the frequency of the
incident radiation. It was also noted that very intense visible light resulted in
no emission although weak UV resulted in instant emission. This suggested
that energy must be quantized. Einstein explained that the energy in EM
radiation could only be transferred in fixed denominations called photons.
The energy of a photon was proportional to the frequency with the ratio
being the plank constant. He suggested that a minimum amount of energy
was needed to liberate an electron, known as the work function (normally
measured in eV = 1.6x10^-19 J) and that this energy needed to be delivered at one instant – energy could not practically be stored for more than 10 nanoseconds. If the frequency is too low then the electron will vibrate and emit another photon and the metal will heat up; if the energy is above or equal to the work function then any extra is converted into KE.
This could not be explained by the wave theory of light that postulated that
the energy of a wave was proportional to its intensity (no reason for
threshold frequency) and also suggested that there would be a gradual
buildup of energy until the electron was liberated. However the kinetic
energy of the emitted electrons had nothing to do with intensity and there
was no time delay for emission. Also even after long exposure to visible light
there was no emission. This experiment suggested wave particle duality as it
showed light to behave as a particle, however interference patterns are endemic to waves not particles. As a result there is wave-particle duality: to determine where EM energy goes, treat it as a wave; to determine how it interacts with matter, treat it as a particle. Thus intensity is the number of photons arriving per second, and the greater the amplitude at a point, the greater the probability of detecting a photon there.
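A minimal sketch of the photon picture just described: each photon carries E = hf, and any energy above the work function appears as kinetic energy. The work function value is illustrative:

```python
# Sketch of the photon picture: each photon carries E = hf, and the maximum
# kinetic energy of an emitted electron is hf minus the work function.
# The work function below is an illustrative value.
H = 6.63e-34     # Planck constant, J s
EV = 1.6e-19     # joules per electron volt

def max_kinetic_energy_ev(frequency_hz: float, work_function_ev: float) -> float:
    photon_energy_ev = H * frequency_hz / EV
    return photon_energy_ev - work_function_ev   # negative means no emission

print(max_kinetic_energy_ev(1.0e15, 2.3))   # UV photon (~4.1 eV) on a 2.3 eV surface
print(max_kinetic_energy_ev(5.0e14, 2.3))   # visible photon (~2.1 eV): below threshold
```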
De Broglie assumed that if waves behaved like particles then the converse
would be true. He suggested that the wavelength of a mobile particle was
inversely proportional to its momentum with the ratio being the Planck
constant. This was proven later as electrons passing through graphite
appeared to be diffracted – an interference pattern. This showed that the
electrons had higher probabilities of being in some places compared to
others, this happened even when electrons were fired one by one. Increasing
the voltage and therefore momentum squashes the maxima rings together.
This would be expected if the wavelength were to decrease, as suggested by
de Broglie. It is crucial to note that there will only be diffraction if the particle
interacts with an object with a similar size to its de Broglie wavelength so
massive objects don’t diffract as there is nothing small enough to make them
diffract. This is also important in microscopic imaging as diffraction results in
blur on an image so to resolve tiny details you need a shorter wavelength
and thus a greater resolution. Light has quite a large wavelength but by
accelerating electrons we can get wavelengths small enough to look at
strands of DNA.
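A small sketch of the de Broglie relation described above (wavelength = h / momentum), using the non-relativistic momentum of an electron accelerated through an assumed voltage:

```python
# De Broglie sketch: wavelength = h / momentum.  For an electron accelerated
# through a potential difference V, momentum = sqrt(2 m e V) (non-relativistic).
from math import sqrt

H = 6.63e-34        # Planck constant, J s
M_E = 9.11e-31      # electron mass, kg
E_CHARGE = 1.6e-19  # electron charge, C

def de_broglie_wavelength(accelerating_voltage: float) -> float:
    momentum = sqrt(2 * M_E * E_CHARGE * accelerating_voltage)
    return H / momentum

print(de_broglie_wavelength(5000))   # ~1.7e-11 m: small enough to diffract off atomic spacings
```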
Bohr’s original orbital model had a problem, when charged particles are
accelerated they emit radiation so Bohr’s electrons should spiral into the
nucleus. Bohr said that electrons could only have discrete energies and
nothing between, he used the wave particle model to suggest that the
nucleus and edge of the atom were nodes where the probability of finding an
electron is zero. Electrons must have wavelengths with a whole number of loops in the atomic radius, so the kinetic energy of an electron is E_n = n^2 h^2 / (8 m r^2), where n is the number of loops in the stationary wave and the quantum shell number. To change from one level to another, photons are emitted with energy equal to the difference between the electron energy levels; this explained line spectra.
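A numerical sketch of the formula quoted above, E_n = n^2 h^2 / (8 m r^2), with an assumed atomic radius of about 0.1 nm; the printed difference between two levels is the photon energy for that transition:

```python
# Sketch of the standing-wave energy formula quoted above,
# E_n = n^2 h^2 / (8 m r^2), with the electron mass and an assumed radius.
H = 6.63e-34       # Planck constant, J s
M_E = 9.11e-31     # electron mass, kg
EV = 1.6e-19       # joules per electron volt

def level_energy(n: int, radius_m: float = 1e-10) -> float:
    """Kinetic energy of the n-loop standing wave, in eV (crude model)."""
    return (n ** 2) * H ** 2 / (8 * M_E * radius_m ** 2) / EV

# A photon emitted in a transition carries the difference between two levels.
print(level_energy(1), level_energy(2), level_energy(2) - level_energy(1))
```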
Standard Model 12/29/2012 6:35:00 AM

The standard model is derived from experiments involving decays and interactions between sub-atomic particles. It suggests that there are two
types of fundamental particle called leptons and quarks, and each particle
has its own equivalent antiparticle. All other particles are thought to consist
of combinations of these fundamental particles.
Leptons are point like particles with lepton number +1 and charge -1 (unless
they are neutrinos or antiparticles). They are subject to weak interactions.
There are three families, the electron, muon and tau. The electron and muon
are very light but the tau is about as heavy as a proton. The lepton family
also includes small neutral particles called neutrinos, these were initially
suggested to account for the variability in the energy of the beta particle and
the recoil nucleus. The beta particle had a spectrum of kinetic energies so if
there were a set amount of energy available for decay (like alpha) then
where did the rest go? The neutrino (in actual fact the antineutrino) would carry
away the unaccounted energy and momentum. They were discovered in
1959 with almost zero mass but remain mysterious and they don’t interact
much. Each lepton family has its own neutrino and antineutrino.
There are six quarks, up, down, charm, strange, top, bottom as well as their
antiparticles. They carry a charge of +2/3 or -1/3. They also have a baryon
number of 1/3. These never exist on their own due to quark confinement, if
enough energy is supplied to break them apart, the energy becomes a
quark-antiquark pair, so if an up quark is pulled out of a proton then a
meson and another proton is produced. Thus quarks only exist together as
hadrons – particles that can experience the strong force. Mesons have one
quark and one antiquark, they have very short lifetimes (10^-23 s) unless they have a strange quark (e.g. kaon) in which case they have much longer lifetimes (10^-10 s). This is a property called strangeness; strange quarks
have strangeness of -1. Mesons also have a baryon number of 0, they
interact with baryons via the strong force to produce protons from neutrons
or vice versa. Baryons consist of three quarks of the same domain. Protons
are the only stable baryons and all other baryons decay to protons, for
example the down quark in a neutron decays into an up quark.
It is important to note than whenever a particle is produced from a photon,
the corresponding antiparticle is also produced, this is called pair production.
This tends to happen near nuclei so momentum is conserved. In the same
way, when an antiparticle meets a particle, they annihilate each other. For
example when an electron meets a positron a Z0 particle is produced. At low
energies this produces a pair of gamma rays, however at high energies, it
produces another electron-positron pair or a quark-antiquark pair with jets of
new particles.

It is important that whenever particles interact or decay, conservation rules


are observed. Charge is conserved, baryon number is conserved, the lepton
number for each type of lepton is conserved. In strong particle interactions
(where the quark type doesn’t change and so there is a strange quark before
and after the event), strangeness is conserved
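A minimal bookkeeping sketch of these conservation rules, checking charge, baryon number and lepton number for beta-minus decay; the tuple representation is just an illustrative construction:

```python
# Sketch of the conservation bookkeeping: a decay is only allowed if charge,
# baryon number and lepton number all balance.  Each particle is written as a
# (charge, baryon number, lepton number) tuple; the values are standard.
NEUTRON      = (0, 1, 0)
PROTON       = (+1, 1, 0)
ELECTRON     = (-1, 0, +1)
ANTINEUTRINO = (0, 0, -1)   # electron antineutrino

def conserved(before, after):
    totals = lambda particles: tuple(sum(p[i] for p in particles) for i in range(3))
    return totals(before) == totals(after)

# Beta-minus decay: n -> p + e- + electron antineutrino
print(conserved([NEUTRON], [PROTON, ELECTRON, ANTINEUTRINO]))   # True
# Forbidden: n -> p + e- (lepton number would not balance)
print(conserved([NEUTRON], [PROTON, ELECTRON]))                 # False
```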

The standard model also specifies four forces of nature each with their own
exchange particles (gauge bosons), these are necessary as you can’t have
instantaneous action at a distance. These are virtual particles that exist for a
very short time and mediate the force and information
The strong force acts between hadrons and quarks and its exchange particle
is the gluon or the +/- pion. It acts equally strongly between all hadrons. It
overcomes the strong EM repulsion in the nucleus although at short ranges it
is repulsive or else it would reduce the nucleus to a point.
The Electromagnetic force (SF x10^-2) acts between charged particles with
the exchange particle being the massless photon (so infinite range). These
impart momentum and thus can cause attraction or repulsion.
The weak force (SF x10^-6) affects all particles, its gauge bosons are W+,
W- and Z. W bosons carry charge and are exchanged during neutron decay
and subsequently decay into an electron and electron antineutrino. The weak
force is the only way for a quark or lepton to change type. The W boson has
a mass 100x that of a proton so its range is 0.01 times the diameter of a
proton. It takes a lot of energy to create a W boson so it exists for a very
short time and can’t travel far.
Gravitational force is a very weak force (SF x10^-40) so is usually ignored,
however it is long range and is the dominant force that acts between
galaxies. The graviton has been postulated as a gauge boson but there is no
evidence.
These forces are currently distinguishable but at high energies the EM force
and weak force are indistinguishable, they merge to form the electroweak
force. At the big bang, none of the forces were separate, but they separated
as the universe cooled. The challenge is to incorporate the strong force with
the electroweak force to produce a Grand Unified Theory and then also
incorporate gravity to produce a theory of everything.
Creation 12/29/2012 6:35:00 AM

The current scientific view is that the universe was created at the big
bang. We can gather evidence to support this from cosmology as when
we look into space we are essentially looking back in time due to the
delay for the photons to reach earth.
The wavelength of light from galaxies is longer than expected if they were
to be stationary suggesting that they are moving apart and were once
one. This is known as red shift
The chemical composition of galaxies is consistent with that predicted by
the big bang theory which says that composition changed as the
temperature of the universe fell.
Cosmic Microwave Background radiation was discovered by accident when an omnidirectional signal with a wavelength of 7.4cm was detected from space. This wasn’t a fault in the electronics. The distribution of wavelengths is consistent with a black body temperature of 2.7K (confirmed for all wavelengths by the Cosmic Background Explorer satellite). CMB is the oldest detectable signal and was originally very short wavelength, but as the universe expanded the photons lost energy and are now in the microwave region as predicted. They have been red shifted from visible to microwave.

At the beginning the universe emerged from a singularity – 10^-43s is the smallest unit of time measurable. Then up till 10^-34s matter and
energy were continually interchanging with pair production and
annihilation. As the universe expanded rapidly, the temperature fell.
In the Grand Unification era the universe expanded very rapidly. Here
more particles were created than antiparticles which is a mystery. Gravity
separated. Quarks and leptons were indistinguishable. In the heavy
particle era (10^-10s) the strong force separated and heavy particles like
the W and Z bosons existed and decayed. The temperature overpowered
quark confinement and quarks and gluons moved around in a plasma.
In the light particle era from 10^-7s up till 3s, some quarks and
antiquarks annihilated to produce energy. The excess quarks and gluons
formed hadrons. Then at 30 minutes only protons, neutrons and alpha
particles remained. Then up till 10,000 years (radiation era) the energy
was EM in the form of X-rays, light, UV and radio. This eventually became
CMB. Up to 300,000 years (matter era), electrons were captured to form
atoms and stars formed. The universe became transparent. At 100 million
years galaxies formed and at 10 billion, Earth formed.
Mass and Fate 12/29/2012 6:35:00 AM

Mass is thought to be because of the way particles interact with the Higgs
boson. Heavier particles interact more strongly and so have greater
inertia. This means that they require more force to accelerate. Massive
particles are harder to move as they interact more strongly with the Higgs
field that fills space; this acts like a sort of friction. This is a theory, however the Higgs particle does appear to have been found.
It is interesting that only 4% of the mass of the universe is accounted for.
However the motion of stars suggests that there is much more mass. 20-
25% is thought to be weakly interacting massive particles – non baryonic
dark matter. The rest is dark energy which is responsible for increasing
the rate at which the universe is expanding.
The fate of the universe depends on the density of the universe in relation to the critical density. At the critical density (very unlikely) the rate of expansion will
decrease with time and the universe will expand to a maximum limit (a
flat universe). If the density is more than the critical density then the
universe is closed, it will stop expanding and then contract, ending with a
big crunch. On the other hand if the actual density is lower than the
critical density then the universe would be open and continue to expand
forever.

However our current theories are tenuous and are open to change if any
new evidence suggests something different, the dark energy factor raises
a lot of questions, and it is still unclear whether leptons and quarks are
fundamental particles. Hopefully the high energies of the LHC will produce
results.
Stellar Radiation 12/29/2012 6:35:00 AM

We can learn a lot about stars by studying the radiation they emit. From the
wavelengths and brightness, we can determine the temperature and composition of
the star.
Luminosity is the total power radiated by the star. This depends on the surface
temperature and therefore surface intensity as well as the surface area. Luminosity
(W) = Intensity (W/m^2) * Surface area (m^2).
Brightness is simply the intensity that we perceive and therefore the energy per
second that reaches our pupils. A star may be bright because it has a high
luminosity or because it is very close to the Earth and vice versa. Luminosity is often
measured relative to the sun. We can measure brightness on a scale known as
apparent magnitude. The scale is counterintuitive as the first magnitude is 100x brighter than the sixth magnitude. So a decrease of 1 in apparent magnitude means that the light from the star is 2.51 (100^0.2) times more intense. Our sun has an apparent magnitude of -26.7, making it 2.51^27.7 times brighter than a first magnitude star. This is a subjective scale so to compare power, other than by luminosity, we have a similar scale called absolute magnitude which allows for comparison. The absolute magnitude is the brightness or apparent magnitude of the star if it were 10 parsecs away (1 parsec = 3.26 light years = a parallax angle of 1 arc second). The data is obtained from observatories on high mountains or in space as the atmosphere absorbs radiation and so dims the star’s apparent brightness.
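A small sketch of the magnitude scale described above: five magnitudes correspond to a factor of 100 in brightness, so one magnitude is a factor of 100^0.2 ≈ 2.51:

```python
# Magnitude-scale sketch: a difference of 5 magnitudes is a factor of 100 in
# brightness, so one magnitude is a factor of 100**(1/5) ~ 2.51.
def brightness_ratio(magnitude_difference: float) -> float:
    return 100.0 ** (magnitude_difference / 5.0)

print(brightness_ratio(5))              # 100: first vs sixth magnitude
print(brightness_ratio(1 - (-26.7)))    # the Sun (-26.7) vs a first-magnitude star
```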
Stars are classified according to spectral class which is dependent on their surface temperature. Very hot stars look blue as the intensity of short wavelengths is higher, hot stars look yellow and white, and cool stars emit mostly infrared with some red so appear red. The spectral classes are O, B, A, F, G, K and M.

The Hertzsprung-Russel Diagram shows patterns in the types of stars. The x axis is
surface temperature from high to low with the temperature starting at 40000K and
halving each unit. On the y axis there is relative luminosity (to the sun) with 1 in the
centre and a hundred fold increase for each unit up. Similar types of stars occur in
groups, this suggests that there is a particular sequence of events in the evolution
and death of a star.
Main sequence stars are dwarf stars like the sun that produce energy by the fusion
of Hydrogen, Helium and Carbon. These make up over 80% of all stars
Red Giants are cooler than the sun so appear red, however they are much more
luminous (100x) as they have a much greater surface area due to a large diameter
as they are giants.
White Dwarfs are the remains of old stars. Although they were very hot when they
died, the have a low luminosity due to a small surface area. These take billions of
years to cool down.
Supergiants are very large and very luminous. A supergiant at the same temperature as the sun is 90000x more luminous so has a diameter 300x greater.

Stars form from an interstellar nebula. There is gravitational attraction between the
hydrogen nuclei and the loss of potential energy leads to an increase in temperature
and kinetic energy. The gas becomes denser and when the temperature is high
enough fusion begins to produce helium, thus raising the temperature, eventually
these can fuse too, raising the temperature further. Luminous main sequence stars
live for a million years, others like our sun live for 10000 million years.

Eventually our sun and stars like it will collapse as the Hydrogen in the core is used
up. This will raise the temperature so Helium in the core fuses and Hydrogen in the
outer layers begin to fuse. This raises the temperature of the outer layer which
expands. The expansion causes the temperature to fall so it becomes a red giant.
Then after a long time the fusion of helium raises the temperature of the core
further, producing heavier elements, the star then collapses to form a small hot
white dwarf.
If the star were to have a mass more than 1.4x that of the sun then it could explode
into a smaller white dwarf or collapse suddenly and become a bright supernova. Our
solar system could have arisen as a result of an exploding supernova rather than
from interstellar matter due to the existence of heavy elements.

Black bodies are objects that absorb all radiation that falls onto them and radiate it out as thermal radiation according to their temperature. This produces a continuous
spectrum of wavelengths with the peak wavelength depending on the temperature.
Cold stars look red as their peak is closer to the red end. Stars at 6000K look
yellow-white and stars at 12000K look blue as the peak wavelength is closer to the
blue end. In a laboratory continuous spectra can be investigated by using a
spectrometer fitted with a diffraction grating and a filament lamp through which the
current is increased. If the temperature of a black body is doubled the power and
therefore intensity increase 16 fold, so power is proportional to the fourth power of temperature. The sun’s temperature is 5800K and its intensity is 6.5x10^7 W/m^2. Its luminosity is therefore 4.0x10^26 W. Additionally 43% of the sun’s radiation is visible
unlike the Earth which is mostly infrared. 37% is near infrared and only 7% is
ultraviolet.
Wien’s law states that for a black body λ_max × T = 0.0029 m K
To calculate the luminosity of a star we find how intensity varies with wavelength
and identify the wavelength at which the maximum intensity occurs. We then use
Wien’s law to determine the temperature and from this the intensity can be
determined. If the radius is known from astronomical observations then the surface
area can be found and hence the luminosity.
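A sketch of the procedure just described: Wien's law gives the temperature from the peak wavelength, the T^4 law (via the Stefan-Boltzmann constant, which the notes imply but don't quote) gives the surface intensity, and multiplying by the surface area gives the luminosity. The Sun's radius is an assumed input:

```python
# Sketch of the procedure described above: Wien's law gives T from the peak
# wavelength, the T^4 law gives the surface intensity (Stefan-Boltzmann
# constant assumed), and L = intensity * surface area.
from math import pi

WIEN = 0.0029        # m K
SIGMA = 5.67e-8      # W m^-2 K^-4 (Stefan-Boltzmann constant)

def temperature_from_peak(lambda_max_m: float) -> float:
    return WIEN / lambda_max_m

def luminosity(lambda_max_m: float, radius_m: float) -> float:
    T = temperature_from_peak(lambda_max_m)
    intensity = SIGMA * T ** 4
    return intensity * 4 * pi * radius_m ** 2

# The Sun: peak ~500 nm and radius ~6.96e8 m give T ~5800 K and L ~4e26 W.
print(temperature_from_peak(500e-9), luminosity(500e-9, 6.96e8))
```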

Quasars are stars with very large redshifts, so they are moving at speeds
approaching the speed of light; these range from 0.15c to 0.93c. They are therefore 1.3x10^10 ly away so they are some of the earliest events in the creation of the
universe. Quasi-Stellar Radio Sources are the brightest objects that have been
observed with luminosities 1000x greater than the brightest galaxies. The first
quasars had intense radio emissions, however most emit visible or X-ray radiation.
The high energy is due to a black hole at the centre of the galaxy. In a quasar the
black hole is so abnormally massive that it absorbs gaseous matter at an enormous rate; the matter reaches speeds close to that of light as it approaches the black hole and so radiates
huge amounts of X-ray, visible and radio radiation. This leads to a huge luminosity
almost exclusively from the small (1ly) region at the centre of the galaxy, thousands
of times greater than the rest of the galaxy.
Spectra 12/29/2012 6:35:00 AM

Emission line spectra are observed when light from a gas discharge tube
is analysed using a spectrometer. Each line has a well defined wavelength
and the set of lines are unique to the element in the discharge tube. This
means that by analyzing line spectra of stars and the line spectra of
nebulae heated up by nearby stars, we can identify wavelengths
characteristic of particular elements and so determine the chemical
composition.
A band spectrum is similar to a line spectrum, however it is produced by
molecules. It consists of a band of light produced by multiple wavelengths
separated by small gaps. For example the band spectrum of TiO can be
seen in the spectral class M stars.
Heating an element or passing a current through it causes some electrons to absorb energy equal to the difference in energy levels (different atoms will gain different energies) and to be raised into one of the higher energy levels. This is called excitation and the resultant electrons are in excited states. This is not stable however, so the electron relaxes and moves into a lower energy level. Whilst doing this, a photon with energy equal to the difference in energy levels is emitted. Electrons may move to the ground state
(lowest energy level that can be occupied) by emitting a single photon, or
if it is higher than n=2, it may do so in stages. The fact that there are only
certain well defined frequencies provides evidence that electron energies
are quantised, meaning that they can only have discrete values. These
values vary between elements and so give different spectral lines. The
energies are small and so are given in electron volts – the energy gained
by an electron accelerated by a p.d. of 1V. E = QV so 1 eV = 1.6x10^-19 J.
The Hydrogen line spectra can be organized into groups, the Lyman,
Balmer and Paschen series. The Lyman series is UV and due to transitions
into the n=1 level. The Balmer series is visible and due to transitions into
the n=2 level, this is usually seen in the A and B spectral classes. The
Paschen series is IR and is due to transitions into the n=3 level.
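A small sketch of these series using the standard hydrogen level energies E_n = -13.6 / n^2 eV (assumed here, not quoted in the notes); transitions into n=2 give the visible Balmer lines:

```python
# Sketch of the hydrogen series using the standard level energies
# E_n = -13.6 / n^2 eV; a transition into n = 2 gives the visible Balmer lines.
H = 6.63e-34   # Planck constant, J s
C = 3.0e8      # speed of light, m/s
EV = 1.6e-19   # joules per electron volt

def level_ev(n: int) -> float:
    return -13.6 / n ** 2

def transition_wavelength_nm(n_upper: int, n_lower: int) -> float:
    energy_j = (level_ev(n_upper) - level_ev(n_lower)) * EV
    return H * C / energy_j * 1e9

print(transition_wavelength_nm(3, 2))   # ~656 nm, the red Balmer line
print(transition_wavelength_nm(2, 1))   # ~122 nm, ultraviolet (Lyman series)
```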
Although stars behave like black bodies, their spectra are not perfectly
continuous. There are dark lines crossing the spectra showing some
wavelengths to be missing or having a greatly reduced intensity. These
are absorption spectra and these tell us which elements are absorbing the
light. Only photons with an energy equal to the energy difference
between two levels are absorbed by the interstellar gas and gas in the
outer layers of the star. This raises the electrons into excited states so
well defined frequencies are removed from the spectrum. These electrons
then relax and radiate photons. The intensity in a given direction is
reduced as the re-radiated light is omnidirectional so less goes straight
on. Additionally, the electrons may relax in stages, emitting more than one lower energy photon in a lower part of the EM spectrum.
This can be simulated in the laboratory by using a sodium flame with a
diffraction grating with a low grating spacing (spectrometer), this
produces 2 very close yellow lines. If a bright light source is used then
there will be two dark lines in the same place. Alternatively white light
can be passed through iodine vapour made by heating crystals in a boiling
tube. This results in equally spaced dark bands. This is not due to the
energy level transitions but by transitions in the quantized vibration
states that Iodine can exist in; these are again well defined ‘lumps’ of
energy.

Spectral Type | Temperature (Kelvin) | Colour | Spectral Lines
O | More than 30,000 | Blue | Ionized helium
B | 11,000 - 30,000 | Blue-white | Helium atoms and hydrogen
A | 7,500 - 11,000 | White | Hydrogen and some ionised calcium
F | 6,000 - 7,500 | Yellowish white | Ionised calcium and metal atoms
G | 5,000 - 6,000 | Yellow | Calcium atoms and metal ions such as iron
K | 3,500 - 5,000 | Orange | Metal atoms
M | Less than 3,500 | Red | Molecules of Titanium Oxide producing band spectra
Expansion 12/29/2012 6:35:00 AM

The Doppler effect is the change in perceived wavelength due to relative motion between the source of a wave and the observer. The motion
results in the wave fronts being bunched or stretched causing a change in
wavelength and therefore frequency. Red Shift is the observation that
wavelength of radiation from stars has a longer wavelength than radiation
from a similar source in a laboratory on Earth. This shifts the absorption
lines. This, according to the Doppler effect shows that the source is
receding and this is the main evidence for the Big Bang. The greater the
recessional speed the greater the increase in wavelength. We can calculate recessional speeds using v/c = Δf/f = −Δλ/λ. However this formula is an approximation that only works when the velocity of the source is small compared to the wave velocity.
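A one-line sketch of the red-shift formula above for a receding source, valid for speeds much smaller than c; the observed and rest wavelengths are illustrative:

```python
# Sketch of the red-shift formula: v / c ~ delta_lambda / lambda for a
# receding source, valid only for speeds much less than c.
C = 3.0e8   # speed of light, m/s

def recessional_speed(observed_nm: float, rest_nm: float) -> float:
    return C * (observed_nm - rest_nm) / rest_nm

# A line emitted at 656.3 nm observed at 660.0 nm: receding at ~1.7e6 m/s.
print(recessional_speed(660.0, 656.3))
```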
Some stars are binary stars rotating about their centre of mass, one
recedes and the other advances so the first is red shifted and the other is
blue shifted. Thus the velocities and the rate of rotation can be
determined. Rotating stars and galaxies have spectral lines that are blue shifted from one side and red shifted from the other; this allows us to work out their rotational speed. This has shown that the orbital periods of stars in galaxies can be much higher than expected, meaning that there is mass unaccounted for; this is dark matter and there is little certainty about what it actually is.
Edwin Hubble noted from red shift observations that the further away a
galaxy is, the greater its recessional speed, consistent with the big bang
theory. He worked out that the recessional velocity is directly proportional
to the distance of a galaxy from Earth. So v = Hd, where H is Hubble’s constant. Its value has been updated over the years and is about 65 km s^-1 Mpc^-1.
The Hubble constant allows us to estimate the age of the universe:
The maximum speed of a star at the edge of the universe will be the speed of light, so the edge of the universe is at a distance of c/H – about 4600Mpc. This is equal to the speed of light multiplied by the time it has taken to get there. So ct = c/H and therefore t = 1/H, so
t = 1 / (65 km s^-1 per Mpc) = 3.08x10^22 m / 65000 m s^-1 = 4.8x10^17 s = 1.5x10^10 years.
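A small check of the age estimate above, converting H from km/s per Mpc into SI and taking t = 1/H:

```python
# Sketch of the age estimate: t = 1 / H, converting H from km/s per Mpc to SI.
MPC_IN_M = 3.08e22        # metres in one megaparsec
H0 = 65.0                 # km/s per Mpc (value used in the notes)

H0_si = H0 * 1000.0 / MPC_IN_M          # per second
age_s = 1.0 / H0_si
age_years = age_s / 3.15e7              # ~3.15e7 seconds in a year

print(age_s, age_years)   # ~4.8e17 s, ~1.5e10 years
```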
