Experiment:
Use two microphones connected to a fast timer
A hammer strikes a metal plate generating sound
The near microphone picks up the sound first, followed by the far one after
the sound has travelled an extra metre
The electrical pulses trigger the timer to start and stop
The speed is the extra distance divided by the time taken
Remember to repeat the measurement, find the mean and discard anomalous results for accuracy
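The working for this experiment can be sketched as follows; the timer readings below are made-up illustrative values, not data from the notes.

```python
# Illustrative timer readings (milliseconds) for the extra 1 m of travel.
readings_ms = [2.9, 3.0, 2.9, 3.1, 5.6, 2.9]

# Discard anomalous results before averaging, as advised above.
typical = sorted(readings_ms)[len(readings_ms) // 2]  # median as a reference
kept = [t for t in readings_ms if abs(t - typical) < 1.0]

mean_time_s = sum(kept) / len(kept) / 1000   # convert ms to s
extra_distance_m = 1.0                       # the extra metre travelled
speed = extra_distance_m / mean_time_s       # speed = extra distance / time
```

With these assumed readings the 5.6 ms anomaly is rejected and the result comes out near the accepted 340 m/s.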
A sounding device causes the surrounding air to vibrate at the same frequency as the
device
String instruments have a stretched string
You pluck or bow (tension overcomes the friction between the hair and the
string) the string making it vibrate and setting up a standing wave
In a guitar the string transfers its vibration to the body which vibrates as a
whole
The air inside the hollow body resonates at the same frequency as the strings
The top plate is thin and can vibrate easily
The fundamental frequency of a vibrating string depends on three things:
The mass – more mass per unit length vibrates more slowly – lower frequency
Tension – adjusted by tuning pegs – tighter = faster = higher pitch
The length of the string that can vibrate. Can be controlled between the bridge
and the fret board. Shortening the string reduces the wavelength, giving a higher
frequency.
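The three dependences above are captured by the standard formula f = (1/2L)·√(T/μ). A minimal sketch, with string parameters that are illustrative assumptions rather than values from the notes:

```python
import math

def fundamental_frequency(length_m, tension_n, mass_per_length):
    # f = (1 / 2L) * sqrt(T / mu) for a stretched string
    return (1 / (2 * length_m)) * math.sqrt(tension_n / mass_per_length)

f1 = fundamental_frequency(0.65, 70.0, 0.0006)   # a light guitar string
f2 = fundamental_frequency(0.65, 70.0, 0.0024)   # 4x mass/length -> half f
f3 = fundamental_frequency(0.325, 70.0, 0.0006)  # half length -> double f
```

The comparisons confirm the bullet points: more mass per unit length lowers the frequency, a shorter vibrating length raises it.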
When two musical notes play together they superpose. If they have
slightly different frequencies then they periodically go in and out of phase,
leading to constructive and destructive interference. You get a regular rising
and falling of amplitude. The beat frequency is equal to the difference in the
frequencies. You can use a tuning fork and adjust the frequency with the
tuning pegs until the beats disappear and the instrument is in tune.
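A quick numerical sketch of beats (the two frequencies below are illustrative): the amplitude envelope collapses every half beat-period.

```python
import math

f1, f2 = 440.0, 444.0           # Hz; illustrative values
beat_frequency = abs(f1 - f2)   # beats per second = difference in frequencies

def combined(t):
    # Resultant displacement = sum of the individual displacements
    return math.sin(2 * math.pi * f1 * t) + math.sin(2 * math.pi * f2 * t)

# Half a beat period after being in phase, the two waves cancel.
quiet_time = 1 / (2 * beat_frequency)  # 0.125 s here
```

At `quiet_time` the two waves are in antiphase and the resultant displacement is (numerically) zero, i.e. the quiet moment between beats.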
Graphically when two waves meet the resultant displacement is the vector
sum of the individual displacements.
Two waves are in phase if they are both at the same point in their wave
cycle. They have the same displacement and velocity. If waves come from the
same oscillator then they will be in phase.
You can reflect microwaves off a metal plate to set up a standing wave. Then
you can find the nodes and antinodes by moving a probe between the
transmitter and the plate. Connect the probe to a loudspeaker/meter.
Loud and Soft 12/29/2012 6:35:00 AM
Quantisation error is due to the fact that a range of analogue values are
represented by one digital value. This can be reduced by increasing the
number of quantisation levels by using more bits. The most significant bit
is the sign bit: 0 represents negative voltage, 1 positive voltage. To increase
the fidelity, increase the sampling rate and quantisation levels – i.e. raise the bit rate
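As a sketch of the numbers involved (CD-style figures, used here as assumptions rather than values from the notes):

```python
# More bits per sample -> more quantisation levels -> less quantisation error.
sampling_rate_hz = 44100   # samples per second (CD audio, assumed)
bits_per_sample = 16

quantisation_levels = 2 ** bits_per_sample        # 65536 levels
bit_rate = sampling_rate_hz * bits_per_sample     # bits per second, one channel
```

Raising either factor raises the bit rate, which is the cost of the extra fidelity.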
CDs use laser light of wavelength around 780nm. They have a spiral track 0.5
microns wide with a spacing of 1.6 microns. The track has a series of
bumps and lands which represent binary data. The bumps are ¼
wavelength high, so light hitting the land travels ½ a wavelength further
than that reflected off the bump. The path difference makes it 180°
out of phase, so there is destructive interference and the sensor doesn't
detect anything = binary 0. There is no destructive interference from the
land so that is sensed as a binary 1.
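The path-difference argument above can be checked in a couple of lines (the wavelength is an assumed value for the light inside the disc):

```python
wavelength = 780e-9                 # m; laser wavelength in the medium (assumption)
bump_height = wavelength / 4
path_difference = 2 * bump_height   # light goes down to the land and back up

# Half a wavelength of path difference = 180 degrees out of phase.
phase_difference_deg = (path_difference / wavelength) * 360
```

Whatever wavelength is assumed, a quarter-wavelength bump always gives exactly 180° and hence destructive interference.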
The laser is kept on track by a tracking mechanism. The diffraction
grating produces 3 beams. The central beam reads the data, the two first
orders are side beams that reflect from the CD surface, they should have
equal intensities on average, if not then an error signal is generated to
correct the tracking. The disc is rotated by a motor, faster at the centre
and slower at the edges so data is read at the same rate.
DVDs are similar but use tighter spirals and shorter bumps due to shorter
wavelength light. They also have a more efficient tracking system. They
can also have two layers so their capacity is much greater. HD DVDs use
two layers of dye and different coloured lasers, and are read from the top and
bottom of the same disc.
During writing a laser burns the dye to create a non-reflecting layer; this
is then read by a less powerful laser. Rewritable discs have a special
compound that can be remelted so pits can be filled in again.
In Young's double slit experiment, light from a coherent source (same frequency
and wavelength and a fixed phase difference) diffracts at two slits and produces
an interference pattern on a screen with a bright central maximum surrounded by
alternating bright and dark fringes. The narrower the slit separation, the
wider the pattern. The fringes are formed by superposition. Where there
is a path difference of a whole number of wavelengths the waves
are in phase and there is constructive interference; if there is a path
difference of an odd number of half wavelengths then the waves are 180°
out of phase, so there is destructive interference. The fringe spacing is
W = λD/s, where D is the distance to the screen and s is the slit separation.
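The fringe-spacing formula as a quick calculation; the example values are assumptions, not from the notes:

```python
wavelength = 600e-9  # m, orange light
D = 2.0              # slit-to-screen distance, m
s = 0.5e-3           # slit separation, m

W = wavelength * D / s                     # fringe spacing, m
# Narrower slit separation -> wider pattern, as stated above.
W_narrower = wavelength * D / (s / 2)
```

Halving the slit separation doubles the fringe spacing, matching the statement that narrower separation widens the pattern.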
Time division multiplexing is used to send several digital signals along the
same transmission path. The signal from each device is split up into
packets of bits by a multiplexer, which sends them in sequence. Upon
reception, they are reassembled by a demultiplexer and then passed on
to the appropriate device.
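The interleave-and-reassemble idea can be sketched as a toy multiplexer (a simplified model, not a real protocol):

```python
def multiplex(streams):
    # Take one bit from each device's stream in turn (equal lengths assumed).
    return [bit for group in zip(*streams) for bit in group]

def demultiplex(stream, n_devices):
    # Every n-th bit belongs to the same device.
    return [stream[i::n_devices] for i in range(n_devices)]

devices = [[1, 0, 1], [0, 0, 1], [1, 1, 1]]
line = multiplex(devices)            # one shared transmission path
recovered = demultiplex(line, 3)     # reassembled on reception
```

Each device's bits come back out exactly as they went in, even though only one path carried all three signals.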
Radio Transmission
Ground waves are of two types – surface waves and space waves.
Surface waves are used with waves of wavelength typically around 1 km
these reach the receiver by diffracting around the surface of the earth and
due to their large wavelength they can also diffract around objects
smaller than their wavelength such as hills and buildings. However there
is a slight shadow behind these. The fact that it diffracts gives it a long
range and one transmitter can cover a whole country. However the
strength of the surface wave attenuates by inducing a voltage in the
earth’s surface, this is minimized by a vertical dipole aerial – so the wave
is vertically polarized – electric vertical and magnetic horizontal – this
reduces the contact of the electric field with the surface of the earth.
Space waves are used for TV/FM (VHF) radio. They have wavelengths of a
few metres and can't diffract around hills or large buildings. These only
travel through line of sight or – due to the density and therefore
refractive index variation of the atmosphere – can refract, giving reception
15% beyond the visual horizon (the radio horizon).
Sky waves can also be used for transmission. When EM waves are sent
towards the ionosphere (ionized atoms 90-300km above Earth) they are
refracted by it. Longer wavelengths (HF and MW) travel
slower and are refracted more, so they are totally internally reflected; this
leaves an area where nothing can be received – a skip zone. The waves
can be reflected back and forth, leaving several skip zones. Whether a wave is
reflected depends on the frequency, the power of the transmitter, the
angle of incidence and the level of ionization, which varies from time to
time. On the other hand, shorter wavelengths have more energy and so
are faster; these are refracted less and can pass through the ionosphere,
so they can be used for satellite communications.
AM is broadcast on LW (200kHz), MW (1000kHz) and SW (10MHz). LW
and MW have a channel bandwidth of 9kHz and it is 5kHz for SW. The
channel bandwidth is twice the baseband bandwidth. AM is quite
susceptible to EM interference which is difficult to filter so it is used for
speech where fidelity doesn’t need to be too high. Additionally at night
the surface waves behave as sky waves and travel much further so the
transmitters must reduce their power or use directional aerials to avoid
interference.
FM is much less susceptible to noise which affects the amplitude of
waves. This gives far better fidelity. It is usually used on VHF. It has a
greater bandwidth (200kHz radio/6MHz TV) so can carry much more
information, allowing for stereo transmission. However it can be
diffracted by small objects and reflected, causing multipath interference.
Radio transmission uses frequency division multiplexing so there is a limit
to the number of transmissions that can coexist as the bandwidth is
limited. This is controlled by the government. Copper cables/optical fibres
don’t have the same bandwidth problems as bandwidth can be expanded
by installing more cable.
Sky waves can also be used for satellite communications acting as a relay
with a UHF (4-20GHz) uplink and lower frequency downlink so that there
is less rain and atmospheric attenuation. Geosynchronous/stationary
orbits are used, 35800km high with a 24 hour period, so the satellites remain
fixed above a certain point of the Earth and dishes don't need to change
direction. They have to be positioned carefully so that they don't interfere
with neighboring satellites.
The satellite dish makes radiation spread out like when passing through a
single slit, with the dish diameter acting as the aperture. sin θ = λ/a gives
the angle from the normal to the first minimum and so can be used to
calculate the satellite footprint, which is the portion of the Earth over
which a satellite delivers a specified signal power. Using a small dish will
give a large footprint but with low intensity.
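A rough estimate along those lines; the wavelength, dish size and flat-Earth geometry below are simplifying assumptions:

```python
import math

wavelength = 0.025      # m (roughly a 12 GHz downlink, assumed)
dish_diameter = 0.5     # m, the aperture a
orbit_height = 3.58e7   # m, geostationary altitude

theta = math.asin(wavelength / dish_diameter)      # angle to first minimum
footprint_radius = orbit_height * math.tan(theta)  # crude footprint estimate, m

# A larger dish narrows the beam and so shrinks the footprint.
theta_big_dish = math.asin(wavelength / 2.0)
```

The half-metre dish spreads the beam over a footprint nearly 2000 km across, while the 2 m dish gives a much narrower, more intense beam.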
There are also more than 10000 pieces of space debris such as ejected
rockets and disused/broken-up satellites in orbit. The UNOOSA has said
that states are liable for any damage caused by their space debris and that
contamination of space should be avoided.
Satellites are expensive to set up and maintain however they can relay
many messages at once, they are reliable and cover a large area with one
transmission.
Optics
Optical fibres are threads of glass/plastic that carry light or infrared with
minimal attenuation using TIR. A pulse of light is transmitted at one end
and received at the other. The core is surrounded by a transparent
cladding of a lower refractive index. Any rays hitting the cladding at an
angle more than the critical angle will be totally internally reflected.
Around that there is a plastic sheath that strengthens and prevents
scratches that could leak light. It also ensures that light doesn’t pass from
one fibre to another if there is contact as that would lead to very insecure
communication. Monochromatic light is used to avoid dispersion as
different wavelengths travel at different speeds when not in a vacuum.
There is also the problem of multipath dispersion as rays travelling in the
centre travel shorter distances than those reflected so the
bitrate/bandwidth per km (pulses recognizable as separate per second) is
reduced. This can be compensated for by making the cores thinner or by
using a graded index fibre in which the refractive index is gradually
reduced from the centre to the edges. The light near the edges therefore
travels faster over a longer distance, so the pulse is not as spread out; this
allows a bandwidth of 1 GHz per km. The rays also curve due to
refraction. There is also interference, as only certain angles of incidence,
called modes, allow rays to be transmitted by constructive interference;
other angles cause destructive interference.
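The core/cladding arrangement described above fixes the critical angle via sin c = n_cladding/n_core; a sketch with typical (assumed) index values:

```python
import math

n_core = 1.48        # refractive index of the core (assumed typical value)
n_cladding = 1.46    # lower index cladding, as required for TIR

# sin(c) = n_cladding / n_core at the core/cladding boundary
critical_angle_deg = math.degrees(math.asin(n_cladding / n_core))
# Rays hitting the boundary at more than this angle (from the normal)
# are totally internally reflected and stay in the core.
```

Because the two indices are close, the critical angle is large (around 80°), so only rays travelling nearly parallel to the axis are guided.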
Optical fibres are usually used when high bandwidth or long distance
communication is required; they are also much lighter. They have low
losses, so there can be large distances between repeaters, and they can't
induce signals in one another so there is no crosstalk (as in copper
cables). They are also safe near high voltage equipment, having high electrical
resistance, and are very secure. However they are expensive and not
easy to join together. Copper cables are used for short distances as they
are very cheap and many are already in place. They are easier to join
together and can carry electrical power as well.
Atoms
There have been many models in the past to try to explain what matter is
made from. Democritus proposed the existence of small indivisible particles
called atoms. He postulated that these discrete particles gave matter its
properties. On the other hand, alchemists were keen to assert that matter
was made up of four key elements – earth, fire, air & water.
In the 17th Century Newton suggested that new particles arise when particles
reform, and John Dalton showed that the mass of products is the same as
the mass of reactants. Proust found the law of constant composition, which
said that the elements in a compound are in the same ratio of mass,
regardless of quantity. Dalton said, in his law of multiple proportions, that
when elements combine to form multiple compounds then the mass of one
that combines with a fixed mass of the others to produce the different
compounds are in simple whole number ratios. These two laws were the
experimental evidence for atoms as they suggested that at a very small level
there were discrete entities that could not be broken into parts.
Other evidence also pointed at the existence of atoms and molecules:
When bromine vapour from a phial is released into an evacuated gas
diffusion tube then it quickly fills up. If there is air then it diffuses slowly as
collisions with air molecules hinder its progress. Diffusion can’t be explained
if air is a homogenous mass however moving particles allow for diffusion.
The speed shows how fast atoms move and this also shows that there are
spaces between gas molecules for other molecules to move past one
another.
Evidence for the electron came from the glowing discharge tube with
500V between the electrodes at both ends. As the pressure was lowered
from 100000Pa to 1Pa the tube became dark and there was a glow at the
anode. This was thought to be because of radiation from the cathode, so
they were called cathode rays. Crookes thought that these were particles but
Hertz showed that they could pass through gold sheets, so it seemed
unlikely that they were particles. J.J. Thomson used an electric and
magnetic field to show that these were charged particles with about
2000x the charge to mass ratio of protons; they were called electrons. This
can also be shown using a heated cathode in what's known as thermionic
emission. In the Maltese cross experiment a metal cross was placed
between the cathode and a phosphorescent screen. It cast a shadow that
could be deflected by magnets, a principle used in CRT TVs. These
particles had come from the cathode and so must have come from the
atoms, so atoms have substructure.
Rutherford then conducted (with Geiger and Marsden) an alpha scattering
experiment. He fired a narrow beam of alpha particles at a thin sheet of
gold foil. If Thomson's plum pudding model (electrons embedded in a
cloud of positive charge) was true then the particles should all pass through with
very little deviation. This was seen; most particles were deviated by very
small angles. However what was surprising was that 1 in 8000 alpha
particles was deflected by more than 90°. This led him to believe that
most of the atom is empty space and that most of the mass of the atom
is concentrated in a very small space at the nucleus. He also concluded
that the nucleus has a positive charge (the atom is neutral) and so repels
alpha particles, resulting in the deviation. Bohr suggested that electrons
orbit around this nucleus. This showed the radius of a gold nucleus to be
7fm whereas that of Hydrogen is 1.2fm. It must be noted that if the alpha
particles have very high energy then they are captured, as the strong
force dominates.
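The notes don't show the working, but one standard way to estimate the scale of nuclear radii like those quoted above is the distance of closest approach: all the alpha particle's kinetic energy becomes electrostatic potential energy. A sketch assuming a 5 MeV alpha on gold:

```python
import math

e = 1.6e-19          # elementary charge, C
epsilon0 = 8.85e-12  # permittivity of free space, F/m
Z_gold = 79
ke_joules = 5e6 * e  # 5 MeV alpha particle (assumed energy)

# KE = (2e)(Ze) / (4 pi eps0 r)  =>  r at closest approach
r = (2 * e) * (Z_gold * e) / (4 * math.pi * epsilon0 * ke_joules)
r_fm = r * 1e15      # convert to femtometres
```

This gives a few tens of fm, an upper bound on the gold nucleus's size; higher-energy alphas probe closer, down to the 7 fm scale where the strong force takes over.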
However Rutherford wasn't finished yet; he found that when nitrogen was
exposed to an alpha source, a new, even more penetrating particle (the
proton) was emitted that produced a flash on a zinc sulphide screen. As a
result the Nitrogen changed into Oxygen-17 by transmutation. Alpha
particles were found to be Helium nuclei and the tracks of a proton in a
cloud chamber showed its charge and its mass. However if the nucleus
had solely protons then there would be unaccounted mass in the heavier
elements as the Helium nucleus is four times as heavy as the Hydrogen-1
nucleus but only has twice the charge.
The initial explanation was that there were also electrons in the nucleus
which cancelled out some of the positive charge, so a helium-4 nucleus
would have 4 protons and 2 electrons. However this was soon revised.
Chadwick and Rutherford believed in a neutral particle in the nucleus
accounting for the extra mass. Chadwick conducted an experiment where alpha
particles bombarded beryllium, which resulted in transmutation and a
weakly ionizing radiation. When this was incident on proton-rich paraffin
wax (Hydrogen nuclei), protons were emitted. One theory was that
these were extremely energetic gamma rays. Chadwick disagreed and
showed that these were neutral particles with a similar mass to protons.
He acknowledged that it could still be gamma radiation if the well
established laws of conservation of energy and momentum were flawed.
Black bodies are perfect emitters for their temperature and have a peak
wavelength depending on that temperature. Classical Physics predicted that
at short wavelengths the intensity of radiation would tend to infinity (there
were infinitely many wavelengths that could fit into the cavity) which was
absurd and didn’t match observations. This failure to predict what happens
was known as the ultraviolet catastrophe. Planck suggested that the electron
oscillations in hot bodies could only take discrete energy values and nothing in
between; these were multiples of hf (i.e. nhf) where f is the fundamental frequency of
that oscillator. If the discrete value was more than what should be
emitted at that temperature then it would not be emitted. This resolved the
problem and produced a prediction that matched the practical curve.
However there was no evidence for this, and it wasn't accepted until the idea of
lumpy or quantized energy was used to explain the photoelectric effect.
When EM radiation is incident on the surface of a metal then electrons are
liberated. This can be seen on a charged gold leaf electroscope where loss of
electrons results in the gold leaf falling. Free (delocalized) electrons absorb
the energy from the EM radiation, making them vibrate; if there is enough
energy then the electron overcomes the attraction to the metal (potential
well) and is liberated. If ordinary light is used then there is no photoelectron
emission and if a glass plate (absorbs UV leaving only visible light) is placed
in the way then there is also no emission. This demonstrates that there must
be a threshold frequency. Additionally the electrons are emitted with a
variety of kinetic energies and this is proportional to the frequency of the
incident radiation. It was also noted that very intense visible light resulted in
no emission although weak UV resulted in instant emission. This suggested
that energy must be quantized. Einstein explained that the energy in EM
radiation could only be transferred in fixed denominations called photons.
The energy of a photon was proportional to the frequency with the ratio
being the Planck constant. He suggested that a minimum amount of energy
was needed to liberate an electron, known as the work function (normally
measured in eV; 1eV = 1.6×10⁻¹⁹J), and that this energy needed to be delivered in
one instant – energy could not practically be stored for more than 10
nanoseconds. If the frequency is too low then the electron will vibrate and
emit another photon and the metal will heat up; if the energy is above or equal
to the work function then any extra is converted into KE.
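Einstein's reasoning above amounts to KE_max = hf − φ, with no emission at all below the threshold frequency. A sketch, with a work function roughly that of sodium as an assumption:

```python
h = 6.63e-34        # Planck constant, J s
e = 1.6e-19         # elementary charge, C
phi = 2.3 * e       # work function, J (about 2.3 eV, an assumed metal)

f = 1.0e15          # Hz, ultraviolet light
photon_energy = h * f

if photon_energy >= phi:
    ke_max = photon_energy - phi   # the extra becomes kinetic energy
else:
    ke_max = 0.0                   # below threshold: no emission at all

threshold_frequency = phi / h      # minimum frequency for any emission
```

However intense the light, a photon below the threshold frequency liberates nothing; above it, the surplus per photon appears as the electron's maximum KE.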
This could not be explained by the wave theory of light that postulated that
the energy of a wave was proportional to its intensity (no reason for
threshold frequency) and also suggested that there would be a gradual
buildup of energy until the electron was liberated. However the kinetic
energy of the emitted electrons had nothing to do with intensity and there
was no time delay for emission. Also even after long exposure to visible light
there was no emission. This experiment suggested wave-particle duality, as it
showed light to behave as a particle; however interference patterns are
endemic to waves, not particles. To
determine where EM energy goes, treat it as a wave; to determine how it
interacts with matter, treat it as a particle. Thus intensity is the number of photons
arriving per second, and the greater the amplitude at a point, the greater
the probability of detecting a photon there.
De Broglie assumed that if waves behaved like particles then the converse
would be true. He suggested that the wavelength of a mobile particle was
inversely proportional to its momentum with the ratio being the Planck
constant. This was proven later as electrons passing through graphite
appeared to be diffracted – an interference pattern. This showed that the
electrons had higher probabilities of being in some places compared to
others; this happened even when electrons were fired one by one. Increasing
the voltage and therefore momentum squashes the maxima rings together.
This would be expected if the wavelength were to decrease, as suggested by
de Broglie. It is crucial to note that there will only be diffraction if the particle
interacts with an object with a similar size to its de Broglie wavelength so
massive objects don’t diffract as there is nothing small enough to make them
diffract. This is also important in microscopic imaging as diffraction results in
blur on an image so to resolve tiny details you need a shorter wavelength
and thus a greater resolution. Light has quite a large wavelength but by
accelerating electrons we can get wavelengths small enough to look at
strands of DNA.
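The de Broglie relation λ = h/p for an accelerated electron can be sketched as follows; the accelerating voltage is an illustrative assumption:

```python
import math

h = 6.63e-34     # Planck constant, J s
m_e = 9.11e-31   # electron mass, kg
e = 1.6e-19      # elementary charge, C

V = 5000                         # accelerating voltage (assumed), volts
p = math.sqrt(2 * m_e * e * V)   # from eV = p^2 / 2m (non-relativistic)
wavelength = h / p               # de Broglie wavelength, m

# Quadrupling the voltage doubles the momentum, halving the wavelength,
# which squashes the diffraction rings together as described above.
wavelength_high_v = h / math.sqrt(2 * m_e * e * 4 * V)
```

At 5 kV the wavelength is around 0.02 nm, far below that of visible light, which is why accelerated electrons can resolve structures as small as DNA strands.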
Bohr's original orbital model had a problem: when charged particles are
accelerated they emit radiation, so Bohr's electrons should spiral into the
nucleus. Bohr said that electrons could only have discrete energies and
nothing between, he used the wave particle model to suggest that the
nucleus and edge of the atom were nodes where the probability of finding an
electron is zero. Electrons must have wavelengths with a whole number of
loops across the atomic radius, so the kinetic energy of an electron is
E = n²h²/(8mr²),
where n is the number of loops in the stationary wave and the quantum
shell. To change from one level to another, photons are emitted with energy
equal to the difference between the electron energies; this explained line spectra.
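The quoted expression E = n²h²/(8mr²) can be evaluated directly; the atomic radius below is an assumed round value, not from the notes:

```python
h = 6.63e-34      # Planck constant, J s
m_e = 9.11e-31    # electron mass, kg
r = 1.0e-10       # m, rough atomic radius (assumption)

def energy(n):
    # E_n = n^2 h^2 / (8 m r^2): kinetic energy of the n-loop standing wave
    return n ** 2 * h ** 2 / (8 * m_e * r ** 2)

# Dropping from n = 2 to n = 1 emits a photon carrying the difference.
photon_energy = energy(2) - energy(1)       # = 3 * energy(1)
photon_frequency = photon_energy / h        # a definite spectral line
```

Because the allowed energies are discrete, only definite photon frequencies appear, which is the origin of line spectra in this model.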
Standard Model
The standard model also specifies four forces of nature each with their own
exchange particles (gauge bosons), these are necessary as you can’t have
instantaneous action at a distance. These are virtual particles that exist for a
very short time and mediate the force and information.
The strong force acts between hadrons and quarks and its exchange particle
is the gluon or the +/- pion. It acts equally strongly between all hadrons. It
overcomes the strong EM repulsion in the nucleus, although at very short range it
is repulsive or else it would reduce the nucleus to a point.
The Electromagnetic force (SF x10^-2) acts between charged particles with
the exchange particle being the massless photon (so infinite range). These
impart momentum and thus can cause attraction or repulsion.
The weak force (SF x10^-6) affects all particles, its gauge bosons are W+,
W- and Z. W bosons carry charge and are exchanged during neutron decay
and subsequently decay into an electron and electron antineutrino. The weak
force is the only way for a quark or lepton to change type. The W boson has
a mass 100x that of a proton so its range is 0.01 times the diameter of a
proton. It takes a lot of energy to create a W boson so it exists for a very
short time and can’t travel far.
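The range argument above can be made rough and quantitative with an uncertainty-principle estimate (a sketch, not the notes' own calculation): the virtual W can exist for about ħ/(mc²) and travels at most c in that time.

```python
hbar = 1.05e-34                    # reduced Planck constant, J s
c = 3.0e8                          # speed of light, m/s
m_w_energy = 80.4e9 * 1.6e-19      # W boson rest energy, J (about 80 GeV)

lifetime = hbar / m_w_energy       # roughly how long the virtual W can exist
range_m = c * lifetime             # furthest it can carry the force

proton_diameter = 1.7e-15          # m, rough figure (assumption)
range_in_proton_diameters = range_m / proton_diameter
```

The estimate comes out around a thousandth of a proton diameter, the same tiny scale the notes describe: a heavy boson means a very short-range force.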
Gravitational force is a very weak force (SF x10^-40) so is usually ignored,
however it is long range and is the dominant force that acts between
galaxies. The graviton has been postulated as a gauge boson but there is no
evidence.
These forces are currently distinguishable, but at high energies the EM force
and weak force are indistinguishable; they merge to form the electroweak
force. At the big bang, none of the forces were separate, but they separated
as the universe cooled. The challenge is to incorporate the strong force with
the electroweak force to produce a Grand Unified Theory and then also
incorporate gravity to produce a theory of everything.
Creation
The current scientific view is that the universe was created at the big
bang. We can gather evidence to support this from cosmology as when
we look into space we are essentially looking back in time due to the
delay for the photons to reach earth.
The wavelength of light from galaxies is longer than expected if they were
stationary, suggesting that they are moving apart and were once
one. This is known as red shift.
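The standard relation behind this (not spelled out in the notes) is z = Δλ/λ, with recession speed roughly v = zc for small z. A sketch with an assumed observed line:

```python
c = 3.0e8            # speed of light, m/s
emitted = 656.3e-9   # m, hydrogen-alpha line measured in the lab
observed = 670.0e-9  # m, the same line from a receding galaxy (assumed value)

z = (observed - emitted) / emitted   # red shift
v = z * c                            # approximate recession speed, m/s
```

A shift of a mere 14 nm already corresponds to a recession speed of several thousand km/s, which is why red shift is such a sensitive probe of expansion.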
The chemical composition of galaxies is consistent with that predicted by
the big bang theory which says that composition changed as the
temperature of the universe fell.
Cosmic microwave background radiation was discovered by accident when
an omnidirectional signal of wavelength 7.4cm was detected from space. This
wasn't a fault in the electronics. The distribution across all wavelengths is
consistent with a black body temperature of 2.7K (confirmed for all
wavelengths by the Cosmic Background Explorer satellite). CMB is the
oldest detectable signal and was originally very short wavelength, but as
the universe expanded the photons lost energy and are now in the
microwave region as predicted. They have been red shifted from visible to
microwave.
Mass is thought to be because of the way particles interact with the Higgs
boson. Heavier particles interact more strongly and so have greater
inertia. This means that they require more force to accelerate. Massive
particles are harder to move as they interact more strongly with the Higgs
field that fills space, which acts like a sort of friction. This is still a theory,
although the Higgs particle does now appear to have been found.
It is interesting that only 4% of the mass of the universe is accounted for.
However the motion of stars suggests that there is much more mass. 20-
25% is thought to be weakly interacting massive particles – non baryonic
dark matter. The rest is dark energy which is responsible for increasing
the rate at which the universe is expanding.
The fate of the universe depends on its density in relation to the critical
density. At the critical density (very unlikely) the rate of expansion will
decrease with time and the universe will expand to a maximum limit (a
flat universe). If the density is more than the critical density then the
universe is closed, it will stop expanding and then contract, ending with a
big crunch. On the other hand if the actual density is lower than the
critical density then the universe would be open and continue to expand
forever.
However our current theories are tenuous and are open to change if any
new evidence suggests something different; the dark energy factor raises
a lot of questions, and it is still unclear whether leptons and quarks are
fundamental particles. Hopefully the high energies of the LHC will produce
results.
Stellar Radiation
We can learn a lot about stars by studying the radiation they emit. From the
wavelengths and brightness, we can determine the temperature and composition of
the star.
Luminosity is the total power radiated by the star. This depends on the surface
temperature and therefore surface intensity as well as the surface area. Luminosity
(W) = Intensity (W/m²) × Surface area (m²).
Brightness is simply the intensity that we perceive and therefore the energy per
second that reaches our pupils. A star may be bright because it has a high
luminosity or because it is very close to the Earth and vice versa. Luminosity is often
measured relative to the sun. We can measure brightness on a scale known as
apparent magnitude. The scale is counterintuitive, as a first magnitude star is 100x
brighter than a sixth magnitude star. So a decrease of 1 in apparent magnitude means
that the light from the star is 2.512 (100^0.2) times more intense. Our sun has an
apparent magnitude of -26.7, making it 2.512^27.7 times brighter than a first magnitude
star. This is a subjective scale, so to compare power, other than by luminosity, we
have a similar scale called absolute magnitude which allows for comparison. The
absolute magnitude is the brightness or apparent magnitude of the star if it were 10
parsecs away (1 parsec = 3.26 light years = the distance giving a parallax angle of 1
arc second). The data is obtained from observatories on high mountains or in space,
as the atmosphere absorbs radiation and so reduces the apparent brightness.
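The magnitude arithmetic above can be checked directly: each step of 1 in apparent magnitude is a factor of 100^0.2 ≈ 2.512 in intensity.

```python
step = 100 ** 0.2          # intensity ratio per magnitude step, about 2.512

# First magnitude vs sixth magnitude: five steps = a factor of exactly 100.
ratio_first_to_sixth = step ** 5

# The sun (m = -26.7) vs a first-magnitude star (m = 1): 27.7 steps.
sun_vs_first_magnitude = step ** 27.7
```

The five-step factor recovers the defining 100x, and the sun comes out roughly 10^11 times brighter in apparent terms than a first-magnitude star.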
Stars are classified according to spectral class which is dependent on their surface
temperature. Very hot stars look blue as the intensity of short wavelengths is higher,
hot stars look yellow and white, and cool stars emit mostly infrared with some red so
appear red. The spectral classes are O, B, A, F, G, K and M.
The Hertzsprung-Russell Diagram shows patterns in the types of stars. The x axis is
surface temperature from high to low with the temperature starting at 40000K and
halving each unit. On the y axis there is relative luminosity (to the sun) with 1 in the
centre and a hundred fold increase for each unit up. Similar types of stars occur in
groups, this suggests that there is a particular sequence of events in the evolution
and death of a star.
Main sequence stars are dwarf stars like the sun that produce energy by the fusion
of hydrogen into helium. These make up over 80% of all stars.
Red Giants are cooler than the sun so appear red, however they are much more
luminous (100x) as they have a much greater surface area due to a large diameter
as they are giants.
White Dwarfs are the remains of old stars. Although they were very hot when they
died, they have a low luminosity due to a small surface area. These take billions of
years to cool down.
Supergiants are very large and very luminous. A supergiant at the same
temperature as the sun is 90000x more luminous, so has a diameter 300x greater.
Stars form from an interstellar nebula. There is gravitational attraction between the
hydrogen nuclei and the loss of potential energy leads to an increase in temperature
and kinetic energy. The gas becomes denser and when the temperature is high
enough fusion begins to produce helium, thus raising the temperature, eventually
these can fuse too, raising the temperature further. Luminous main sequence stars
live for a million years, others like our sun live for 10000 million years.
Eventually our sun and stars like it will collapse as the Hydrogen in the core is used
up. This will raise the temperature so Helium in the core fuses and Hydrogen in the
outer layers begin to fuse. This raises the temperature of the outer layer which
expands. The expansion causes the temperature to fall so it becomes a red giant.
Then after a long time the fusion of helium raises the temperature of the core
further, producing heavier elements; the star then collapses to form a small hot
white dwarf.
If the star has a mass more than 1.4x that of the sun then instead of settling as
a white dwarf it can collapse suddenly and explode as a bright supernova. Our
solar system could have arisen as a result of an exploding supernova rather than
from interstellar matter alone, given the existence of heavy elements.
Black bodies are objects that absorb all radiation that falls onto them and radiate it
out as thermal radiation according to their temperature. This produces a continuous
spectrum of wavelengths with the peak wavelength depending on the temperature.
Cold stars look red as their peak is closer to the red end. Stars at 6000K look
yellow-white and stars at 12000K look blue as the peak wavelength is closer to the
blue end. In a laboratory continuous spectra can be investigated by using a
spectrometer fitted with a diffraction grating and a filament lamp through which the
current is increased. If the temperature of a black body is doubled, the power and
therefore intensity increase 16-fold; power is therefore proportional to the fourth
power of temperature. The sun's temperature is 5800K and its intensity is
6.5 × 10⁷ W/m². Its luminosity is therefore 4.0 × 10²⁶ W. Additionally, 43% of the
sun's radiation is visible,
unlike the Earth which is mostly infrared. 37% is near infrared and only 7% is
ultraviolet.
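The 16-fold figure and the quoted solar intensity can be checked numerically. A minimal sketch, using the standard value of the Stefan–Boltzmann constant (not given in the notes):

```python
# Stefan's law: power radiated per unit area I = sigma * T^4,
# so doubling T multiplies the power by 2^4 = 16.
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4 (standard value)

def intensity(T):
    """Power radiated per unit area of a black body at temperature T (kelvin)."""
    return SIGMA * T**4

T_sun = 5800  # K, as quoted in the notes
print(intensity(2 * T_sun) / intensity(T_sun))  # -> 16.0
print(intensity(T_sun))  # ~6.4e7 W/m^2, consistent with the quoted 6.5e7 W/m^2
```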
Wien's law states that for a black body λ_max × T = 0.0029 m K.
To calculate the luminosity of a star we find how intensity varies with wavelength
and identify the wavelength at which the maximum intensity occurs. We then use
Wien’s law to determine the temperature and from this the intensity can be
determined. If the radius is known from astronomical observations then the surface
area can be found and hence the luminosity.
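The procedure just described can be sketched end to end in code. The Sun's peak wavelength and radius used here are assumed illustrative inputs, not values from the notes:

```python
import math

SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4 (standard value)
WIEN = 2.9e-3    # Wien's constant, m K (the notes' 0.0029 m K)

def star_luminosity(lambda_max, radius):
    T = WIEN / lambda_max               # Wien's law gives the surface temperature
    I = SIGMA * T**4                    # Stefan's law gives the intensity, W/m^2
    return 4 * math.pi * radius**2 * I  # luminosity = intensity x surface area

# Sun: peak wavelength ~500 nm, radius ~6.96e8 m (assumed values)
L = star_luminosity(500e-9, 6.96e8)
print(f"{L:.1e} W")  # ~4e26 W, consistent with the notes' 4.0e26 W
```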
Quasars are star-like objects with very large redshifts, so they are moving at speeds
approaching the speed of light, ranging from 0.15c to 0.93c. They are therefore up to
about 1.3 × 10^10 ly away, so we are seeing some of the earliest events in the history
of the universe. Quasi-Stellar Radio Sources are the brightest objects that have been
observed, with luminosities 1000× greater than those of the brightest galaxies. The
first quasars discovered had intense radio emissions; however, most emit visible or
X-ray radiation. The high energy is due to a black hole at the centre of the galaxy. In
a quasar the black hole is so abnormally massive that it absorbs gaseous matter at an
enormous rate; the matter reaches speeds close to that of light as it approaches the
black hole and so radiates huge amounts of X-ray, visible and radio radiation. This
leads to a huge luminosity, almost exclusively from a small (1 ly) region at the centre
of the galaxy, thousands of times greater than that of the rest of the galaxy.
Spectra 12/29/2012 6:35:00 AM
Emission line spectra are observed when light from a gas discharge tube
is analysed using a spectrometer. Each line has a well-defined wavelength
and the set of lines is unique to the element in the discharge tube. This
means that by analysing the line spectra of stars and of nebulae heated
by nearby stars, we can identify wavelengths characteristic of particular
elements and so determine their chemical composition.
A band spectrum is similar to a line spectrum, however it is produced by
molecules. It consists of a band of light produced by multiple wavelengths
separated by small gaps. For example the band spectrum of TiO can be
seen in the spectral class M stars.
Heating an element or passing a current through it causes some electrons
to absorb energy equal to the difference between energy levels (different
atoms will gain different energies) and to be raised into one of the higher
energy levels. This is called excitation and the resultant electrons are in
excited states. This is not stable, however, so the electron relaxes and
moves into a lower energy level. Whilst doing this, it emits a photon with
energy equal to the difference in energy levels. Electrons may move to
the ground state (the lowest energy level that can be occupied) by
emitting a single photon, or, if starting higher than n=2, may do so in
stages. The fact that there are only certain well-defined frequencies
provides evidence that electron energies are quantised, meaning that
they can only have discrete values. These values vary between elements
and so give different spectral lines. The energies are small and so are
given in electron volts – the energy gained by an electron accelerated
through a p.d. of 1 V. E = QV, so 1 eV = 1.6 × 10^-19 J.
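The eV-to-joule conversion combines with E = hf to give the photon produced by a given transition. A small sketch; the 2.0 eV transition is a hypothetical example, not a value from the notes:

```python
# Converting between eV and joules via E = QV (electron charge x p.d.).
E_CHARGE = 1.6e-19  # electron charge, C

def ev_to_joules(ev):
    return ev * E_CHARGE

# A hypothetical 2.0 eV energy-level difference as an illustration:
E = ev_to_joules(2.0)
h = 6.63e-34  # Planck's constant, J s
c = 3.0e8     # speed of light, m/s
wavelength = h * c / E  # wavelength of the emitted photon
print(wavelength)       # ~6.2e-7 m, i.e. red light
```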
The hydrogen line spectrum can be organised into groups: the Lyman,
Balmer and Paschen series. The Lyman series is UV and due to transitions
into the n=1 level. The Balmer series is visible and due to transitions into
the n=2 level; it is usually seen in the A and B spectral classes. The
Paschen series is IR and is due to transitions into the n=3 level.
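A sketch of where the first line of each series falls, assuming the standard hydrogen energy levels E_n = -13.6/n^2 eV (implied by, but not stated in, the notes):

```python
H = 6.63e-34   # Planck's constant, J s
C = 3.0e8      # speed of light, m/s
EV = 1.6e-19   # 1 eV in joules

def transition_wavelength(n_upper, n_lower):
    """Photon wavelength (m) for a hydrogen transition, using E_n = -13.6/n^2 eV."""
    delta_e = 13.6 * (1 / n_lower**2 - 1 / n_upper**2) * EV  # photon energy, J
    return H * C / delta_e

for name, n in [("Lyman", 1), ("Balmer", 2), ("Paschen", 3)]:
    lam = transition_wavelength(n + 1, n)
    print(f"{name} alpha: {lam*1e9:.0f} nm")
# roughly 122 nm (UV), 660 nm (visible red) and 1880 nm (IR),
# matching the UV / visible / IR assignments in the notes
```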
Although stars behave like black bodies, their spectra are not perfectly
continuous. There are dark lines crossing the spectra, showing some
wavelengths to be missing or greatly reduced in intensity. These are
absorption spectra, and they tell us which elements are absorbing the
light. Only photons with an energy equal to the energy difference
between two levels are absorbed by the interstellar gas and the gas in
the outer layers of the star. This raises the electrons into excited states,
so well-defined frequencies are removed from the spectrum. These
electrons then relax and radiate photons; the intensity in a given
direction is reduced because the re-radiated light is omnidirectional, so
less goes straight on. Additionally, the electrons may relax in stages,
emitting more than one photon, each of lower energy and in a lower part
of the EM spectrum.
This can be simulated in the laboratory by viewing a sodium flame
through a spectrometer fitted with a diffraction grating of small grating
spacing; this produces two very close yellow lines. If a bright light source
is then used behind the flame, there will be two dark lines in the same
place. Alternatively, white light can be passed through iodine vapour
made by heating crystals in a boiling tube. This results in equally spaced
dark bands. These are due not to energy level transitions but to
transitions between the quantised vibration states that iodine molecules
can exist in; these are again well-defined 'lumps' of energy.
The Doppler shift formula Δλ/λ = v/c is an approximation that only works
when the velocity of the source is small compared to the wave velocity.
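A minimal sketch of this low-speed approximation; the observed wavelength below is a hypothetical measurement, not data from the notes:

```python
# Low-speed Doppler approximation: fractional shift = v / c, valid for v << c.
C = 3.0e8  # speed of light, m/s

def recession_speed(lambda_observed, lambda_rest):
    """Speed of a receding source from its redshifted spectral line."""
    return C * (lambda_observed - lambda_rest) / lambda_rest

# e.g. a hydrogen line with rest wavelength 656 nm observed at 660 nm:
v = recession_speed(660e-9, 656e-9)
print(f"{v:.2e} m/s")  # ~1.8e6 m/s, small compared with c,
                       # so the approximation holds here
```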
Some stars are binary stars rotating about their common centre of mass;
one recedes while the other advances, so the light from the first is red
shifted and that from the other is blue shifted. Thus the velocities and
the rate of rotation can be determined. A rotating star or galaxy likewise
has spectral lines that are blue shifted on one side and red shifted on the
other, which allows us to work out its rotational speed. This has shown
that the orbital speeds of stars in galaxies can be much higher than
expected, meaning that there is mass unaccounted for; this is dark
matter, and there is little certainty about what it actually is.
Edwin Hubble noted from red shift observations that the further away a
galaxy is, the greater its recessional speed, consistent with the big bang
theory. He worked out that the recessional velocity is directly proportional
to the distance of a galaxy from Earth, so v = Hd, where H is Hubble's
constant. Its value has been updated over the years and is about
65 km s^-1 Mpc^-1.
The Hubble constant allows us to estimate the age of the universe:
The maximum speed of a star at the edge of the universe will be the
speed of light, so the edge of the universe is at a distance of c/H – about
4600 Mpc. This is equal to the speed of light multiplied by the time the
light has taken to get there, so ct = c/H and therefore t = 1/H:
t = 1 Mpc / 65 km s^-1 = (3.08 × 10^22 m) / (65000 m s^-1)
  = 4.7 × 10^17 s = 1.5 × 10^10 years.
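The same unit conversion worked through in code, using the notes' own figures:

```python
# Age estimate t = 1/H, with H = 65 km/s/Mpc converted to SI units.
MPC_IN_M = 3.08e22       # metres per megaparsec (as in the notes)
H = 65e3 / MPC_IN_M      # Hubble constant in s^-1

t_seconds = 1 / H
t_years = t_seconds / 3.15e7  # ~3.15e7 seconds per year
print(f"{t_seconds:.1e} s = {t_years:.1e} years")
# roughly 5e17 s, i.e. about 1.5e10 years, matching the notes' estimate
```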