
Opt 307/407

Electron Beam Methods in Microscopy

General Terms and Information


Laboratory Safety
This course is laboratory intensive and you will be spending hours in the labs. The use of
chemicals is limited and none are highly toxic or corrosive. Some of the equipment operates at
either high vacuum or high pressure, and normal laboratory safety procedures should be followed.
Appropriate PPE (personal protective equipment) is available for use when needed. Some of the
equipment used in the lab generates x-ray radiation. This radiation is adequately shielded and
interlocked from the lab environment and should not present a hazard of any kind.

If there are any questions, please refer to the University health and safety training office at:
http://www.safety.rochester.edu/training/rclabtraining.pps

Resolving Power

resolving power: A measure of the ability of a lens or optical system to form separate and
distinct images of two objects with small angular separation. Note 1: An optical system cannot
form a perfect image of a point (i.e., point source). Instead, it performs what is essentially a
Fourier transform, and the resolving power of an optical system may be expressed in terms of an
optical transform (transfer function) called the modulation transfer function (MTF). Note 2: The
resolving power of an optical system is ultimately limited by (a) the wavelength involved, and
(b) diffraction by the aperture, a larger aperture having greater resolving power than a smaller
one.

Resolution
The resolution of an optical microscope is defined as the shortest distance between two points on
a specimen that can still be distinguished by the observer or camera system as separate entities.
An example of this important concept is presented in the figure below (Figure 1), where point
sources of light from a specimen appear as Airy diffraction patterns at the microscope
intermediate image plane.

The limit of resolution of a microscope system refers to its ability to distinguish between two
closely spaced Airy disks in the diffraction pattern (noted in the figure). Three-dimensional
representations of the diffraction pattern near the intermediate image plane are known as the
point spread function, and are illustrated in the lower portion of Figure 1. The specimen image is
represented by a series of closely spaced point light sources that form Airy patterns and is
illustrated in both two and three dimensions. What has happened is that the light coming from the
point sources has been diffracted into several different orders, represented by the concentric
circles. The same thing happens when light strikes a microscopic specimen: the diffraction
orders spread out. The larger the cone of light brought into the lens, the more of these diffraction
orders it can collect, the more information it has to form the resultant image, and the higher its
resolving power will be. The size of this cone of light is what defines a lens's numerical aperture,
so the higher the numerical aperture of a lens, the better the resolution that can be obtained with
it. Resolution is a somewhat subjective value in optical microscopy because at high
magnification an image may appear unsharp yet still be resolved to the maximum ability of the
objective. Numerical aperture determines the resolving power of an objective lens, but the total
resolution of the entire microscope optical train also depends on the numerical aperture of
the substage condenser. The higher the numerical aperture of the total system, the better the
resolution.

Abbe’s “diffraction-limited” relationship adequately describes the resolution in optical systems,
including those with electron optics (as a first approximation):

d0 = 0.61 λ / (n sin α)   … (1)

where d0 = the minimum resolvable separation in the object (i.e., the resolution)
      λ = wavelength of the illumination
      α = angular aperture of the objective lens
      n = refractive index in the space between object and objective lens
      n sin α = NA (numerical aperture) of the objective lens

Substituting (for optical microscopes) 500 nm for λ and 1.35 for n sin α (100x oil immersion
objective), we get d0 ≈ 226 nm. This is typically considered the best resolution for optical
microscopes.
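
As a quick sanity check of equation (1), here is a minimal Python sketch (the function name and
values are ours, chosen only to reproduce the numbers above):

# Abbe diffraction-limited resolution: d0 = 0.61 * wavelength / (n sin alpha)
def abbe_resolution_nm(wavelength_nm, numerical_aperture):
    """Return the minimum resolvable separation d0 in nm."""
    return 0.61 * wavelength_nm / numerical_aperture

# 500 nm illumination, 100x oil immersion objective (NA = n sin alpha = 1.35)
print(abbe_resolution_nm(500.0, 1.35))  # ~226 nm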

In electron microscopes we have to consider the much shorter wavelength of accelerated electron
beams. This brings up the whole topic of the de Broglie wave nature of electrons. Simply stated
electrons (or any matter) can be thought of as having a characteristic wavelength as given by the
de Broglie relationship: λ = h/(m0 v), where

h = Planck’s constant
m0 = rest mass of the electron
v = velocity of the electron

Given non-relativistic velocities for electrons accelerated through potentials of <50 kV, this
relationship gives a wavelength for electron beams of a fraction of a nanometer. For higher
energy beams the mass of the electrons must be modified to take into account the relativistic
increase (as they approach the speed of light). As a result, a 100 kV potential will produce an
electron beam with a wavelength of 0.0037 nm. A more general equation for calculating these
wavelengths is shown below:

λ = h / √(2 m0 e V)

where h = 6.626 × 10⁻³⁴ Joule-sec
      m0 = 9.1096 × 10⁻³¹ kg
      V = acceleration voltage
      e = 1.602 × 10⁻¹⁹ Coulomb

which simplifies to: λ = 12.3 / √V, where λ is in Å (Ångströms) and V is in volts.
If only diffraction-related aberrations are considered (as in the Abbe relationship), then the
resolution of a TEM system operating at 100 kV would be about 0.025 Å, which is
unrealistically small (far better than anything achieved in practice). Other aberrations that
affect resolution will be dealt with more completely in a subsequent section on electron lens
issues.
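
To make the numbers concrete, a short Python sketch computing both the simplified 12.3/√V form
and the relativistically corrected wavelength (constants from above; the correction factor
1 + eV/(2 m0 c²) is the standard one):

import math

H  = 6.626e-34   # Planck's constant, J*s
M0 = 9.1096e-31  # electron rest mass, kg
E  = 1.602e-19   # electron charge, C
C  = 2.998e8     # speed of light, m/s

def electron_wavelength_nm(volts, relativistic=True):
    """de Broglie wavelength of an electron accelerated through `volts`, in nm."""
    denom = 2.0 * M0 * E * volts
    if relativistic:
        denom *= 1.0 + E * volts / (2.0 * M0 * C**2)  # relativistic mass correction
    return H / math.sqrt(denom) * 1e9  # metres -> nm

print(electron_wavelength_nm(100e3))    # ~0.0037 nm at 100 kV, as quoted above
print(12.3 / math.sqrt(100e3) / 10.0)   # simplified form, Angstroms -> nm: ~0.0039 nm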

Magnification vs. Resolution


Don’t confuse them. Many optical systems are capable of high magnification in the final image.
Magnification without resolution is called empty magnification. Most low-cost optical
microscopes offer only this empty magnification. Much effort and cost goes into making
aberration-corrected lens systems that can adequately separate the Airy disks in the image
formation process.
General Discussion of the SEM/TEM System
Vacuum Systems
Vacuum systems are a necessary evil given the operational parameters of all electron
microscopes. This is because the mean free path for an electron at atmospheric pressure is only
about 10cm. By removing air molecules from the electron beam path that distance can be
increased to a few meters. This means that we don’t have to worry about electron scattering in
our systems. In addition a vacuum will provide a clean and oxidation resistant environment in
which to view samples.

Quality of Vacuum

There are ranges of vacuum conditions. Anything lower in pressure than an atmosphere can be
termed a vacuum, but for our purposes these are the relevant ranges:
Low: 760 to 10⁻² Torr
Medium: 10⁻² to 10⁻⁵ Torr
High: 10⁻⁵ to 10⁻⁸ Torr
Ultrahigh: < ~10⁻⁸ Torr
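
These ranges translate directly into a simple lookup, sketched below in Python (the thresholds
are the ones above; the function name is ours):

def vacuum_range(pressure_torr):
    """Classify a pressure (in Torr) into the ranges used in this course."""
    if pressure_torr >= 1e-2:
        return "low"
    if pressure_torr >= 1e-5:
        return "medium"
    if pressure_torr >= 1e-8:
        return "high"
    return "ultrahigh"

print(vacuum_range(5e-6))  # high
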
Measuring Vacuum

There are many ways to measure vacuum levels in systems. Some of the most common are
listed here:

Manometer
Thermocouple Gauge
Pirani Gauge
Cold cathode Gauge
Penning Gauge
Ion pump current
Pumps for Achieving Vacuum Conditions

Rotary Pumps

Always in the Foreline of the system
Exhausts pumped gases to atmosphere
Pumping rate decreases as vacuum increases
Usually has a low vapor-pressure (VP) oil as a sealant to facilitate pumping

Cannot pump below ~10⁻² Torr
Noisy
Backstreams
Vibration
Maintenance

Turbo Pump Basics

Direct drive electric motor-gas turbine


Rotor/stator assembly
Moves gas molecules through the assembly by sweeping them from one to another
High rotational speed (>10,000 RPM)
Very clean final vacuum

Needs a Foreline pump


Costly
Can fail abruptly
Whine
Needs to be protected from solid material

Diffusion Pump Basics

No moving parts
Heated oil bath and condensing chamber
Jet assembly to redirect condensing gas
Recycle of oil
Pressure gradient in condensing chamber/Foreline pump removes from high pressure side

Heat up/cool down time


Needs foreline pump
Can make a mess in vacuum failures/overheating
Needs cooling water (usually)

Ion Pump Basics

High voltage creates electron flux


Ionizes gas molecules
Ions swept to titanium pole by magnetic field
Titanium erodes (sputters) as ions become embedded
Getters collect Ti atoms and more gas ions
Current flow indicates gas pressure (vacuum)

Cannot work until pressure is <10⁻⁵ Torr
Low capacity “storage-type” pump
Needs periodic bake-out
Hard to startup (sometimes)

Summary

All electron microscopes require a vacuum system


Usually consists of rotary-(turbo, diff)-(ion) pumps
System should provide clean oil-free vacuum
At least 10⁻⁵ Torr or so

Vacuum problems are some of the most challenging to find and fix, and may even be caused by
samples outgassing
Electron Sources
In order to function the electron microscope must of course have a source of electrons, which is a
portion of its illumination system. These illumination electrons are produced by the electron
gun. The electron gun consists of three parts, the filament, the shield, and the anode.
Some of the alternative names for the filament include cathode or emitter.

In order to form an image in an electron microscope one must first create a nearly
coherent source of electrons to form the primary beam. Although features such as lenses,
apertures, and stigmators are important in controlling the geometry of the primary beam
all of these are dependent on the size, strength, and shape of the initial electron source.
Basically there are two major categories of electron emitters used in SEMs. The first of
these represents a class of electron sources that emit electrons as they are heated. These
thermionic emitters operate on the principle that as certain materials are heated, the
electrons in the outer orbitals become unstable and are more likely to fly free of their
atoms. The lost electrons are replaced by a current source that is also attached to the
emitter. The ability to give up electrons is related to a material’s “work function.” It can
be expressed by the equation E = Ew + Ef, where E is the total energy needed to remove an
electron to infinity from the lowest free energy state, Ef is the highest free energy state
of an electron in the material, and Ew is the work function, i.e., the work required to make
up the difference.

Materials with a small work function are better thermionic emitters than those with a
large work function, but there is a trade-off. Although tungsten has a relatively high work
function, it also has the highest melting point of all metals, so a large number of electrons
can be obtained below its melting point, giving tungsten filaments a long working life. A
filament is said to be “saturated” when further heating does not result in an increase in
the number of electrons emitted.
[False peak caused by region of filament that reaches emission temp before tip]

Thus a thermionic emitter consists of three major components: an electron source to
replenish emitted electrons, a heating element, and the emitter material itself. In most
SEMs thermionic emitters are either tungsten filaments or crystals of lanthanum
hexaboride (LaB6).

The most common of these types of electron sources is the tungsten filament. This
consists of a bent piece of fine tungsten wire that is similar to the filament present in an
incandescent lamp. The filament is connected to a source of current and electrons are
passed through the wire. As this happens the filament heats up and electrons begin to be
emitted from the tungsten atoms. In this type of gun the outside source of electrons and
the heating source are one and the same.

Electrons are preferentially emitted from the bent tip of the filament and produce a point
source of electrons in a fairly small area. Because the filament is bent in a single plane
the geometry of this region is not perfectly circular. The size of this area can be further
reduced by increasing the angle of the filament bend or by attaching a finely pointed
crystal of pure tungsten to the tip of the filament.

Despite these modifications the size of this electron source is still relatively large. The
cloud of primary electrons is condensed by the shield (Wehnelt Cylinder, Wehnelt cap,
Gun cap) that surrounds the filament. This is achieved by connecting both the filament
and the cap to the high voltage supply. The aperture of the shield forms an opening that
surrounds the cloud of electrons produced from the filament with a net negative charge.
Since this repels the electrons the shield acts as a converging electrostatic lens and serves
to condense the cloud of electrons. Since this is a prefocused lens (i.e. we cannot easily
change its shape or strength) the distance between the tip of the filament and the shield
aperture is critical.

If one places a resistor between the filament and the shield, one can produce a slightly
greater negative potential (about 200 V) on the Wehnelt cap than on the filament. This
difference in negative electrical potential is known as the “bias,” and every EM
manufactured in the past twenty years or so comes with a biased emitter. The main
advantage of a biased emitter is that the effects of the anode are felt most strongly in a
region just slightly in front of the filament, creating a depleted region or “well” in which
electrons emitted from the filament accumulate. This region acts as a condensed zone from
which electrons are drawn for imaging, so it is not as important that all the electrons
come from a small spot on the filament itself. In this way we can keep the filament
current a little lower, the filament a little cooler, and make it last a lot longer.
It is one of the goals of any operator to maximize beam current (# of electrons that go
into making up the illumination beam) while minimizing the filament current
(# electrons needed to heat the filament to the point of emission). The biased type of gun
assembly allows us to do this.

Together the filament and cap act as the cathode of a capacitor. It is important that the
filament be properly centered in relation to the opening of the Wehnelt cap and be the
proper distance from the opening. Otherwise an off center beam that is either
weak/condensed or bright/diffuse will be produced. The electrons emitted from the
filament are drawn away from the cathode by the anode plate which is a large circular
plate with a central opening or aperture. The voltage potential between the cathode and
the anode plate accelerates the electrons down the column; it is known as the
“accelerating voltage” and is given in terms of kV. Together the Wehnelt cylinder and
anode plate serve to condense and roughly focus the beam of primary electrons.

Despite this the area occupied by the primary beam is still quite large. This manifests
itself in a loss of primary electrons to such things as apertures and other column
components that drastically reduces the number of electrons eventually reaching the
sample. This then leads to a reduced S/N ratio. To improve on this, a LaB6 gun is often
used.

LaB6 Emitters
A LaB6 gun consists of a finely pointed crystal of lanthanum hexaboride. When heated by
surrounding ceramic heaters, electrons are emitted from the tip of this crystal. The lost
electrons are replaced by an electron source. Because the region of electron emission is
smaller and more sharply defined than with a standard tungsten filament, electrons produced
from a LaB6 gun are more likely to actually make it all the way to the sample and thus
help to increase the S/N ratio. Despite these gains there is yet an even smaller, brighter
source of electrons available in some SEMs.
Field Emission Emitters

A field emission gun operates on a different principle than does a thermionic emitter. Like
a LaB6 or pointed tungsten filament, the field emission gun uses a finely tipped, oriented
tungsten crystal. The difference, however, is that the electron source is not heated to
remove electrons; for this reason these guns are often referred to as “cold” sources. Instead,
electrons are drawn from the filament tip by an intense potential field set up by an anode
that lies beneath the tip of the filament. Electrons are then pulled from a very small area
of the pointed tip and proceed down the column. Often this is aided by a second anode
that lies beneath the first. Acting like an electrostatic lens, the two anodes serve to further
coalesce and demagnify the beam. The lost electrons are replenished by an electron
source attached to the tungsten tip. A primary electron beam generated by a field
emission source offers significant advantages over those produced by thermionic
emitters. Because of the smaller initial spot size (<2.0 nm vs. 4.0-8.0 nm) and lower
accelerating voltage (2-5 kV vs. 15-20 kV), a much smaller primary excitation zone is
produced. Ultimately this results in much greater resolution than is possible with a
conventional SEM using a tungsten filament or LaB6 crystal.

All of these different electron sources offer advantages and disadvantages. Factors such
as cost, filament life, vacuum requirements, operating life, etc. all play a part in deciding
which type of source to use.
Electron Lenses

Electron lenses are the magnetic equivalent of the glass lenses in an optical microscope and to a
large extent we can draw comparisons between the two. For example the behavior of all the
lenses in a TEM can be approximated to the action of a convex (converging) glass lens on
monochromatic light. The lens is basically used to do one of two things:
 either take all the rays emanating from a point in an object and recreate a point in an
image,
 or focus parallel rays to a point in the focal plane of the lens.
A strong magnetic field is generated in an electromagnetic lens by passing a current through a set
of windings. This field acts as a convex lens bringing off axis rays back to a focal point.

[Diagram: forces on electrons moving through a magnetic lens]

[Diagram: three types of electromagnetic lenses: simple winding, soft iron encased, soft
iron pole pieces]
The resultant image is rotated, to a degree that depends on the strength of the lens, and the focal
length of the lens can be altered by changing the strength of the current passing through the
windings.

Most contemporary SEMs and TEMs use a multiple condenser system to project the beam onto
the samples. C1, the first condenser lens, is shown highlighted in the following diagram. Its
function is to:
 Create a demagnified image of the gun crossover.
 Control the minimum spot size obtainable in the rest of the condenser system.

All lenses in a typical SEM are converging lenses, and the final lens, labeled C2 in the diagram
(sometimes mistakenly called the objective lens), is used at its crossover, or focal, point. The
distance between the exit of the final condenser lens and the focused beam is called the working
distance. If your sample is placed at this point it will appear to be in focus. Working distance is
useful in determining where your sample surfaces are spatially, as well as leading you toward a
better imaging condition.

Lens Aberrations

Due to inherent defects and factors concerning lenses and electron optical systems, there
are a variety of abnormalities, or aberrations, that must be corrected in an electron
microscope. [Note: the word aberration is spelled with one b and two r’s, it is not
Abberation after Ernst Abbe] If it were possible to completely correct for all of the lens
aberrations in an EM our actual resolution would very nearly approach the maximum
theoretical resolution. In other words if all lens aberrations could be eliminated our
numerical aperture number would equal 1.0 and Abbe’s equation for calculating
resolution would be about the wavelength/2. Whereas we have been able to approach this
in light optics, the nature of electromagnetic lenses makes this goal much more difficult
to obtain.

Spherical Aberration

The major reason that lens aberrations are so difficult to correct for in electromagnetic
systems is that all electromagnetic lenses are like bi-convex converging lenses. Coils of
wire surrounding a soft iron core create an electromagnetic field which affects the
electron beam. This field influences the electron beam in the same way that a converging
glass lens affects incoming light. Different rays of light can be brought to focus or
“converge” on a single focal point which lies a given distance from the lens. This is what
enables one to start fires with a hand-held magnifying glass.

One problem that arises in doing this is that beams entering the lens near the perimeter of
the lens are brought to focus at a slightly different spot than are those which enter the lens
near its center. The problem becomes more pronounced the further the entering beam is
from the optical axis of the lens. This differential focusing of various beams is known as
spherical aberration and can easily be seen in less expensive light microscopes in which
the perimeter of the image is noticeably out of focus while the center region is not.

Perhaps the easiest way to minimize the effects of spherical aberration is to effectively
cut off the outer edges of the lens in which most of the problems arise. Since this is
impossible to do with an electromagnetic field, we do the next best thing and place a
small aperture either in the center of the magnetic field or immediately below it. This
serves to block out those beams that are most affected by the properties of the converging
lens.

Chromatic Aberration

A second type of lens problem that affects electromagnetic lenses is chromatic aberration.
The word chromatic comes from the Greek “chroma,” meaning color. Different colors of
visible light have different energies or wavelengths. When two beams of light of different
energies enter a converging lens in the same place from the same angle, they are deflected
differently from one another. In light optics the illumination with the higher energy (i.e.,
shortest wavelength) is deflected (refracted) more strongly than wavelengths of lower energy
(longer wavelength). Thus a blue beam would be brought to focus at a shorter focal distance
than would a red beam.

In the electron microscope the exact opposite is true: electrons with higher energy are
deflected less strongly than those of lower energy. This difference between light and
electron optics is due to the fact that electrons are not refracted but rather are acted upon
by force vectors.

The net effect of chromatic aberration is the same however in that wavelengths of
different energies are brought to focus at different focal points. This difference in focal
points of the beams serves to distort the final image.

Despite the fact that all of the images in an EM are “black and white” we are still faced
with the problem of chromatic aberration. If the electrons in the electron beam are
travelling at different velocities from one another they will each have their own particular
energy or wavelength. These differences in wavelength have the same effect on electrons
entering an electromagnetic lens as they would on light beams entering a glass lens. To
correct for chromatic aberration in a light microscope, one commonly used technique is to
place a blue filter over the illumination source. This serves two purposes. First, it ensures
that all of the light entering the optical column is of essentially the same energy. Second,
because blue light has the shortest wavelength of the visible spectrum, it helps to
improve resolution. The way the problem is solved in the EM is to have an
extremely stable accelerating voltage creating the electron beam. By keeping the lens
currents and the accelerating voltage stable, we help to ensure that all the electrons
are moving at the same speed, i.e., have the same wavelength. This greatly reduces the
effects of chromatic aberration. A second thing that helps is, once again, the aperture.
Since the effects of chromatic aberration are most pronounced near the perimeter of a
converging lens, the aperture serves to stop those electrons that are farthest from the
optical axis.
Diffraction

In its simplest terms, diffraction is the bending or spreading of a beam into a region
through which other beams have passed. Each particular beam sets up its own waves.
When light goes through a tiny aperture we not only get a bright spot but a series
of concentric rings, and if the rings of two Airy spots overlap enough, the two spots will
appear unresolved. The way to reduce the effects of diffraction is to have as great an
angle as possible between the optical axis and the perimeter of the lens. This would mean
having no apertures at all. Thus we have a quandary; the smaller the aperture the less
chromatic and spherical aberration we have, but it also means that we will have much less
illumination and greater diffraction problems. The size of the final aperture must be
carefully chosen to minimize but not eliminate the various aberrations while still getting
acceptable image formation.
Astigmatism

The fourth optical defect that we need to correct for in an EM is called astigmatism. Astigmatism
refers to the geometry or shape of the beam. Basically the beam is spread unevenly in one axis or
the other producing an elliptical shape rather than a circular one. This is caused by imperfections
in the focusing fields produced by the electromagnetic lenses, apertures, and other column
components. As precisely machined as these parts are, any imperfection in machining can cause
astigmatism. To correct for astigmatism a set of magnets is placed around the circumference of
the column. These magnets are then adjusted in strength and position in an effort to
induce an equal and opposite effect on the beam. By “pulling” at the center of the ellipse at an
angle perpendicular to the long axis, the beam is once again made circular. Today astigmatism is
corrected with a set of tiny electromagnets in matching pairs whose strength is electronically
controlled. In earlier microscopes a set of eight tiny fixed magnets were screwed in and out to
vary their strength and rotated around the column to alter their position relative to the beam.

[Figure: octupole electron beam stigmator. Individually energized coils around the bore of the
microscope column convert the astigmatic beam into a corrected (compensated) beam.]
Sources of distortion and image degradation in the SEM

Although we would like to think of the “spot size” as a single and discrete size, in reality the
final spot size of the SEM follows a Gaussian distribution. The reason for this is that certain
aberrations of the electromagnetic lens systems manifest themselves in such a way that all of the
electrons entering the lens are not brought to focus at a single infinitely small spot. By summing the
variances caused by these aberrations one can approximate the final spot size. To calculate the
theoretical diameter of an electron probe carrying a given current, we take the square root of the
sum of the squares of the separate diameters caused by various aberrations.

dprobe = (d²chromatic + d²spherical + d²diffraction + d²astigmatism)^(1/2)

Because it is the most critical portion of the lens system, these aberrations are usually
calculated only for the final lens, and in practice the absolute values of these aberrations are of
limited value. Image adjustments are made “on the fly” so as to minimize each of these effects.
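
The quadrature sum is easy to express in code; a minimal Python sketch (the example diameters
are made-up illustrative numbers, not measured values):

import math

def probe_diameter(d_chromatic, d_spherical, d_diffraction, d_astigmatism=0.0):
    """Add the separate aberration disk diameters in quadrature (same units)."""
    return math.sqrt(d_chromatic**2 + d_spherical**2
                     + d_diffraction**2 + d_astigmatism**2)

print(probe_diameter(2.0, 3.0, 1.5))  # nm -> ~3.9 nm
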
The SEM System

Spot Size
Ultimately resolution in the SEM is dependent upon the spot size of the primary beam. The
smaller the spot size the better will be the image resolution assuming that all other factors
remain unchanged. Spot size is influenced by the current strength of all the lenses (not just
the final lens) and the apertures used. It is further influenced by the geometry of the final
lens field. Despite precision machining and lens construction each lens will be slightly
elliptical rather than perfectly circular. The effect of this asymmetry is that the electrons
diverging from a single point will focus to two separate line foci, at right angles to each
other rather than to a point as the lens current is varied. The inability to focus to a point in
different focal planes is known as astigmatism. The geometry of the final spot size will
match that of the lens field and be slightly elliptical rather than perfectly circular. The net
effect of this is to increase the spot size and reduce resolution. To correct for inherent lens
astigmatism we apply a small magnetic field to the final lens. This field should be of equal
strength to, and in the opposite direction of, the inherent astigmatism of a given lens. To achieve
this, a set of electromagnets is placed around the final lens. The current applied to each of
these electromagnets is controlled by the stigmator. The stigmator usually has two controls,
one of which adjusts the relative counteractive strength or magnitude of the field and the
other which controls the direction of the lens asymmetry. Together they are used to balance
out and nearly negate the effects of the inherent lens asymmetry. This results in a more
circular spot size and a maximization of resolution.

Signal to Noise Ratio


The second major factor that affects resolution in the SEM is the image signal-to-noise ratio that
exists. This ratio is often represented as S/N and the operator seeks to maximize this value for each
micrograph. The electronic noise introduced to the final image is influenced by such factors as
primary beam brightness, condenser lens strength, and detector gain. As the resolution of a picture
is increased its brightness decreases and the operator must balance all the competing factors to
maximize the S/N ratio by increasing the total number of electrons recorded per picture point.
Although this can be done by varying lens strength, aperture size, stigmator strength, working
distance, and detector gain, all of these factors are dependent on the initial electron source.

Bits and Pieces of the SEM


The optical paths of the illumination beam in light microscopes and TEMs are nearly
identical. Both types of microscopes use a condenser lens system to initially converge the
beam onto the sample. Next the beam penetrates the sample and the image is magnified by
the objective lens. Finally a projector lens projects the image onto the viewing plane (either
a photographic plate, fluorescent screen, or human eye). In its formation of an illumination
source and in the condensing of the beam the SEM is nearly identical to a TEM. However,
aside from this similarity the SEM and TEM differ significantly. Rather than encounter a
specimen, the beam of the SEM is next influenced by a set of deflection or “scan” coils that
move the beam in a precise raster or scan pattern. The beam is then further condensed to a
fine spot by a final lens (not an objective lens) and only then encounters the specimen.
Rather than penetrate the specimen the beam either bounces off and/or produces signals that
are then interpreted by a specialized signal detector. In this way the SEM is more similar to
a dissecting microscope than it is to a compound microscope. Like a dissecting microscope
a SEM only provides information about the surface of the specimen and not the internal
contents.

The support systems of the SEM include the vacuum system, compressed gas system, cooling
system, and electrical system. Each of these is critical to proper operation of the SEM. The guts,
however, are contained within the electron column and its associated electronic controls.

The column of an SEM contains the following components:

Filament (Cathode) - Produces free electrons by thermionic or field emission from tungsten or
other material.

Wehnelt Cylinder - Used to concentrate the electron beam.

Anode Plate - Produces a high voltage differential between it and the cathode. Used to
accelerate the free electrons down the column.

Condenser Lens - Reduces the diameter of the electron beam to produce a reduced spot
size.

Scan Coils - Electromagnetically shift the electron beam to produce a scan pattern
on the sample.

Final Lens - Focuses as small a spot as possible on the surface of the specimen. The
smallest spot is about 1 nanometer.

[In addition to these major components there are usually also fixed and variable apertures which
help in refining the electron beam image.]

Detectors - Also within the scope chamber but not part of the column are the detectors.
Often these operate at high voltages too.

Scan Coils - The scan coils lie within the column and are responsible for moving the
electron beam over the surface of the specimen. The scan coils are essentially
electromagnetic coils and are arranged in two pairs around the beam. One pair of coils is
responsible for controlling movement of the beam in the X direction while the other pair
moves it in the Y direction. To do this the scan coils are controlled by electronic
components within the console. Chief among these is the scan generator. The scan
generator is connected to other components: the magnification module and the cathode
ray tube (CRT). Likewise, the magnification module is connected to the scan coils.

Sample Chamber - This is where the ultimate target for the accelerated electron beam
goes. Your samples are placed within the vacuum system and surrounded by detectors
for modulating the image brightness and contrast.

To understand how these function together in a SEM we need some background
information. A television is essentially the same thing as a CRT. Like a SEM, a TV produces
its image by building it up point by point. Next time you have the chance, look carefully at
your TV screen with a magnifying lens. The image is produced by a series of tiny dots that
alternately light up or turn off. These dots are energized by an electron gun at the back of
your TV. Most color TVs have three such guns, each responsible for activating the red,
green, and blue elements of your TV screen. The dots are arranged in parallel rows and each
can be assigned a particular X-Y coordinate. For example, if we had a 1000 line screen with
each line having 1000 dots or steps, we could designate each spot precisely:

(0,0)  (500,0)  (0,500)  (500,500)  (1000,1000)

The electron gun sweeps across the screen at great speed pausing long enough on each dot
to either activate it or not. When it reaches the bottom of the screen it returns to the first
point and begins again. This movement is referred to as the raster or synchronous scan
pattern. The scan generator acts by establishing this raster pattern and coordinating it
between the scan coils and the viewing CRT. In this way the pattern over the sample is in
exact synchrony with that observed on the CRT.

Information is sent to the CRT through one of the detectors. For this example we will speak of a
secondary electron detector. When the electron beam strikes the sample it generates secondary
electrons. The detector responds to the number of electrons being given off by the sample and this is
translated to the CRT as intensity amplitude. A strong signal will be enough to illuminate several
dots on the screen; a weak signal will mean that no dots will be illuminated by the CRT electron
gun. The detector therefore modulates the intensity of the signal, and the raster pattern gives the
location of the signal. In this way the image on the CRT is built up point by point to match what is
happening on the surface of the sample.
This way of forming an image is the essential difference between transmission and scanning
types of microscopes. SEM image formation differs in several important ways. First, focus
is dependent upon the size and shape of the electron beam spot. The smaller the spot on the
sample, the better the focus. Secondly, magnification is not produced by a magnification or
enlarging lens but rather by taking advantage of the difference in the size of the scan pattern
on the sample and the size of the CRT. The latter is fixed. The size of the scan pattern on
the sample is variable and is determined by the magnification electronics. By decreasing the
size of the area which is scanned and conveying that to the CRT we increase the
magnification of the image proportionally. The smaller the area scanned, the less distance
between raster points, and the smaller the amount of current needed to shift the beam from
point to point. The greater the area scanned, the lower the magnification, the greater the
distance between raster points, and the greater the amount of current needed to shift the
beam from point to point. In this way, when we operate the SEM at relatively low
magnifications, we actually push the scan coils to their extremes (which may in fact cause
overheating of the scan generator components…never leave a SEM scanning at low
magnifications for an extended period of time). Thus SEM magnification is really just a
ratio of the CRT size to the raster size:

Mag = CRT size/Raster size (on sample).
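
Expressed numerically, a tiny Python sketch of this ratio (the display and raster sizes below are
made-up examples):

def sem_magnification(crt_width_mm, raster_width_um):
    """Mag = CRT size / raster size on the sample (converted to the same units)."""
    return (crt_width_mm * 1000.0) / raster_width_um

# a 100 mm wide display and a 10 um wide scanned field give 10,000x
print(sem_magnification(100.0, 10.0))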

Depth of Field
The SEM cannot match the TEM in terms of resolution. The SEM excels, however, in depth
of field. Depth of field is defined as the extent of a zone on the specimen which appears in
focus at the same time. This is often confused with depth of focus which is defined as the
depth of field in image space, not specimen space. Because of its particular design these
terms are fairly interchangeable for the SEM, but for clarity’s sake, we will use the term
depth of field in referring to the workings of the SEM.

The primary factor influencing the depth of field in an SEM is the angle of the beam
divergence from the final lens. The smaller this angle, the greater will be the depth of field.
As the beam spreads above and below the plane of optimum focus the diameter of the
illumination spot is increased. Depending on the distance between raster points and the size
of the emission area, this increase in size of the primary beam spot may or may not have an
effect on what appears to be in focus. The greater the distance from the plane of optimal
focus at which structures on the specimen still appear in focus, the greater the depth of field.

Since depth of field is ultimately dependent on the geometry of the primary beam it can be
controlled by altering either working distance or the diameter of the final aperture. The working
distance is defined as the distance between the final lens pole piece and the uppermost portion of an
in focus specimen. By increasing the working distance the strength of the final lens must be
decreased in order to bring the plane of optimal focus in line with the top of the specimen. In doing
so the angle of the incident beam is decreased from what is present at a smaller working distance.
This is analogous to the concept of numerical aperture for light optics.
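
A rough geometric sketch of these dependences in Python. The small-angle relations below
(half-angle α ≈ aperture radius / working distance; depth of field ≈ d/α for features of size d)
are common textbook approximations, not formulas from this handout, and the numbers are
illustrative:

def beam_half_angle(aperture_radius_um, working_distance_mm):
    """Beam divergence half-angle alpha ~ r / WD (radians, small-angle)."""
    return aperture_radius_um * 1e-6 / (working_distance_mm * 1e-3)

def depth_of_field_um(feature_size_um, alpha_rad):
    """Zone over which the geometric beam spread stays below the feature size."""
    return feature_size_um / alpha_rad

alpha = beam_half_angle(50.0, 10.0)      # 50 um aperture radius, 10 mm WD
print(alpha)                             # 0.005 rad
print(depth_of_field_um(0.1, alpha))     # ~20 um of apparent focus

Note that either lengthening the working distance or shrinking the aperture reduces α and so
deepens the field, exactly as described above.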

The angle of the primary beam can also be reduced by using a smaller final lens aperture. This can
be used when the working distance cannot be increased further. The major drawback to using ever
smaller final lens apertures is that you reduce the intensity of the illumination proportionately. By
reducing the amount of illumination one ultimately reduces the amount of signal too.

Alternatively, the opposite is true if we wish to maximize the resolution of our image. In
order to best compensate for various lens aberrations a short working distance, strong final
lens strength, and large primary beam angle (thus large aperture opening) results in the
smallest possible spot size and therefore the highest resolution in the plane of optimal focus.
Of course this means that the size of the beam changes radically even a short distance above
or below the focus plane and depth of field is drastically reduced. We are thus forced to
make a decision between resolution and depth of field since the parameters that improve one
tend to reduce the other. The operator of the SEM must therefore carefully balance and
adjust all the variables (working distance, final lens aperture size, accelerating voltage,
specimen coating, etc.) in an attempt to maximize resolution while retaining sufficient depth
of field. Prior knowledge of what the investigator is hoping to learn from a given sample
and setting up the SEM accordingly can save hours of time and frustrating groping about
for acceptable results.
Electron Beam-Sample Interactions
Ultimately image formation in an SEM is dependent on the acquisition of signals produced
from the interaction of the specimen and the electron beam. These interactions can be
broken down into two major categories: 1) those that result from elastic collisions of the
electron beam with the sample (where the instantaneous energy Ei equals the initial energy E0)
and 2) those that result from inelastic collisions [where Ei < E0]. In addition to those signals that are
utilized to form an image, a number of other signals are also produced when an electron
beam strikes a sample. We will discuss a number of these different types of beam-specimen
interactions and how they are utilized. But first we need to examine what actually happens
when a specimen is exposed to the beam.

To begin with we refer to the illumination beam as the “primary electron beam”. The
electrons that comprise this beam are thus referred to as primary electrons. Upon contacting
the specimen surface a number of changes are induced by the interaction of the primary
electrons with the molecules in the sample. The beam is not immediately reflected off in the
way that light photons might be in a dissecting (light) microscope. Rather the energized
electrons penetrate into the sample for some distance before they encounter an atomic
particle with which they collide. In doing so the primary electron beam produces what is
known as a region of primary excitation. Because of its shape this region is also known as
the “tear-drop” zone. A variety of signals are produced from this zone, and it is the size and
shape of this zone that ultimately determines the maximum resolution of a given SEM
working with a particular specimen.

The various types of signals produced from the interaction of the primary beam with the
specimen include secondary electrons, backscattered electrons, Auger electrons, characteristic
X-rays, and cathodoluminescence photons. We will discuss each of these in turn.

Secondary Electrons:

The most widely utilized signal produced by the interaction of the primary electron beam
with the sample is the secondary electron emission signal. A secondary electron is produced
when an electron from the primary beam collides with an electron from a specimen atom
and loses energy. This will serve to excite (or perhaps ionize) the atom and in order to re-
establish the proper charge ratio following this event an electron may be emitted. Such
electrons are referred to as “secondary” electrons. Secondary electrons are distinguished
from other electrons by having an energy of less than ~50 eV.

This is by far the most common type of signal used in modern SEMs. It is most useful for
examining surface structure and gives the best resolution image of any of the scanning
signals. Depending on the initial size of the primary beam and various other conditions
(composition of sample, accelerating voltage, position of specimen relative to the detector) a
secondary electron signal can resolve surface structures on the order of 1.0 nm or so. This
topographical image is dependent on how many of the secondary electrons actually reach
the detector. Although an equivalent number of secondary electrons might be produced as a
result of collisions between the primary electron beam and the specimen, secondary
electrons that are prevented from reaching the detector will not contribute to the final image
and these areas will appear as shadows or darker in contrast than those regions that have a
clear electron path to the detector. This is the primary contrast mechanism in normal
secondary electron imaging.

One of the major reasons for sputter coating a non-conductive specimen is to increase the
number of secondary electrons that are emitted from the sample. This will be discussed
later.

Secondary Electron Detector:


In order to detect the secondary electrons that are emitted from the specimen a specialized
detector is required. This is accomplished by a complex device that first converts the energy
of the secondary electrons into photons. It is referred to as a scintillator-photomultiplier
detector or “Everhart-Thornley” detector (after its developers). The principal component
that achieves this is the scintillator. The scintillator is composed of a thin plastic disk that is
coated or doped with a special phosphor layer that is highly efficient at converting the
energy contained in the electrons into photons. When this happens the photons that are
produced travel down a Plexiglas or polished quartz light pipe and out through the specimen
chamber wall. The outer layer of the scintillator is coated with a thin layer [10-50 nm] of
aluminum. This aluminum layer is positively biased at approximately 10 kV and helps to
accelerate the secondary electrons towards the scintillator. The aluminum layer also acts as a
mirror to reflect the photons produced in the phosphor layer down the light pipe. The
photons that then travel down the light pipe are amplified into an electronic signal by way of
a photocathode and photomultiplier. The signal thus produced can then be used to control
the intensity (brightness) on the CRT screen in proportion to the number of photons
originally produced.
A photomultiplier tube or PMT consists of a cathode which converts the quantum energy
contained within the photon into an electron by a process known as electron-hole
replacement. This generated electron then travels down the PMT towards the anode striking
the walls of the tube as it goes. The tube is coated with some material (usually an oxide) that
has a very low work function and thus generates more free electrons. This results in a

cascade of electrons and eventually this amplified signal strikes the anode. The anode then
sends this amplified electrical signal to further electrical amplifiers. The number of cascade
electrons produced in the PMT is dependent on the voltage applied across the cathode and
anode of the PMT. Thus it is in the PMT that the light produced by the scintillator is
converted into an electrical signal and amplified, producing gain. We can turn up the gain by
increasing the voltage to the PMT, which is essentially what we do when we adjust the
contrast. A baseline amplifier increases the overall electrical signal from the PMT by a
constant amount, thus increasing the brightness.
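
Because each dynode stage multiplies the electron count, the overall gain compounds
exponentially with stage count. A minimal sketch (the per-dynode gain and stage count are
typical illustrative values, not the specs of any particular PMT):

def pmt_gain(per_dynode_gain, n_dynodes):
    """Total PMT gain: each dynode stage multiplies the electron count."""
    return per_dynode_gain ** n_dynodes

# e.g. 4 secondary electrons per strike over 10 dynodes -> ~1e6 overall gain
print(pmt_gain(4, 10))  # 1048576
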
Because secondary electrons are emitted from the specimen in an omnidirectional manner
and possess relatively low energies, they must in some way be collected before they can be
counted by the secondary electron detector. For this reason the secondary electron detector
is surrounded by a positively charged anode or Faraday cup or cage that has a potential
charge on it in the neighborhood of 200 V. This tends to attract many of the secondary
electrons towards the scintillator. It also helps to alleviate some of the negative effects of the
scintillator aluminum layer bias which, because it is so much greater (10 kV vs. 200 V), can
actually distort the incident beam. A second type of electron, the backscattered electron
[which we will discuss later], is also produced when the specimen is irradiated with the
primary electron beam. Together backscattered and secondary electrons contribute to the
signal that reaches the scintillator and form what we refer to as the secondary electron
image.

A rather new use of secondary electrons is employed in “environmental SEMs.” Unlike a
conventional SEM, the environmental SEM is designed to image specimens that are not
under high vacuum. In fact, for an environmental SEM to function properly there must be air
or some other gas molecules present in the specimen chamber. The way an environmental
SEM works is by first generating and manipulating a primary beam in much the same way
as in a conventional SEM. The primary beam then enters the specimen chamber through a
pressure limiting aperture (PLA) that is situated beneath the final lens pole piece. This PLA
allows the chamber to be kept at one pressure (e.g. 0.1 atm) while the rest of the column is
at a much higher vacuum (e.g. 10⁻⁶ Torr). The primary beam strikes the specimen and
produces secondary and backscattered electrons in the same manner as does a conventional
SEM. The difference is that these secondary electrons then strike gas molecules in the
specimen chamber, which in turn produce their own secondary electrons or “environmental
electrons”. This results in a cascading or propagation effect and greatly increases the
amount of signal. It is all of these electrons that are then used as signal by the detector,
which is positioned near the final aperture. Because of this unique design, wet or even
uncoated living specimens can be imaged in an “ESEM”.

Backscattered Electrons:

A backscattered electron is defined as a primary electron that has undergone single or
multiple scattering events and escapes the sample with a large fraction of its energy.
Backscattered electrons are produced as the result of elastic collisions with the atoms of the
sample and usually retain about 80% or more of their original energy. The number of
backscattered electrons produced (the backscatter yield) increases with the atomic number
of the specimen. For this reason a sample that is composed of two or more elements which
differ significantly in their atomic number will produce an image that shows differential
contrast of the elements despite a uniform surface topography. Elements of a higher atomic
number will produce more backscattered electrons and will therefore appear brighter than
neighboring elements.
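
The yield-versus-Z trend can be made quantitative with an empirical fit. The polynomial below
is Reuter’s commonly quoted fit for normal-incidence beams; it is not from this handout, so
treat it as an approximation:

def backscatter_yield(z):
    """Reuter's empirical fit for the backscatter coefficient eta vs atomic number Z."""
    return -0.0254 + 0.016*z - 1.86e-4*z**2 + 8.3e-7*z**3

print(backscatter_yield(6))   # carbon: ~0.06
print(backscatter_yield(79))  # gold:   ~0.49, so gold appears much brighter
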
The region of the specimen from which backscattered electrons are produced is considerably
larger than it is for secondary electrons. For this reason the resolution of a backscattered
electron image is less (~1.0 um or so) than it is for a secondary electron image (1.0 nm).
Because of their greater energy, backscattered electrons can escape from much deeper
regions of the sample than can secondary electrons hence the larger region of excitation. By
colliding with surrounding atoms of the specimen, some backscattered electrons can also
produce x-rays, Auger electrons, cathodoluminescence photons, and even additional
secondary electrons.

One type of detector for backscattered electrons is similar to that used in the detection of secondary
electrons. Both utilize a scintillator and photomultiplier design. The backscatter detector differs in
that a biased Faraday cage is not employed to attract the electrons. Only those electrons that travel
in a straight path from the specimen to the detector go towards forming the backscattered image. In
some cases simply turning off the secondary electron detector’s Faraday Cage potential will make a
“poor man’s” backscattered electron detector. More commonly, dedicated backscattered electron
(BSE) detectors are installed in SEM chambers.

Another type commonly found is a solid state silicon detector. The operation of a Si diode-type
BSE detector is based on the fact that an energetic electron hitting a semiconductor will tend to
lose its energy by producing electron-hole pairs. The number of electron-hole pairs is
dependent on the backscattered electron energy, so higher energy electrons will tend to contribute
more to the signal. The simplest diode detector is a p-n junction with electrodes on the front and
back of the device. The holes will tend to migrate to one electrode, while the electrons will migrate
to the other, thus producing a current, the total of which is dependent on the electron flux and the
electron energy. Response time of the detector can be improved by putting a potential across the
diode, at the expense of increased noise in the signal. Diode detectors are frequently mounted on
the microscope polepiece (as the greatest BSE yield is typically straight up for a horizontal
surface), and it is common to break them into a number of sections which may be individually
added or subtracted to form the output signal.
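
To see why higher-energy electrons contribute more signal, note that creating one electron-hole
pair in silicon costs roughly 3.6 eV (a standard figure for Si, assumed here rather than taken
from this handout):

def eh_pairs(electron_energy_ev, pair_energy_ev=3.6):
    """Electron-hole pairs created by one electron absorbed in silicon."""
    return electron_energy_ev / pair_energy_ev

print(eh_pairs(20e3))  # a 20 keV BSE -> ~5,500 pairs
print(eh_pairs(10e3))  # a 10 keV BSE -> ~2,800 pairs, i.e. less signal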

By using these detectors in pairs or individually, backscattered electrons can be used to
produce a topographical image that differs from that produced by secondary electrons.
Another type of backscatter detector uses a large angle scintillator or “Robinson” detector
that sits above the specimen. Shaped much like a doughnut, the detector lets the beam enter
through the center hole while backscattered electrons are detected around its periphery.
Because some backscattered electrons are blocked by regions of the specimen that
secondary electrons might be drawn around, this type of imaging is especially useful in
examining relatively flat samples.

Characteristic X-rays:

Another class of signals produced by the interaction of the primary electron beam with the
specimen comes under the category of characteristic X-rays. When an electron from an
inner atomic shell is displaced by colliding with a primary electron, it leaves a vacancy in
that electron shell. In order to re-establish the lowest energy state in its orbitals following an
ionization event, an electron from an outer shell of the atom may “fall” into the inner shell
and fill the spot vacated by the displaced electron. In doing so this falling electron loses
energy, which is dissipated as a photon, usually in the x-ray energy range.
The SEM can be set up in such a way that the characteristic X-ray of a given element is
detected and its position recorded or “mapped.” These X-ray maps can be used to form an
image of the sample that shows where atoms of a given element are localized. The
resolution of these X-ray maps is on the order of 1 um or greater. More on this later.

In addition to characteristic X-rays, other X-rays are produced as a primary electron
decelerates in response to the Coulombic field of an atom. This “braking radiation” or
Bremsstrahlung X-ray is not specific to the element that causes it, so these X-rays do
not contribute useful information about the sample and in fact contribute to the background
X-ray signal.

Auger Electrons (AE):

Auger electrons are also produced when an outer shell electron fills the hole vacated by an
inner shell electron that is displaced by a primary or backscattered electron. The excess
energy released by this process may be carried away by an Auger electron. Because the
energy of these electrons is approximately equal to the difference between the two shells,
like x-rays, an Auger electron can be characteristic of the type of element from which it was
released and the shell energy of that element. By discriminating between Auger electrons of
various energies Auger Electron Spectroscopy (AES) can be performed and a chemical
analysis of the specimen surface can be made. Because of their low energies, Auger
electrons are emitted only from near the surface. They have an escape depth of between 0.5
and 2 nm, making their potential spatial resolution especially good, nearly that of the
primary beam diameter. One major problem associated with this is the fact that most SEMs
deposit small amounts (monolayers) of gaseous residues (remnants inside the vacuum
chamber) on the specimen, which tend to obscure those elements on the surface. For this
reason a SEM that can achieve ultrahigh vacuum (10⁻¹⁰ Torr) is required. Also, the surface
contaminants of the specimen must be removed in the chamber to expose fresh surface. To
accomplish this, further modifications to the SEM (ion etching, high temperature cleaning,
etc.) are needed. Unlike characteristic X-rays, Auger electrons are produced in greater amounts by
elements of low atomic number. This is because the electrons of these elements are less tightly
bound to the nucleus than they are in elements of greater atomic number. Still, the sensitivity of AES
can be exceptional, with elements being detected that are present in only hundreds of parts per
million concentration.

Cathodoluminescence (CL):

Certain materials (notably phosphor-containing ones) will release excess energy in the
form of long wavelength photons when electrons recombine to fill holes made by the
interaction of the primary beam with the specimen. By collecting these photons using a light
pipe and photomultiplier (similar to the ones utilized by the secondary electron detector),
these photons of visible light can be detected and counted. An image can be built up
in the same point by point manner too. Thus, despite the similarity of using a light signal
to form the final image, resolution and image formation are unlike those in a
light optical microscope. The best possible image resolution using this approach is estimated
at about 50 nm. Usually complex parabolic reflectors are used to enhance the collection
efficiency of CL photons.

Specimen Current (SC):

One rather elegant method of imaging a specimen is by measuring the specimen current.
Specimen current is defined as the difference between the primary beam current and the
total emissive current:

I_SC = I_0 - (I_BSE + I_SE + I_Auger + ...)

Thus specimens that have stronger emissive currents have weaker specimen currents and
vice versa. Imaging by way of specimen current has the advantage that the position of the
detector relative to the specimen is irrelevant, since the detector is actually the specimen
itself. It is most useful for imaging material mosaics at very small working distances. Also,
since most of the initial beam energy goes into producing either BSE or specimen current,
the latter is just about the inverse of the former. In fact, by electronically inverting the SC
signal you end up with a close approximation of the BSE signal.
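To make the bookkeeping concrete, here is a minimal sketch of the current balance above
(in Python; the current values are hypothetical, chosen only for illustration):

def specimen_current(i_primary, emissive_currents):
    """Return I_SC = I_0 - (sum of all emissive currents); same units in and out."""
    return i_primary - sum(emissive_currents)

i0 = 1.0e-9                    # assumed primary beam current, 1 nA
emitted = [0.35e-9, 0.20e-9]   # assumed BSE and SE currents
print(specimen_current(i0, emitted))   # ~4.5e-10 A

Note the inverse relationship: raising either emissive current lowers the computed specimen
current, which is why an electronically inverted SC signal approximates the BSE image.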

Transmitted Electrons (TE):


Yet another method that can be used in the SEM to create an image is that of transmitted
electrons. Like the secondary and backscattered electron detectors, the transmitted electron
detector comprises a scintillator, a light pipe (or guide), and a photomultiplier. The
transmitted electron detector differs primarily in its position relative to the specimen, which
in the case of TE is below the sample. The sample must, of course, be thin enough to allow
electron transmission (usually <1 um).
Shape of Electron Beam Interaction Zone
As each electron of the primary beam strikes the specimen it is deflected and slowed
through interactions with the atoms of the sample. To calculate a hypothetical trajectory of a
primary beam electron within a specimen, a "Monte Carlo" simulation is performed. Using
values for mean free path, angle of deflection, change in energy, and likelihood of a given
type of collision event for a primary electron, the trajectory can be approximated using a
random number factor (hence the name Monte Carlo) to predict the type of collision. By
performing this simulation for a number (100 or greater) of primary electrons of a given
energy striking a specimen of known composition, the geometry of the region of primary
electron interaction can be approximated.
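The sketch below shows the flavor of such a simulation in Python. It is deliberately
simplified: the fixed mean free path, constant energy loss per collision, and Gaussian
angular deflection are toy assumptions standing in for the real, energy- and material-
dependent cross sections a production Monte Carlo code would use.

import math, random

def max_depth_nm(e0_kev, mfp_nm=10.0, loss_kev=0.5):
    """Follow one electron until its energy is spent or it exits; return deepest point (nm)."""
    x = z = theta = 0.0        # lateral position, depth, angle from inward normal
    e, deepest = e0_kev, 0.0
    while e > 0.5 and z >= 0.0:
        step = -mfp_nm * math.log(1.0 - random.random())  # exponential free path
        x += step * math.sin(theta)
        z += step * math.cos(theta)
        deepest = max(deepest, z)
        theta += random.gauss(0.0, 0.5)   # random deflection at each collision
        e -= loss_kev                     # crude continuous energy loss
    return deepest

random.seed(0)
depths = [max_depth_nm(20.0) for _ in range(100)]   # 100 primary electrons at 20 keV
print(sum(depths) / len(depths))                    # mean penetration depth, nm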

The size and shape of the region of primary excitation is dependent upon several factors, the
most important of which are the composition of the specimen and the energy with which the
primary electrons strike the sample. A primary electron beam with a high accelerating
voltage will penetrate much more deeply into the sample than will a beam of lower energy.

Likewise, the shape of the primary excitation zone will vary depending on the atomic
number of the specimen. Atoms of higher atomic number are significantly more likely to
scatter the primary beam electrons than those of low atomic number. This causes an
electron to undergo more interactions (shorter mean free path), and of a different nature
(greater change in angle and loss of energy), than would the same electron in a specimen of
lower atomic number. A beam interacting with such a sample therefore does not penetrate
as deeply as it would into a specimen of lower atomic number.
[Figure: Depth of Penetration vs. Average Atomic Number, comparing low-Z and high-Z specimens]
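A widely used back-of-the-envelope estimate of this penetration depth is the
Kanaya-Okayama range. The sketch below evaluates it for a low-Z and a high-Z target; the
material constants are standard handbook values, and the result should be read as an
order-of-magnitude guide only.

def kanaya_okayama_um(e0_kev, a_wt, z, rho):
    """Kanaya-Okayama range in micrometers: beam energy in keV, atomic
    weight in g/mol, atomic number, density in g/cm^3."""
    return 0.0276 * a_wt * e0_kev**1.67 / (z**0.89 * rho)

print(kanaya_okayama_um(20.0, 12.01, 6, 2.26))    # carbon at 20 kV: ~4.4 um
print(kanaya_okayama_um(20.0, 196.97, 79, 19.3))  # gold at 20 kV:   ~0.9 um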

Another factor that affects the geometry of the primary excitation zone is the angle of the
incident beam. Because electrons tend to scatter in the forward direction, a tilted beam
propagates closer to the surface than a head-on beam, and the resulting signal comes from a
slightly smaller, shallower region. This is another reason for tilting the sample slightly
towards the detector.

Finally, the dimensions of the tear-drop zone depend on the diameter of the incoming spot.
The smaller the initial spot, the smaller the region of primary excitation. Because the
tear-drop zone is always larger than the diameter of the primary beam spot, the resolution of
an SEM is not equal to the smallest beam spot but is proportional to it.

Each of the various types of signals produced from interactions of the primary beam with the
specimen has a different amount of energy associated with it. Because of this, and because the
different signals are transmitted or absorbed by the sample to different degrees, they are emitted
from different parts of the region of primary excitation. At the top of the tear drop, near the very
surface of the specimen, is the region from which Auger electrons are emitted. Because they have
such a low energy, Auger electrons cannot escape from very deep in the sample even though they
may be created there by primary or even backscattered electrons. This narrow escape depth explains
why Auger electron spectroscopy is only useful for resolving elements located in the first monolayer
of a specimen and why its resolution is nearly the same as the size of the primary electron beam.
Beneath the region from which Auger electrons are emitted is the region of secondary electron
emission. Because secondary electrons have a higher energy, and therefore a greater escape depth,
the region of secondary electron emission is not only deeper into the specimen but broader in
diameter than the zone of Auger electron emission. The two regions are not mutually exclusive, and
secondary electrons are emitted from the uppermost layers of the sample as well.

Backscattered electrons have an even greater energy than either secondary or Auger
electrons. Consequently they are capable of escaping from even greater depths within the
sample. For this reason the depth and diameter of the region from which backscattered
electrons are emitted is greater than that for secondary electrons, and the resulting resolution
of a backscatter image is that much lower. The deepest usable signal caused by penetration
of the primary beam comes in the form of characteristic X-rays. Because the X-ray
emission zone is so large, the resolution that can be obtained is usually quite poor. Despite
this, however, characteristic X-rays can provide valuable information about the chemical
composition of a specimen even in cases where a thin layer of some other material (e.g.
gold-palladium for conductivity) has been deposited on top. One other signal, the "white
X-rays" or "X-ray continuum," is also produced when the nucleus of an atom scatters
electrons (primary or backscattered) and releases excess energy. Because it is not
characteristic of the element that formed it, the X-ray continuum is merely a form of
background signal that must be accounted for in measuring characteristic X-rays.

Sample Preparation Techniques


The SEM system is particularly useful because many sample types are easily viewable with little or
no sample preparation. There are times, however, when samples will need some preprocessing to be
compatible with the SEM system. In particular, biological materials and bulk pieces of
non-conducting materials will generally need some additional preparation.

For biological materials the first hurdle to overcome is the presence of water in living tissue. While
water is good for life it is bad for vacuum systems. As a result biologicals need to have all the water
removed before going into the SEM vacuum chamber. If this is not done, the water will still leave
your sample but it will destroy the surfaces as it leaves (imagine a sample outgassing violently or
“popping”).
Critical Point Drying

In removing water during sample preparation it is also necessary to do so without disrupting the
surfaces. This means that you can't (generally) just heat up a biological sample to dry it. After
toughening up the tissues with an appropriate fixation method, you need to go through what is
called a "dehydration series," in which the water is gradually replaced with another solvent that can
be conveniently removed. Commonly a graded ramp of alcohol (or acetone)-water mixtures is used.
This leaves the samples solvent-wet. These solvents can be replaced with liquid CO2 in a high
pressure bomb, which can then be removed by transitioning the temperature and pressure of the
bomb past the "critical point" in the CO2 phase diagram. By doing this the interfacial tension
between the liquid and gas phases is eliminated. Subsequent venting of the gas phase will then leave
the samples free of water, solvent, and CO2. This process is called "critical point drying".
[Figure: CO2 phase diagram]
HMDS (Hexamethyldisilazane) Drying

An alternate procedure for drying wet samples is to follow the SOP for HMDS replacement of the
transitional fluid. A commercial example of the procedure is reproduced below:

[Figure: commercial HMDS drying procedure]

Note: Up to the point of using the HMDS, the procedure is identical to that for CPD.

Another (more conservative) HMDS protocol is:

1. Initially fix and store in 70% ethanol.
2. 95% ethanol, 20-30 minutes.
3. 100% ethanol (3 changes), 20-30 minutes.
4. 100% ethanol : HMDS 2:1, 30 minutes.
5. 100% ethanol : HMDS 1:2, 30 minutes.
6. 100% HMDS (3 changes), 60 minutes each.
Remove HMDS from samples and leave to dry overnight in fume hood.
The second hurdle to overcome is common to both biologicals and electrically insulating
non-biologicals: when we use an electron beam for irradiation, a localized charge can be induced on
the surfaces of a sample. To avoid this situation, non-conductors usually need to have a conformal
layer of conductive material applied. This is done using some type of vacuum deposition equipment.

Specimen Coating
A coating serves a number of purposes, including: a) increased conductivity, b) reduction of
thermal damage, c) increased secondary and backscattered electron emission, and d)
increased mechanical stability.

Conductivity is the single most important reason for coating a specimen. As the primary
beam interacts with a specimen, the electrical potential must be dissipated in some way. For
a conductive specimen, such as most metals, this is not a problem: the charge is conducted
through the specimen and eventually grounded by contact with the specimen stage. On the
other hand, non-conductive specimens (insulators) cannot dissipate this excess negative
charge, so localized charges build up. This gives rise to an artifact known as charging.
Charging results in deflection of the beam, deflection of some secondary electrons,
periodic bursts of secondary electrons, and increased emission of secondary electrons from
crevices. All of these serve to degrade the image. In addition to coating the sample, the
specimen should be mounted on the stub in such a way that a good electrical path to ground
is established. This is usually accomplished through the use of a conductive adhesive, such
as silver or colloidal carbon paint, or conductive tapes.

A conductive coating can also be useful in dissipating the heating that can occur when the
specimen is bombarded with electrons. By rapidly transferring the electrons of the beam
away from the region being scanned, one avoids the build up of excessive heat.

Because secondary electrons are more readily produced by elements of high atomic number
than by those of low atomic number, a thin metal coating on a specimen can result in a
greatly improved image compared with a similar uncoated specimen. In cases where
backscattered electrons or characteristic X-rays are of primary interest, a coating of heavy
metal such as gold or gold/palladium can obscure the differences in atomic number that we
might be trying to resolve. In this case a thin coating of a low atomic number element (e.g.
carbon) serves the purpose of increasing conductivity without sacrificing compositional
information.

The fourth and final purpose of using conductive coatings is to increase mechanical stability.
Although this is somewhat related to thermal protection, very delicate or beam sensitive
specimens can benefit greatly from a thin layer of coating material that actually serves to
hold the sample together. Fine particulates are a prime example of a case where a coating of
carbon or heavy metal can add physical stability to the specimen.

Many of the negative effects of imaging an uncoated specimen may be reduced by using a
lower energy primary beam to scan the sample. While this will tend to reduce such things
as localized charge buildup, thermal stress, and mechanical instability, it has the distinct
disadvantage of reducing the overall signal. By carefully adjusting factors such as
accelerating voltage and spot size, many of these same effects can be reduced, but a thin
coating on the specimen is still usually required.

Sputter Coating

One of the most common ways to achieve a conformal conductive coating is to use a low
vacuum device called a sputter coater. In this device a potential of about 1 kV is used to
ionize a residual gas species in a vacuum bell jar. The ions are then accelerated toward a
metallic cathode such that they dislodge metal atoms, which uniformly fill the vacuum
chamber. Any samples within this chamber are then coated with a fine-grained thin metal
film. One distinct advantage of this technique is that, since it is accomplished in a low
vacuum, the metal atoms undergo a number of scattering events and have a high probability
of reaching the sample surfaces from a variety of directions, making the coating of edges
and fine structures uniform. Argon is the usual sputtering gas, and it is bled into the vacuum
system as a backfill after primary evacuation. This type of coater, therefore, only needs a
simple rotary pump to accomplish the evacuation. The most common metals for the
cathodes in a sputter coater are gold, silver, and a gold-palladium alloy. Although each of
these metals is quite costly, a typical foil target will last for years of normal operation. The
usual coating thickness needed for a fully coalesced island film that is adequately
conductive is between 50 and 200 Angstroms. The structure of these coatings is normally
below the resolution needed to view most samples, although at high magnifications
(>200,000x) the islands appear as high spatial frequency roughness.
[Figure: sputter coater]

High vacuum evaporation

There are some instances where a sputter coated conductive layer will be inappropriate for
samples. The most common circumstance is when x-ray analysis is to be performed.
Obviously, if a thin metal coating covers a sample, the atoms of this film will be excited by
the primary electron beam. They will then contribute to the x-ray emission spectrum.
To avoid this contribution, a low atomic number material like carbon, which is sufficiently
conductive but has only a single low energy peak in the x-ray spectrum, can be applied.
Carbon, however, does not sputter well. It needs to be applied using an evaporative
technique in a high vacuum system. This usually involves a diffusion or turbomolecular
pumped bell jar and high current feedthroughs to heat carbon rods to their evaporation
temperature. Since evaporation is a "line-of-sight" coating mechanism, complex sample
rotation and tilting stages are common for samples that are not flat.

[Figure: thermal high vacuum evaporator]


X-ray Microanalysis

X-RAY SIGNAL GENERATION


Signal Origin

The interaction of the electron beam with the specimen produces a variety of signals, but the most
useful to electron microscopists are these: secondary electrons (SE), backscattered electrons (BSE),
and x rays. The SE signal is the most commonly used imaging mode and derives its contrast
primarily from the topography of the sample. For the most part, areas facing the detector tend to be
slightly brighter than the areas facing away from the SE detector, holes or depressions tend to be
very dark, and edges and highly tilted surfaces are bright. These electrons are of a very low energy
and are easily influenced by voltage fields.

The BSE signal is caused by the elastic collision of a primary beam electron with a nucleus within
the sample. Because these collisions are more likely when the nuclei are large (i.e. when the atomic
number is large), the BSE signal is said to display atomic number contrast or “phase” contrast.
Higher atomic number phases produce more backscattering and are correspondingly brighter when
viewed with the BSE detector.

X-ray signals are typically produced when a primary beam electron causes the ejection of an inner
shell electron from the sample. An outer shell electron takes its place but gives off an x ray whose
energy is related to the nuclear charge of the atom and the difference in energies of the electron
orbitals involved. The Ka x-ray results from a K shell electron being ejected and an L shell electron
moving into its position. A Kb x ray occurs when an M shell electron moves to the K shell. The Kb
will always have a slightly higher energy than the Ka and is always much smaller. Similarly, an La
x ray results from an M shell electron moving to the L shell to fill a vacancy (see Figure below).
The occurrence of an Lb x ray means that an electron made the transition from the N shell to the
L shell. The Lb is always smaller and at a slightly higher energy than the La. The L-shell x rays of
an element are always found at lower energies than its K lines. Because the structure of the electron
orbitals is considerably more complex than is shown below, there are actually many more L-shell
x-ray lines that can be present (it is not uncommon to see as many as 5 or 6). M-shell x-ray peaks,
if present, will always be at lower energies than either the L or K series.
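Moseley's law gives a quick feel for why these line energies climb with atomic number. The sketch
below uses the simple hydrogen-like approximation E(Ka) ~ 10.2 eV x (Z-1)^2; it lands within a
few percent of tabulated Ka energies for light and mid-Z elements, but it is a rough estimate, not a
lookup table.

def ka_kev(z):
    """Approximate Ka energy (keV) from Moseley's law: 10.2 eV * (Z-1)^2."""
    return 10.2e-3 * (z - 1)**2

for sym, z in [("Al", 13), ("Ti", 22), ("Fe", 26), ("Cu", 29)]:
    print(sym, ka_kev(z))   # ~1.47, ~4.50, ~6.38, ~8.00 keV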

In energy-dispersive spectroscopy (EDS), the x rays are arranged in a spectrum by their energy,
(most commonly) from low energy to high energy. Typically, the energies from 0 to 10 keV will be
displayed, allowing the user to view the K-lines from Be (Z = 4) to Ga (Z = 31), the L-lines from
Ca (Z = 20) to Au (Z = 79), and the M-lines from Nb (Z = 41) to the highest occurring atomic
numbers. From the interpretation of the x-ray signal, we derive qualitative and quantitative
information about the chemical composition of the sample at the microscopic scale.

Spatial Resolution

The spatial resolutions of these signals are significantly different from each other. The figure below
is a representation of the depth of penetration of the electron beam into a sample. No scale has been
placed on this image, but the depth of penetration increases as the accelerating voltage of the
primary beam is increased. It will also be deeper when the sample composition is of a lower density
and/or is of a relatively low average atomic number. All three of the signals discussed above are
produced throughout this interaction volume, provided the beam electrons still have enough energy
to generate them. However, signals of lower energy that are generated at considerable depth in the
sample may be absorbed and never escape from the sample.

The SE signal is readily absorbed, and therefore we are only able to detect the SE signal that
originates relatively close to the surface (i.e. less than 10 nm deep). The BSE signal is of higher
energy and is able to escape from a more moderate depth within the sample, as shown. The x-ray
signal can escape from a greater depth, although x-ray absorption actually varies with x-ray energy.
For example, the oxygen K x ray is of relatively low energy and can only escape from the
near-surface region of the sample, while the iron K x ray is of significantly higher energy and can
escape from a greater depth. In quantitative x-ray analysis, it is possible to compensate for these
effects with the absorption correction.
Although the discussion thus far has only mentioned the depths from which the signal can emerge,
the width of the signal is proportional to its depth and provides an estimate of the signal resolution.
Because the SE signal that is generated at even relatively shallow depths is absorbed, its resolution
depends primarily on the position of the entering electron beam which has a spread related to the
electron probe diameter or the spot size of the electron beam. The x-ray signal can emerge from
greater depths (especially high-energy x rays) and the lateral spread of the primary beam electron
can be quite large relative to the beam diameter. The only effective way to improve the resolution
of these signals is to decrease the accelerating voltage which will decrease the beam penetration.
The BSE signal is in between these two extremes and its resolution can be improved by decreasing
the spot size to some extent, but the relationship between spot size and resolution is not as direct for
the BSE signal as for the SE signal. The resolution of the BSE signal can also be improved by
lowering the accelerating voltage, although this usually means having to increase the gain of the
BSE detector and may result in a degradation of the signal-to-noise ratio of the image.

Directionality of Signals

All of the signals that emerge from the sample can be considered to be directional to at least some
extent. A directional signal can be recognized in a photomicrograph of a sample that displays
topography because there will be a very harsh contrast such that surfaces that face the detector will
be bright and surfaces that face away from the detector will be dark. If the trajectory of the signal
can be altered to favor detection, or if a symmetric array of detectors is employed, the effect of
directionality is minimized.

The trajectories of the SE signal are influenced by a positive voltage on a wire mesh network in
front of the detector, which attracts the SE from the sample, even from surfaces that face away from
the detector. BSE detectors are typically arranged in an array such that they collect signals from a
large, symmetrically arranged area. Some BSE detectors consist of two or more segments, and the
apparent illumination direction in the imaged area changes drastically depending on which detector
or segment is used for imaging. When all segments are selected, the result is a balanced, symmetric
image that does not show an apparent directionality in its illumination.

The x-ray signal is effectively the most directional of all the signals because there is only one
detector and it is usually at a 35 degree angle to the surface of the sample. There is certainly no
simple way to influence the trajectory of x rays to increase the efficiency of the detector. As a
result, if one is trying to collect the x-ray signal from a surface that slopes away from the detector,
the x-ray count rate will be greatly diminished. If a topographic high area is between the imaged
area and the x-ray path to the detector, there will also be few detected x rays. The effects of
directionality for the x-ray signal are greatly diminished when working with polished samples rather
than samples with a large topographic variation.

The Analysis of Rough Surfaces or Particles

There are difficulties associated with trying to analyze samples with anything other than a smooth
surface. The figure on the next page shows a cross-sectional configuration with a detector at right
and a small sample or portion of a sample at the left. The interaction volume is shown when the
beam is at three different locations (‘A’, ‘B’ and ‘C’) and the shaded region represents the area from
which a low-energy signal may escape from the sample without being absorbed. The arrows
represent the flux of these low-energy signals which will be highest at ‘A’ and at ‘C’ but relatively
low at ‘B’.

If the low-energy signal is the secondary electron signal and the detector at the right is the
secondary electron detector (with a positive grid voltage), then the edges near ‘A’ and ‘C’ will
appear brighter than the center of the sample at ‘B’. Alternatively, if the low-energy signal is some
relatively low energy x ray (such as aluminum in a nickel alloy, or oxygen in a mineral, or the
copper ‘L’ x-ray signal as compared to the copper ‘K’ x rays), and the detector is the EDS detector,
then the height of the low energy peak would be highest at 'C' and lowest at 'A'. Note that even
though there would be a high flux of low-energy x rays leaving the sample at 'A', these will not be
detected because there is no detector positioned so as to intercept the signal.

Another difficulty in analyzing particles or rough surfaces is when the surface of the particle has a
slope which is not parallel to the surface of the stage. In the figure shown above, the upper surface
is shown parallel to the stage and our consideration was really centered on what might be regarded
as “edge” effects. However, when the sample surface is inclined toward the detector at a greater
angle than the stage surface, then the take-off angle will be greater than that of a parallel surface. If
our sample consists of a single, non-parallel sloping surface, then its take-off angle could be
determined and an accurate analysis performed, provided that we have some way of knowing the
local surface tilt of the sample. If the sample is rough and consists of many surfaces of variable
orientation, then it is unlikely that a reliable analysis can be performed.
ENERGY DISPERSIVE X-RAY SPECTROMETRY--
EDS INSTRUMENTATION & SIGNAL DETECTION
ALAN SANDBORG
EDAX INTERNATIONAL, INC.

1. X-Ray Detectors:

The EDS detector is a solid state device designed to detect x-rays and convert their energy into
electrical charge. This charge becomes the signal which, when processed, identifies the x-ray
energy and hence its elemental source.

An x ray, in its interaction with a solid, gives up its energy by producing electrical charge carriers
in the solid. A solid state detector can collect this charge. One of the desirable properties of a
semiconductor is that it can collect both the positive and negative charges produced in the detector.
The figure below shows the detection process.
[Figure 1: the x-ray detection process in a solid state detector]

There are two types of semiconductor material used in electron microscopy: silicon (Si) and
germanium (Ge). In Si, it takes 3.8 eV of x-ray energy to produce a charge pair, while in Ge it takes
only 2.96 eV. The other properties of these two types will be discussed later in this section. The
predominant type of detector used is the Si detector, so it will be favored in the discussions. With a
Si detector, an O K x-ray whose energy is 525 eV will produce 525/3.8 = 138 charge pairs. A Fe K
x-ray will produce 6400/3.8 = 1684 charge pairs. So by collecting and measuring the charge, the
energy of the detected x-ray can be determined. The charge is collected at the terminals
electrostatically by a bias voltage applied across the detector. This voltage is 500 to 1000 volts for
most detectors. The transit time for the charge is ~50 nanoseconds.
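The charge-pair arithmetic above is easy to reproduce; a trivial sketch:

EPS = {"Si": 3.8, "Ge": 2.96}   # eV of x-ray energy per charge pair

def charge_pairs(e_ev, material="Si"):
    return e_ev / EPS[material]

print(round(charge_pairs(525)))    # O K in Si:  ~138 pairs
print(round(charge_pairs(6400)))   # Fe K in Si: ~1684 pairs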

Since the desired signal is made up of moving charge, any stray moving charges (currents) must be
minimized. A semiconductor produces a thermally generated current, which must be reduced by
lowering the temperature of the detector so that it does not appear as noise relative to the signal.
This involves cooling the detector with liquid nitrogen (77 K). Si detectors need only be cooled to
~90 K, so some thermal losses can be tolerated. The Ge detector produces more current because of
its smaller band gap (which also causes the signal to be larger), so the cooling requirement is more
critical. Cooling the detectors with mechanical devices is not usually done when high image
resolution is required because the vibration they produce cannot be tolerated. Liquid nitrogen
cooling is almost universally used under these conditions.

In summary, the function of the detector is to convert x-ray energy into electrical charge. Later the
processing of this signal will be discussed.

2. The detector efficiency.


Note in Figure 1 that the detector is made up of layers, some integral to the detector and some
external to it. An x-ray which strikes the detector's active area must be absorbed by it in order for
the charge signal to be created. The absorption of x-rays in a layer of thickness t is given by Beer's
Law:

I / I0 = exp(-μρt)

Where:
I = final intensity
I0 = initial intensity
μ = mass absorption coefficient
ρ = density
t = thickness

With this equation the efficiency of the detector can be calculated, taking into account the absorbing
layers in front of the detector (Be or polymer window, Au metalization, and dead layer) as well as
the thickness and material (Si or Ge) of the active region of the detector. Figure 2 shows a set of Si
detector curves for various detector thicknesses and an 8 micron Be window. The 3 mm thickness
would be most typical for commercially available detectors.
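As a numerical illustration, the sketch below applies Beer's Law to a single absorbing layer. The
mass absorption coefficient used is a placeholder value; a real efficiency calculation would look up
μ for each layer at each x-ray energy and multiply the transmissions of all the layers together.

import math

def transmission(mu_cm2_g, rho_g_cm3, t_cm):
    """Beer's Law: I/I0 = exp(-mu * rho * t)."""
    return math.exp(-mu_cm2_g * rho_g_cm3 * t_cm)

# an 8 micron Be window (rho = 1.85 g/cm^3) for a soft x ray with an
# assumed mass absorption coefficient of 1500 cm^2/g:
print(transmission(1500.0, 1.85, 8e-4))   # ~0.11, i.e. ~11% transmitted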

The window material has the most dramatic effect on the low energy efficiency of either type of
detector. Be windows have been used since the very first EDS detectors, but after the early 1980's,
ultra thin windows made of polymers have also become available, and in the last few years have
become widely used. The Be window is typically between 7 and 12 microns in thickness, with the
very thinnest available about 5 microns thick. These only allow practical detection of the x-rays of
Na (Z=11) and higher atomic numbered elements. The polymer type of super ultra thin windows
(SUTW) allow detection of possibly Be (Z=4), or certainly B (Z=5) and above. Table 1 shows a
comparison of the efficiency of an ultra thin window to a Be window.

Table 1. Transmission of K x-rays through various windows

Window Type           B      C      N      O      F
8 micron Be           0%     0%     0%     0%     5%
Ultra Thin Polymer    25%    85%    42%    60%    70%

If there are contamination layers present on either the window or detector, then the low energy
efficiency will be adversely affected. These layers could be due to oil contamination on the detector
window or ice on the detector itself. All precautions to prevent their formation should be taken.
Each manufacturer will have a procedure to remove them if they are formed.

3. The geometrical efficiency.

The collection of x-rays by the detector is affected by the solid angle that the detector area
intercepts of the isotropically emitted x-rays from the sample. The largest possible solid angle is
desirable, especially when small regions of the sample are being investigated. The solid angle (Ω)
in steradians is given by:

Ω = A / d^2

Where:
A = detector area, mm^2
d = the sample-to-detector distance, mm

The area is most commonly 10 or 30 mm^2 and the sample-to-detector distance varies from 10 mm
to 70 mm, depending on the specific design of the microscope/detector interface. Solid angles of
~0.3 to 0.02 steradians are most common.
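For example (geometries assumed, but within the ranges quoted above):

def solid_angle_sr(area_mm2, dist_mm):
    """Solid angle = A / d^2 (small-angle approximation)."""
    return area_mm2 / dist_mm**2

print(solid_angle_sr(30.0, 10.0))   # 0.3 sr: large detector, close in
print(solid_angle_sr(10.0, 20.0))   # 0.025 sr: small detector, farther away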

4. Signal Processing:

The charge from each x-ray event entering the detector must be processed and stored in a memory
in order to make up a spectrum from a sample. The processing is outlined below:

a. preamplifier:
The preamplifier has the function of amplifying the charge signal and converting it to a voltage
signal. A schematic is shown on the following page.

[Figure: the preamplifier, a charge-sensitive amplifier: the detector feeds an FET with feedback capacitor C, a reset line, and a voltage output]

The FET is mounted adjacent to the detector and is cooled to ~120 K to reduce its electrical noise.
The configuration shown is called a charge sensitive amplifier, and it converts charge at its input to
a voltage output. C is a feedback capacitor. The FET must be reset periodically. This is
accomplished by various methods, the most common being the pulsed optical feedback method.
[Figure: output signal of an x-ray event, voltage (mV) vs. time]

The output signal of an x-ray event is shown above. The signal is small and has noise from the FET
added to it. The voltage step, Δv, encodes the energy of the x-ray event, and it must be processed
further to amplify its size and increase its signal-to-noise ratio.

b. The Signal processor.

The processor is usually an analog amplifier system. Digital signal processors are not yet in
widespread use, so will not be considered here. The amplifier system has several functions:

1. Amplify the signal to the 0-10 volt range.
2. Filter the signal to optimize the signal-to-noise ratio (optimize the energy resolution).
3. Detect x-ray events that overlap or "pile up" on one another, creating erroneous information.

The filtering can be done with various kinds of networks, producing pulses that are Gaussian,
triangular, or trapezoidal in shape. The networks can have several different time constants
available. The effect is to produce pulses that are about 100 microseconds wide in order to achieve
the best energy resolution. This increases the probability of pulse pileup and the need to correct
for it.

In order to detect the presence of pulse pileup, a second amplifier system is used just to detect x-ray
events. This is often called an inspector channel. The inspector channel senses when two events
will pile up and rejects those events. The quality of the spectrum is maintained, but dead time is
introduced, which increases at higher count rates.

For each time constant of the amplifier, a maximum throughput exists, which should not be
exceeded, but which can be approached. If a higher count rate is desired, then a smaller time
constant can be used, which will have a higher throughput, but a poorer resolution.

Throughput curves for two time constants are shown below.


[Figure: throughput curves, stored CPS vs. input CPS, for 50 us and 100 us time constants]

c. The Multichannel analyzer.

The processed pulses are digitized using an analog-to-digital converter. This device measures the
voltage height of each event and sorts the events into a multichannel memory. This memory is
organized such that each channel represents 10 eV of energy. From this digitized spectrum, the
x-ray intensities from each element can be obtained. A spectrum is shown below.

5. Energy Resolution.

The resolution of the peaks observed generally follows the relation:

FWHM = SQRT[ (FWHM_noise)^2 + (2.35 * SQRT(F * E * ε))^2 ]

Where:
F = Fano factor = 0.11
E = energy of the x-ray, eV
ε = 3.8 eV/charge pair (Si), 2.96 eV/charge pair (Ge)

FWHM is the full width of the peak at half its maximum, in eV. The noise term is due to the
electronic noise, primarily from the FET, after signal processing has been applied. Its value can
vary from 40 eV to 100 eV depending on the detector material, size, and age of the system. A
typical detector may give resolutions as shown below. Older detectors may show poorer resolution
below 2 keV because of dead layer effects. Newer detectors show performance closer to the
theoretical values due to improved manufacturing techniques.

[Figure: resolution (FWHM, eV) vs. energy (keV) for 70 eV noise]
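The relation is simple to evaluate; this sketch reproduces one point on the curve above (Mn Ka, the
energy at which resolution is usually quoted), assuming a Si detector and 70 eV of electronic noise:

import math

def fwhm_ev(e_ev, noise_ev=70.0, fano=0.11, eps_ev=3.8):
    """FWHM = sqrt(noise^2 + (2.35*sqrt(F*E*eps))^2), everything in eV."""
    return math.sqrt(noise_ev**2 + 2.35**2 * fano * e_ev * eps_ev)

print(fwhm_ev(5895))   # Mn Ka: ~136 eV for a Si detector with 70 eV noise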

6. Collimation:

The detector must be shielded from x-rays produced at locations other than the sample. X-rays are
produced wherever scattered electrons strike portions of the column. The electron microscope itself
must be designed to minimize such x-rays reaching the detector through the sides of the detector. A
collimator is also used to prevent the detector from "seeing" materials around the sample which
may be exposed to scattered electrons or x-rays from apertures in the column. In special cases in
the transmission electron microscope, a specially designed low background sample holder is used,
made of Be or another low atomic number material.

Important EDS Parameters
Count Rate

For a good quality spectrum (i.e. good resolution and fewest artifacts) you should use the 50 or
100 us time constant (pulse processing time) with a deadtime of 20 to 40% and 500 to 2500 cps.
These are good numbers if the sample consists largely of high energy peaks (> 1 keV), but if the
spectrum is dominated by low energy peaks (< 1 keV) then a count rate of 500-1000 cps is better
and the 100 us time constant should be used.

When maximum count throughput is required, such as when collecting fast x-ray maps, a faster time
constant (2.5 to 10 us) should be used with a count rate of 10,000 to 100,000 cps. The deadtime
should not exceed 50 to 67%. These conditions may not be optimum for low energy peaks in terms
of their resolution and/or position. A lower count rate and slower time constant should be used,
and/or it might be necessary to adjust the position of the ROI prior to collecting the map.

Accelerating Voltage

The overvoltage is the ratio of the accelerating voltage used to the critical excitation energy of a
given line for an element. Typically, the overvoltage should be at least 2 for the highest energy line
of interest and no more than 10 to 20 for the lowest energy line of interest. We use the number 10
for quantitative applications and 20 for qualitative applications.

For example, if you are interested in analysis of a phase containing Fe, Mg and Si and want to use
the K lines for each, then 15 kV will probably work reasonably well. If, however, you need to
analyze the same three elements plus oxygen as well, then you might use 5 to 10 kV, but you might
want to use the L line for the Fe.

Why should the overvoltage be at least 2 for the highest energy element? Because at lower
overvoltages the fraction of the interaction volume in which the element can be excited becomes
very small, and you will not be able to generate very many x rays of that energy.

Why should the overvoltage be less than 10 to 20 for the lowest energy peak? When the
overvoltage is excessive, the proportion of the interaction volume from which the low energy
x rays can escape without being absorbed also becomes small. The result is a small peak, and in
the case of quantification there will be a strong absorption correction which will magnify the
statistical errors in our analysis.
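The sketch below runs the overvoltage check for the Fe/Mg/Si example above. The K-edge critical
excitation energies are approximate values quoted from memory and should be confirmed against
tabulated edge energies.

EDGES_KEV = {"Fe K": 7.11, "Si K": 1.84, "Mg K": 1.30}   # approximate K edges

def overvoltage(e0_kev, ec_kev):
    return e0_kev / ec_kev

for line, ec in EDGES_KEV.items():
    print(line, round(overvoltage(15.0, ec), 1))
# at 15 kV: Fe K 2.1 (just above the minimum of 2); Si K 8.2;
# Mg K 11.5 (slightly above the ideal quantitative limit of 10)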

Take-Off Angle

Typical take-off angles will range from 25 to 40 degrees. This angle is a combination of the
detector angle, its position, sample working distance and sample tilt. The sensitivity for very low
energy x rays and/or signals characterized by high absorption can be enhanced by increasing the
take-off angle. Some inclined detectors (e.g. a detector angle of approximately 35 degrees above
the horizontal) do not require sample tilt. Horizontal entry detectors require that the sample be
tilted to achieve an optimum take-off angle.
EDAX Detector Geometry

The elevation angle (EA) is the angle between the horizontal and the detector normal. The
intersection distance (ID) is the distance in mm below the pole piece at which the electron beam
intersects the detector normal. The azimuth angle cannot be shown in a cross-sectional view such
as that at the left, but it is the angle, as viewed from above, between the detector normal and the
normal to the tilt axis. The working distance (WD) is the distance in mm of the sample below the
pole piece. The take-off angle (TOA) is the angle between the x-ray trajectory and the sample
surface. If the sample is placed at the intersection distance and not tilted, the take-off angle will
equal the elevation angle. If the working distance is shorter than the intersection distance, the
take-off angle will be less than the elevation angle. This assumes that the sample is smooth and
not tilted.

If the working distance is longer than the intersection distance, the take-off angle will be more than
the elevation angle. Again, this assumes that the sample is smooth and not tilted.

If the sample is tilted toward the detector, the take-off angle will be greater than the elevation
angle. If the sample were tilted away from the detector, the take-off angle would be less than the
elevation angle. Note that the azimuth angle must be taken into account to determine the take-off
angle.
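These geometric rules are easy to verify numerically. The sketch below places the detector crystal a
distance s along the detector normal (s and the other numbers are illustrative assumptions, not
EDAX specifications) and computes the take-off angle for a smooth, untilted sample:

import math

def take_off_angle_deg(wd_mm, id_mm, ea_deg, s_mm=50.0):
    """TOA for an untilted sample: angle above horizontal of the line from
    the beam/sample point to a detector s mm along the detector normal."""
    ea = math.radians(ea_deg)
    det_x = s_mm * math.cos(ea)          # horizontal offset of the detector
    det_z = id_mm - s_mm * math.sin(ea)  # detector depth below the pole piece
    return math.degrees(math.atan2(wd_mm - det_z, det_x))

for wd in (10.0, 15.0, 20.0):            # EA = 35 deg, ID = 15 mm assumed
    print(wd, round(take_off_angle_deg(wd, 15.0, 35.0), 1))
# WD < ID gives TOA < EA; WD = ID gives TOA = EA = 35; WD > ID gives TOA > EA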
Dead Time & Time Constants

In an EDS system the real time (or clock time) is divided into live time and dead time. The live
time is the time when the detector is alive and able to receive an x-ray event (i.e. the time when it
is doing nothing) and the dead time is when the detector or preamplifier is unable to accept a
pulse because it is busy processing or rejecting an event(s). Basically, the charge from an x-ray
photon can be collected in about 50 ns, but in the end we take roughly 50 us to process the
filtered, amplified pulse (1000 times longer). Quite often, x-ray photons will come in too close
to each other and we will reject the signals. That is why we can see the situation where we
actually collect very few x-ray events at very high count rates and actually process more counts
when we use a lower count rate.

If we decide to take less time to process the pulse (say, less than a 10 us time constant or pulse
processing time) then we can process more counts. However, because we have taken less time,
there is the possibility that we do not measure the peak energy as accurately and the peaks will
be broader; the resolution will be poorer (the FWHM will be higher).

The Phoenix system has 8 time constants or pulse processing times, which allow for optimum
resolution or x-ray count throughput. Under most circumstances, we would like to have a dead
time between 10 and 40%, and perhaps between 20 and 30% if we would like to tighten the
range of our analytical conditions. There are times when we are most concerned about count
throughput and are not concerned about resolution, sum peaks, etc., such as when we are
collecting x-ray maps. Under these conditions a dead time as high as 50 to 60% is feasible.
Under no circumstances should a dead time of more than 67% be used, because we will actually
get fewer counts processed.
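A simple non-paralyzable dead-time model reproduces this behavior; real pulse processors differ
in detail, so treat the sketch below as qualitative only.

def stored_cps(input_cps, tau_s):
    """Non-paralyzable dead-time model: m = n / (1 + n*tau)."""
    return input_cps / (1.0 + input_cps * tau_s)

tau = 50e-6   # 50 us pulse-processing time
for n in (2000, 10000, 30000):
    m = stored_cps(n, tau)
    print(n, round(m), f"{(1 - m / n) * 100:.0f}% dead")
# 2000 -> ~1818 stored (9% dead); 10000 -> ~6667 (33%); 30000 -> ~12000 (60%)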

A series of plots is included on the following pages. These show: the fall-off in processed
counts when the count rate is increased too far, a comparison of throughput between a fast time
constant and a slower time constant, a plot showing the dead time % for various count rates
and time constants, and a plot of count rate versus dead time % for each of the 8 time constants.
EDX SPECTRUM INTERPRETATION AND ARTIFACTS

Continuum X Rays

Continuum x rays are formed as a result of inelastic scattering of the primary beam electrons, in
which the electrons are decelerated and lose energy without producing an ionization of the atoms in
the sample. The continuum x rays are the background of our EDS spectrum and are sometimes
referred to as the bremsstrahlung. In theory, the continuum can be expected to extend from the
maximum energy of the primary beam electrons and to increase exponentially toward zero keV. In
reality, the measured background falls to zero at the low end of the energy spectrum due to
absorption by the detector window, the detector dead layer, and the gold layer. The intensity of the
continuum is related to both the atomic number of the sample and the beam energy. The continuum
intensity also increases with beam current.

Characteristic X Rays

Inelastic scattering events between the primary beam electrons and inner shell electrons can result
in the ejection of an electron from an atom within the sample and may lead to the formation of a
characteristic x ray. The ejection of the electron leaves the atom in an ionized, excited state and
permits an outer shell electron to move to the inner shell. Because the energy levels of the various
shells are related to the number of charges in the nucleus, the energy of the emitted x ray is
"characteristic" of the element. The beam electron must have an energy that is at least slightly
greater than the binding energy of the shell electron (the critical ionization energy).

Depth of Excitation

Although electrons may penetrate to specific depths within a sample which can be illustrated with a
variety of equations or with Monte Carlo programs, the electrons actually lose energy in steps as
they go to greater depths in the sample. As a result, an electron may soon lose a sufficient amount
of its energy such that it can no longer excite characteristic x rays. Typically, this occurs when its
energy drops below the critical ionization energy of the elements in the sample. Each element
within the sample will have its own critical ionization energy and its own excitation depth. The
ratio of the primary beam energy to the excitation energy of the element is referred to as the
“overvoltage”. Typically, it is thought that the overvoltage should be two or greater for EDS
analysis.

X-Ray Absorption
A consideration of the depth of excitation for low-energy x rays would lead us to believe that the
low energy x rays may be produced throughout nearly the entire depth of penetration of the electron
beam. In actuality, even though this may be true, very low energy x rays are also easily absorbed by
the sample, and relatively few of them may actually escape except for those generated near the
sample surface. The loss of these absorbed x rays must be corrected for in quantitative analysis.
The ratio of absorbed to emitted x rays increases with the accelerating voltage. As a result,
absorption considerations also place an upper limit on our overvoltage: no more than 20 for
qualitative work and ideally less than 10 for quantitative analysis.

X-Ray Fluorescence

Characteristic x rays can be produced by other x rays as well as by high energy electrons. This is
typically referred to as x-ray-induced fluorescence or x-ray fluorescence. For instance, nickel
K-alpha x rays have an energy just above the critical ionization energy of the iron K shell and will
readily fluoresce iron x rays. This leads to an increase in the iron peak in the spectrum and a
decrease in the nickel peak beyond what would be expected given the abundance of these two
elements. The fluorescence correction corrects for this effect by effectively removing some of the
iron x-ray counts and placing them with the nickel x-ray counts.

Detector Efficiency

The efficiency of the EDS detector is controlled by the window type (if any), the gold contact layer
and the silicon dead layer. The super-ultra thin windows (SUTW) will typically permit the
detection of beryllium and boron, but not as efficiently as higher atomic number elements. The
beryllium window is relatively good for K-radiation of elements with atomic number greater than
silicon (Z=14) but drops to what is basically zero for elements less than sodium (Z=11) in atomic
number. Although not much of an issue in the SEM, there can also be inefficient detection for very
high energy x-rays because these x-rays may pass through the entire detector thickness and create
no detectable signal.

X-Ray Artifacts

Peak Broadening. The resolution of the EDX detector is typically specified as the FWHM at
manganese (Z = 25) and is on the order of 140 eV. There is a predictable relationship whereby
peaks of lower energy (lower atomic number) will have a smaller FWHM (full width at the half
maximum height of a peak) than peaks at higher energies. When we calibrate the spectrometer, the
Al Ka and Cu Ka peaks are used, but the resolution at Mn Ka is calculated or interpolated. The Al
peak will typically have a much smaller FWHM than the Cu peak.
Peak Distortion / Asymmetry. Peaks in the spectrum will typically show an asymmetry in which the
peak is usually sharper on the high energy side; this is partly due to the absorption edge of the
element. The tailing on the low-energy side of the peak is due in part to incomplete charge
collection (not all electron-hole pairs are collected; this is thought to be a problem for x rays that
strike the detector near its edges).

Escape Peaks. Occasionally an x ray striking the detector will cause the fluorescence of a silicon x
ray. This results in two x rays, one with the energy of silicon Ka (1.74 keV) and one with the
original x-ray energy minus the silicon energy. If both x rays remain in the detector, the two are
summed and the correct energy is assigned in the multichannel analyzer. If the silicon x ray escapes
from the detector, then what remains is recorded as an escape peak, which has an energy 1.74 keV
less than the original x ray. Only x rays with energies greater than the absorption edge of silicon
(1.84 keV) can cause the fluorescence of silicon, so we can expect to see escape peaks associated
with K radiation for phosphorus and up in atomic number. The size of the escape peak relative to
its parent peak (typically no more than a percent or two) actually diminishes at higher atomic
numbers, because higher energy x rays tend to deposit their energy deeper in the detector, where the
silicon x rays (if generated) have more difficulty escaping from the detector.
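Predicting where escape peaks should fall is a one-line calculation, as in this sketch (line energies
approximate):

SI_KA_KEV, SI_EDGE_KEV = 1.74, 1.84

def escape_peak_kev(parent_kev):
    """Escape-peak position, or None when the parent cannot fluoresce Si."""
    return parent_kev - SI_KA_KEV if parent_kev > SI_EDGE_KEV else None

for name, e in [("Ti Ka", 4.51), ("Fe Ka", 6.40), ("Al Ka", 1.49)]:
    ep = escape_peak_kev(e)
    print(name, "->", "none" if ep is None else f"{ep:.2f} keV")
# Ti Ka -> 2.77 keV, Fe Ka -> 4.66 keV, Al Ka -> none (below the Si edge)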

Absorption Edges. The background of our spectrum will typically show a drop or an edge at the
energy associated with the absorption edge of silicon, and may show another near the absorption
edge of the gold M line. These result from absorption of the x-ray continuum as it passes through
the gold layer and the silicon dead layer.

Silicon Internal Fluorescence Peak. It is possible that the silicon dead layer of our detector may
also be fluoresced by an incoming x ray, and the resultant silicon x ray may then travel into the
detector and produce a very small but recognizable peak. It has been estimated that this peak can
correspond to an apparent silicon content as high as 0.2 weight percent in some samples. This is
difficult to verify or confirm, because having the background correction not quite correctly
specified could easily produce an apparent silicon content this high. Also, given the small size of
such a peak, it could easily be confused with the silicon absorption edge.

Sum Peaks. The sampling interval during which x rays are detected is a few tens of nanoseconds.
It is possible for two x rays of the same energy to enter the detector within this time period and be
counted as a single x ray of twice the energy. Compounds can also give rise to sum peaks that are
the sum of two different elemental energies. Aluminum alloys will commonly have sum peaks
(2.98 keV) that fall at the position of other elements, such as argon or silver. Sum peaks are
typically a problem at higher count rates and when a phase is dominated by a single element. To
determine if a small peak may be a sum peak (i.e. a detection artifact, not representative of the
composition of the sample), you can select sum peaks from the Peak ID control panel and it will
show a marker for the most probable sum peak. To confirm that the peak in question is really a sum
peak, save the peak and use it as an overlay. Then lower the count rate (perhaps to 1/4 of what it
was) and collect a new spectrum. If the major peak is of a comparable size in both spectra and the
signal-to-noise appears similar in both cases, but the peak in question is no longer visible or is
greatly diminished, then the peak in the first spectrum was a sum peak. If the peak is undiminished,
then it represents a real part of the sample.
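A quick arithmetic check like the one sketched below can flag sum-peak candidates before going to
the trouble of recollecting at a lower count rate (the tolerance and line energies are illustrative):

from itertools import combinations_with_replacement

def sum_peak_candidates(major_kev, suspect_kev, tol_kev=0.05):
    """Pairs of major-line energies whose pileup sum lands within tol of the suspect peak."""
    return [(a, b) for a, b in combinations_with_replacement(sorted(major_kev), 2)
            if abs(a + b - suspect_kev) <= tol_kev]

# aluminum alloy: is a small peak at 2.98 keV Ar, Ag, or Al Ka pileup?
print(sum_peak_candidates([1.49], 2.98))   # [(1.49, 1.49)] -> likely Al Ka + Al Ka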

Stray Radiation. Stray x rays are x rays that originate anywhere other than where the primary beam
interacts with the specimen and may be produced by a variety of processes. Most occur as a result
of x rays created by high-energy backscattered electrons striking the pole piece, stage, chamber, or
another part of the sample outside of the imaged area. While it is possible for these x rays to enter
the detector, careful collimation of the detector can eliminate most of this problem. Probably more
serious are backscattered electrons that strike the pole piece or stage and are scattered back to strike
the sample at some distance from the point of interaction with the primary electron beam. The
amount of stray radiation is expected to be greatest when the sample has a rough surface or is near
an edge, where backscattered electrons are able to directly strike other surfaces of the sample and
create x rays from regions that may be well outside of the imaged area. To minimize stray radiation,
the sample should be smooth and located at the working distance that the detector is pointed at. It
also helps to put the backscattered electron detector in place, because this acts as a sink for BSE and
will protect the pole piece from being struck by them.

A Warming Detector. The current generation of EDAX detectors has a temperature sensor which
disables the detector when it runs out of liquid nitrogen. Just prior to the point when the detector is
disabled, you may notice an unusual, asymmetric "low-end noise peak". At this time, liquid
nitrogen should be added to the detector. The system can still be used for qualitative work provided
that very low-energy peaks are not the subject of the investigation. It should take approximately
half an hour for the detector to regain its ideal cooled condition. As mentioned, during the cooling
period it can still be used for qualitative work, but the system should not be calibrated or used for
any work where the very best resolution is required.
EDAX Phoenix Peak Identification

The basic procedure for identifying peaks in the EDS spectrum is not universally agreed upon.
Some users start with the biggest peak, identify it and all associated peaks, and then progress to
the smaller peaks in sequence until all are identified. When there are L- and M-series peaks
present, it may actually be better to identify the highest energy alpha peak first, because this peak
may be associated with several other higher energy peaks as well as one or more of the lower
energy peaks. Then the user can find the next lowest energy peak, and so on, until all peaks have
been accounted for. Another significant aid for peak ID is to click on the button for automatic
peak ID. The results of the automatic peak ID should be inspected and verified; it very often will
provide the correct answer, but the user should be aware that its answer might be only part of the
solution and that more investigation may be required. In all cases, however, it is essential to look
for escape peaks associated with the peaks of greatest height, and for sum peaks for these same
peaks when a high count rate has been used for spectrum acquisition.

Typically, we examine the part of the spectrum that ranges in energy between 0 and 10 keV, and
we quickly become aware of characteristic features of K-, L- and M-shell x-ray peaks that aid us
in our peak identification. In this energy range, we can see K-shell peaks that correspond to
atomic numbers 4 through 32, L-shell peaks that range from roughly atomic numbers 22 to 79,
and M-shell x rays ranging from 56 to the highest known and observable atomic numbers. For
many elements it is possible to observe peaks from more than one shell in this energy range (it is
often desirable to inspect the region between 10 and 20 keV as well for confirmation of lower
energy peaks). The peak of one element is often better resolved from that of an adjacent atomic
number by using higher energy peaks (see spectrum of Ni and Cu below).

Characteristics of the K - Series Peaks

K-series peaks are typically fairly simple and consist of a Ka and a Kb peak. The Kb peak
occurs at a higher energy and starts to become resolvable from the Ka for atomic numbers
greater than sulfur (Z = 16). The Kb peak is usually 1/8 to 1/6 the size of the Ka peak as can be
seen in the spectrum below for titanium.

Characteristics of the L - Series Peaks

The L-series peaks are much more complex than the K-series peak(s) and may consist of as many
as 6 peaks: an La, Lb1, Lb2, Lg1, Lg2 and Ll. Aside from the Ll, the peaks in this list occur at
progressively higher energies and with progressively smaller intensities; the Ll is a very small
peak that occurs before (to the left of, or at a lower energy than) the La peak (see spectrum for
tin on the following page). For elements below silver (Z=47) in atomic number, the individual
peaks of the L series are not resolvable. One of the common mistakes in peak identification is
to be unaware of the Ll peak and to hypothesize the existence of a minor or trace amount of an
element that does not exist in the sample.
Characteristics of the M - Series Peaks

Most M-series peaks will show only two resolvable peaks: the Ma peak, which also includes the
unresolved Mb, and the Mz peak. Just as with the Ll line in the L series, the Mz occurs at a lower
energy than the Ma peak, and there have been many instances of an analyst identifying another
element because they did not recognize the Mz peak, or were using a peak identification method
that did not include the Mz peaks. For instance (see following page), the Mz for osmium is easily
mistaken for a small amount of Al or Br. The Mz for gold is a good fit for the hafnium Ma peak,
and so on. M-series peaks tend to be markedly asymmetric due to the mixing together of the Ma
and Mb (the Mb being a smaller peak at a higher energy). To perform a manual peak ID for an
M-series peak, it is therefore often necessary to click the mouse cursor slightly to the left of the
peak "centroid". When the peak for the Ma is selected, the lines for both the Ma and Mb are
shown. There may also be an Mg peak, which may show up as a low, high-energy shoulder on
the large M peak, and a very small "M2N4" peak (see spectrum on the following page).

Auto Peak ID

Auto peak ID is fast, easy and reasonably accurate for K-series peaks that do not have
serious overlaps. Such an overlap would consist of a Ka from a trace element that happens to be
near a major element’s Kb (e.g. a trace of Mn with Cr, or V with Ti). When the overlap consists
of M-, L- and K-series peaks, the auto peak ID routine is not well suited to identify all the
elements correctly. In this case, the use of the peak deconvolution software is recommended (see
following discussion).
The auto peak ID routine makes use of an “omit” list, which is a list of elements that the auto
peak ID will not find (see figure below). This list can be reached from the Peak ID control panel
by clicking on “EPIC” (EDAX Peak Identification Chart), then “Auto ID”. It is possible to remove
elements from this list or to add elements. Manual ID will still list an element that is in this omit
list.

Deconvolution and Peak ID

Deconvolution is used in the quantification routine to estimate the size and position of all peaks
associated with each assigned element. The fit of all the peaks calculated is shown as a lined
spectrum on top of the background, so that any serious gaps or misfits can be observed by the
user. The deconvolved spectrum can be observed by clicking on the “Quant” button in the button
bar and “OK” to remove the quantitative results from the display. It is a good idea not to have
the “results” mode set to “Multiple” for this action, since it may require several iterations and
the intermediate results are seldom of interest.

If the spectrum contains a cluster of counts that the deconvolved spectrum does not account for,
the manual peak ID can be used to find a peak that accounts for them. In the case below, there are
some counts that are not accounted for by the deconvolution in the area of the Cr Kb. This is
common in a stainless steel and indicates the presence of the Mn Ka peak. It is also possible to
subtract the deconvolution from the spectrum (click on “Proc” and “Subtract deconvolution”).
This will make it clearer where a peak’s centroid should be. To undo the subtraction, click
on “Edit” and “Undo”.

The peak-fitting routine used is most accurate for the K-series peaks. The complexity of the L-
series peaks does not permit as accurate a fit, but in most cases the fit will be more than
adequate to identify the peaks.
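The vendor’s fitting algorithm is proprietary, but the idea behind peak deconvolution can be
illustrated with a least-squares fit of Gaussian peaks plus a background. The sketch below, in
Python with scipy, fits the Cr Kb / Mn Ka overlap discussed above on synthetic data; the flat
background is a simplification (a real routine fits the bremsstrahlung) and the peak widths are
only representative:

    import numpy as np
    from scipy.optimize import curve_fit

    def gaussian(e, height, center, sigma):
        return height * np.exp(-0.5 * ((e - center) / sigma) ** 2)

    def model(e, h1, c1, s1, h2, c2, s2, bg):
        # Two overlapped peaks on a flat background; real spectra need a
        # fitted bremsstrahlung background and one Gaussian per x-ray line.
        return gaussian(e, h1, c1, s1) + gaussian(e, h2, c2, s2) + bg

    # Synthetic example: Cr Kb (5.947 keV) overlapped with Mn Ka (5.895 keV).
    e = np.linspace(5.5, 6.4, 200)
    counts = model(e, 300, 5.947, 0.06, 500, 5.895, 0.06, 50)
    counts += np.random.default_rng(0).normal(0, 5, e.size)

    p0 = [250, 5.95, 0.06, 450, 5.90, 0.06, 40]        # initial guesses
    popt, _ = curve_fit(model, e, counts, p0=p0)
    residual = counts - model(e, *popt)                # cf. "Subtract deconvolution"
    print(popt[1], popt[4])                            # fitted peak centroids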
Quantitative X-Ray Analysis using ZAF

Spectra collected with the DX system can be analyzed to provide weight and atomic percent data.
Standards can be used for this analysis or the sample may be analyzed without standards (i.e.
“standardless”). In either case, the procedures are very similar and will be discussed below.

Figure 1. Spectrum of an aluminum-chromium-nickel alloy collected at 15 kV.

When users attempt to interpret an EDS spectrum, there is often a tendency to infer weight
percent from a visual estimate of peak height, and this usually leads to erroneous interpretations.
For instance, in the spectrum in Fig. 1, the Cr peak is about 1½ times larger than the Ni peak,
which in turn is about 2½ times larger than the Al peak. In fact, the correct analysis has the Ni at
about 1½ times the Cr, and the Ni is about 20 times more abundant by weight percent than the
Al. Even experienced microanalysts with an extensive knowledge of overvoltage and absorption
considerations would be reluctant to infer relative weight percents from a spectrum such as this.

The correct interpretation of a spectrum would actually be to compare an element’s peak height to
the height of the pure element collected under identical conditions. Although this is commonly not
an easy thing to do, the quantitative analysis actually does this comparison in its calculation (this is
shown graphically in Fig. 2 below).
Fig. 2. Same spectrum as in Fig. 1 (shown in solid black) compared to the outline of the pure
element intensities for each element.

The ratio of the element’s peak height in a sample to the pure element collected under the same
conditions is actually used in the quantitative calculation as the k-ratio. All other steps in the
quantitative analysis with and without standards are identical and the only difference is whether the
pure elements are actually measured (with standards) or are calculated (standardless). Before the k-
ratios are calculated, the peaks are identified, a background is fit, a peak-fitting routine is used and
overlapped peaks are deconvoluted, and the net count intensities are measured. After the k-ratios
are calculated, the ZAF corrections are applied, where: Z is an atomic number correction which
takes into account differences in backscattered electron yield between the pure element and the
sample (high atomic number pure elements will produce fewer x rays because some of the beam
electrons leave the sample before losing all of their energy); A is the absorption correction which
compensates for x rays generated in the sample but which are unable to escape when they are
absorbed within the sample (low-energy x rays tend to be heavily absorbed); and F is the
fluorescence correction which will correct for the generation of x rays by other x rays of higher
energy. In order to calculate the ZAF factors, the composition must be known, but we need to know
the ZAF factors to calculate the composition. This apparent contradiction is resolved by calculating
ZAF and the composition through a series of iterations until the last iteration makes no effective
difference in the result.
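The iteration just described can be expressed compactly in code. The sketch below is schematic
only: the zaf() helper is hypothetical and stands in for a full matrix-correction model (which must
return the Z, A and F factors for each element given an assumed composition, kV and geometry),
and is not implemented here:

    # Schematic ZAF iteration loop; zaf(elem, wt) is a hypothetical helper
    # standing in for a real matrix-correction model.
    def quantify(k_ratios, zaf, max_iter=20, tol=1e-6):
        wt = {e: 100.0 * k for e, k in k_ratios.items()}   # first guess from k-ratios
        for _ in range(max_iter):
            new_wt = {}
            for elem, k in k_ratios.items():
                Z, A, F = zaf(elem, wt)                    # factors for current estimate
                new_wt[elem] = 100.0 * k / (Z * A * F)
            if all(abs(new_wt[e] - wt[e]) < tol for e in wt):
                break                                      # no effective change: converged
            wt = new_wt
        return wt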

Table 1. Quantitative results for the spectra in Figs. 1 and 2.

Element    Wt %      At %     K-Ratio    Z        A        F
AlK        3.4       6.84     0.0159     1.0897   0.4277   1.0003
CrK        37.96     39.6     0.3887     0.9866   0.9911   1.047
NiK        57.97     53.56    0.5688     1.0014   0.9798   1
Total      99.343    100

The actual weight % is calculated by multiplying the k-ratio by 100% and dividing by the product
of the Z, A and F factors (Table 1). The standardless results will always be normalized to 100%,
while it is possible to provide a non-normalized result when standards are used.
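This arithmetic can be checked directly against Table 1. The short Python sketch below
reproduces the weight percents from the tabulated k-ratios and ZAF factors:

    # Reproducing the weight percents in Table 1: wt% = 100 * k / (Z * A * F).
    table = {  # element: (k-ratio, Z, A, F) as listed in Table 1
        "AlK": (0.0159, 1.0897, 0.4277, 1.0003),
        "CrK": (0.3887, 0.9866, 0.9911, 1.0470),
        "NiK": (0.5688, 1.0014, 0.9798, 1.0000),
    }
    for elem, (k, Z, A, F) in table.items():
        print(elem, round(100 * k / (Z * A * F), 2))
    # -> AlK 3.41, CrK 37.97, NiK 57.97 (matching Table 1 within rounding)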

Standards can be pure element standards (one standard for each element) or a compound standard
consisting of multiple elements. The pure element standard provides the pure element intensity
directly. The spectrum from a compound standard has known peak intensities and a known
composition. The ZAF coefficients can be calculated from the known composition, and the
ZAF factors are applied in reverse (RZAF) to calculate the pure element intensities.

The conditions used to collect the standard or standards and the unknown(s) must be constant. The
detector-sample geometry must be the same, the accelerating voltage used must be the same, and
the beam current should ideally be the same. Maintaining the first two conditions is relatively
simple. Slight variations of the beam current of even a few percent will cause inaccuracy in the
final analysis when pure element standards are used. If a single compound standard is used and
the results are normalized, the results will usually still be of good quality. Standardless analyses
are not typically affected by an unstable beam. When it is desirable or necessary to obtain
non-normalized results, the beam current should be monitored with a specimen current meter
and a Faraday cup on the sample stage or in the microscope column.
A Faraday cup can be constructed for the stage by drilling a hole into a carbon planchet or a sample
stage and placing an aperture on top of the cavity. The depth of the cavity should be at least 6X the
diameter of the aperture. If the beam is placed within the opening, then the entire primary beam is
captured and conducted through the meter to ground and there will be no secondary or
backscattered electrons leaving the sample.
Introduction to Digital Imaging
Digital images are defined by the number of pixels (picture elements) in a line across the image,
by the number of lines, and by the number of colors or gray levels in the image. The number of
gray levels is usually expressed in bits (e.g. 8 or 12 bits), the exponent applied to the number 2.
An 8 bit image has 256 levels (0 - 255), and a 12 bit image has 4096 (0 - 4095). At each pixel a
number is stored to represent its color or gray level. In most imaging programs on the PC, the ‘0’
value represents black, the highest value represents white and the intermediate values are shades
of gray.

As illustrated in the above figure, our work with images can be thought of as being image
processing if we start with an image and if the result is an image. The image is processed or
enhanced; perhaps sharpened, or the contrast is modified. Image analysis would involve any
operation in which we start with an image and the results of the operation are numbers or data.

A portion of a digital image is shown below. This is an 8 bit grayscale image which consists of 32
pixels per line and 32 lines of data. As with all digital images, the image can be zoomed to the point
where the pixels become obvious. The four images at the below left show the same number of
pixels shown with a different zoom factor.

It is possible to view the actual numerical data for the image. A profile of pixel values is shown
below for a horizontal line across the middle of the eye.

52 36 26 25 71 114 126 128 149 176 182 90 27 45 51 62 50 68 48 40 39 29 99 126 68 36 35 30 39 45 57 77

The brightest part of the eye has a gray level of 182 and the darkest part of the eye has a gray level
of 26. An image processing operation could be used to stretch, or increase, the contrast. If this were
done, the range of gray levels could be expanded to as much as 0 to 255, as shown at the bottom of
the preceding page at the right.
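A contrast stretch of this kind is a simple linear remapping of the gray levels. The sketch below,
using Python and numpy, maps the 26-182 range of the eye onto the full 0-255 range; it is a
minimal illustration rather than the algorithm used by any particular imaging package:

    import numpy as np

    def stretch(img, lo=26, hi=182):
        """Linearly map gray levels lo..hi onto the full 0..255 range."""
        out = (img.astype(float) - lo) * 255.0 / (hi - lo)
        return np.clip(out, 0, 255).astype(np.uint8)

    row = np.array([52, 36, 26, 25, 71, 114, 126, 128, 149, 176, 182])
    print(stretch(row))   # 26 maps to 0 and 182 maps to 255 after the stretch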

The previous image is an 8 bit grayscale image. Other types of images can be 1 bit (sometimes
called a bitmap), which consists of black or white pixels; 256-color palette images (8 bits per
pixel, where each value indexes an entry in a variable palette of colors); and 24 bit color images
(sometimes called RGB or true-color images). The 24 bit color images contain 8 bits (256 levels)
each of red, green and blue.

Image files tend to be very large files. An image that is 1024x1024x8 bits is 1 megabyte in size, and
512x512x8 bits would be 256 kilobytes. If an image is stored as an RGB color image, it will take 3
times as much disk space. The EDAX digital image file sizes are 3.2 megabytes (2048x1600x8
bits), 800 kilobytes (1024x800x8 bits), 200 kilobytes (512x400x8 bits) and decrease by a factor of 4
for each smaller image size.
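These figures follow directly from the pixel dimensions and bit depth, as the short sketch below
illustrates (raw uncompressed data only, ignoring any file header):

    # Gray levels and raw file sizes follow from bit depth and pixel dimensions.
    def levels(bits):
        return 2 ** bits                       # 8 bits -> 256, 12 bits -> 4096

    def raw_bytes(width, height, bits_per_pixel=8):
        return width * height * bits_per_pixel // 8

    print(raw_bytes(2048, 1600))   # 3,276,800 bytes, i.e. ~3.2 megabytes
    print(raw_bytes(1024, 800))    # 819,200 bytes, ~800 kilobytes
    print(raw_bytes(512, 400))     # 204,800 bytes, ~200 kilobytes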

EDAX Phoenix Procedures – PhotoImpact

Introduction

PhotoImpact is an image processing program that provides many additional features not
available in the Phoenix imaging software. This program is comparable in many ways to Adobe
Photoshop and has a manual, training software, as well as on-line help to describe many detailed
procedures that can be implemented using this software. In x-ray microanalysis and electron
microscopy, there is really only a small subset of its capabilities that most of us will ever need. It
is intended that the following few pages will describe those procedures to enable the user to use
this program without having to become an expert on image enhancement software.

PhotoImpact actually consists of several programs. Four of the programs that will be mentioned
at least briefly here are:
(1) PhotoImpact, a basic image enhancement program that is most similar to Photoshop;
(2) PhotoImpact Album, a program that allows the user to create thumbnail images, to print
multiple images on a single page and to create a slide show of images;
(3) PhotoImpact Capture, a fairly simple screen capture program that will not
be described in detail here;
(4) PhotoImpact Viewer, a simple image viewing program which can be set up in the Explorer to
be the program opened when you double click on an image file –it is the program that the
Album program opens when you double click on a thumbnail image.

Most of the PhotoImpact programs will allow you to open multiple files. Once an image is
opened it is possible to zoom it by pressing the “+” or “-“ keys. Also, most operations can be
undone (Ctrl-Z) and re-done (Ctrl-Y).

Procedures

In all procedures below the text shown in brackets represents a mouse click (e.g. “[OK]” would
indicate a click on the “OK” button). Tabs are used to show the hierarchy of commands (i.e. a
menu bar selection is shown to the left, with one additional tab to indicate a click from the pull-
down menu, and an additional tab to show a click from the resulting dialog box, etc.).
Most steps in image enhancement can be undone if it is decided that the result was not optimal.
This is accomplished by clicking on “Edit” and “Undo”. The shortcut to undo an operation is
Ctrl-Z, and Ctrl-Y will re-do the same operation. There are multiple levels of undo for an image,
and the number of undo levels can be specified by clicking on File and Preferences. The more
undo levels you specify, the more memory will be used. It is a good idea to save a modified
image under a new name to protect yourself from data loss during lengthy image processing
sessions.

To Open an Image

Procedure
-[File]
-[Open]

Select the correct drive and directory according to normal Windows conventions. Under File
types, it is possible to select “All Files”, “BMP”, “TIFF”, or some 25 other image file formats.
You can double click on a single file to view it in PhotoImpact, or click on the first file of
interest, Ctrl-click on additional images, and then click on “Open” to activate multiple images.
Similarly, clicking on the first file from a set of images, followed by a Shift-click on the last
image from the set, will allow you to “Open” all images from that set.

Changing Brightness & Contrast

Setup. An image should be open and active in PhotoImpact. To open an image, see procedure
described above.

Procedure
-[Format]
-[Brightness & Contrast]

Click on the thumbnail with the best brightness and contrast. Repeat if necessary, then
[OK]. An alternative is shown below:

-[Format]
-[Tone Map]

Click on the “Highlight Midtone Shadow” tab. Optimize the image with the sliders, then [OK].

To Change Image Mode

Images collected in the Phoenix imaging software will be “palette color” images (256 colors),
whether they are x-ray maps or electron images. In order to print an image well or to perform an
image processing operation, it is often necessary to change an image/map to either a grayscale
image or an RGB color image. With the image that you want to change open and active in
PhotoImpact, [Format], [Data Type], then click on the desired file type (typically “Grayscale” or
“True Color”). Note that the current image type will be grayed out and not selectable.

To Create a 3D Anaglyph Image from a Stereo Pair

Setup The 3D anaglyph image contains two images, one shown in red and the other in either
green or blue-green (cyan). When viewed with the corresponding color glasses, each eye sees a
different image, where the differences between the images correspond to a difference in tilt or
perspective, and our brain reconstructs a 3-dimensional image just as it normally does.

It is assumed that you have a stereo pair on disk and that it has been recalled into PhotoImpact. The
two images should have the same pixel dimensions (e.g. 512x400, or 1024x800) and they should
differ only by their tilt. Also, the tilt axis should be vertical on the monitor with the SE detector
on the left (on many SEMs this will require a -90 degree scan rotate). If the conditions for the
two images are different than just described, you may need to depart from the procedure
described below (i.e. the color designation may need to be changed, and/or the merged image
may need to be rotated). In PhotoImpact the 3D image is created using a CMYK (cyan,
magenta, yellow, black) merge function. One image will be assigned to cyan, the other to both
magenta and yellow (this makes red) and a white image with no video information will need to
be created and assigned to the black channel.

It will be helpful when saving the two original images to denote the tilt of each image, or to
append a letter for the color of each image (e.g. an “r” for the red image and a “c” for the cyan
image).

Procedure

The two images should be converted to grayscale images; [Format], [Data Type] and
[Grayscale]. You may need to keep a note as to which image is the lower tilt image because the
conversion process will create a new image called “Untitled X” where “X” is a sequence number.
When both images have been converted to grayscale, it will be necessary to create a white
image with the same pixel dimensions (512x400 or 1024x800, for instance) as the two images
that differ in tilt. To create a new image that is completely white:

-[File]
-[New]

A dialog box will appear, and you should ensure that the data type is grayscale (8-bit), the image
size matches the resolution of the two images (512x400 or 1024x800, for instance), then [OK].
If the image is a white image, you are ready to proceed. If it is not white, that is most likely
because the background color has been set to something different. In that case, click on the
background box under the grayscale palette at the right side of the screen, then enter 255 for
each of red, green and blue.

When the two images have been converted to grayscale and the white image is available:

-[Format]
-[Data Type]
-[Combine from CMYK]

A dialog box appears. In PhotoImpact (different from several other image enhancement
programs) you should assign the lower tilt image to cyan, the higher tilt image to both magenta
and yellow, and the white image to the black channel, then [OK].

Check the resultant image with your glasses (red for the left eye, cyan for the right eye). If the
image does not work, see note at the end of the second paragraph of this section (i.e. the color
designation may need to be changed, and/or the merged image may need to be rotated). To
determine if the image needs to be rotated, turn your head sideways to see if the 3-dimensionality
improves. If the image has what should be down as up (and vice-versa), then switching the
glasses around will help to clarify this situation; this indicates that the original color assignments
of the two images were incorrect.
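For readers working outside PhotoImpact, the same merge can be expressed in RGB terms: one
image drives the red channel and the other drives both green and blue (which together read as
cyan). The sketch below uses the Python Pillow library; the filenames are hypothetical, and, as
noted above, the two assignments may need to be swapped if the depth appears inverted through
the glasses:

    from PIL import Image

    # Hypothetical filenames; both images must be grayscale and the same size.
    red_img = Image.open("tilt_a.tif").convert("L")
    cyan_img = Image.open("tilt_b.tif").convert("L")

    # One image drives the red channel, the other drives green and blue
    # (together these read as cyan). Swap the assignments if the depth
    # appears inverted when viewed through the glasses.
    anaglyph = Image.merge("RGB", (red_img, cyan_img, cyan_img))
    anaglyph.save("anaglyph_rgb.tif")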

The anaglyph image can be saved as an RGB or true color TIFF or BMP file. It can also be
saved as a JPG file. This is a compressed type of file that will take up less room on disk but can
show some degradation in image quality. For stereo color images, saving a JPG file with a
“Quality” setting of 75 to 80 (found from the “Save As…” dialog box under “Options…” if JPG
is the file type) will provide an image without significant degradation and a file that is typically
1/10 the size of the original file. A series of these 3D images can be put into an album file (see a
following section) and played as a slide show.

To Overlay Text on the Image

Procedure The image to be annotated should be displayed on the screen. The text tool should
be activated by clicking on it (the “T” button next to the bottom of the left-hand tool bar). The
bar above the image will show the font, its size, style, color, etc. By clicking on the active areas,
it is possible to change the selection. To place text on the image:
-With the text tool active, click on the image area where you would like the text
to appear.
-Type the text in the dialog box.
-You may click on “Update” to see the text on the image or [OK].
-If you would like to re-position the text, place the cursor on the text and drag
the text to another location.
-The font, size, color, etc. can be changed while the text is still active.
-To deactivate the text, you may push the space bar or [Edit], [Object], and
[Merge].

You are able to type additional text entries by repeating the steps above

To Overlay a Spectrum on the Image

Procedure The image to be annotated should be displayed on the screen. The spectrum to be
overlain should have been saved as a BMP file from the ZAF software and also recalled into
PhotoImpact. Because the spectrum is a solid color (assuming that the spectrum “Page Setup”
dialog box was not set to “Outline” mode) that is different from the spectrum background, it will
be possible to cut and paste it into another image using what is commonly called the ‘magic
wand’ tool. This tool is selected by first clicking on the third button from the top –it looks a little
like a magic wand with an irregular dashed line around it. The bar above the image area will
show some options for the magic wand tool and the most important may be the “similarity”
number. It can probably be set to any small number (<10) for this application.

Place the cursor for the magic wand tool on top of the spectrum area of the BMP file. A marquee
should be seen which encloses the spectrum and shows that the spectrum area is active. Next,
the spectrum should be cut or copied to the clipboard and pasted into the image:
-[Edit]
-[Copy]

Activate the image that you want to paste the spectrum into (i.e. click on it, or click on
“Window” and select the image from the list at the bottom of the pull-down menu). Also, if you
are going to paste the spectrum as a color image, then you should convert the grayscale image to
a true color image first: [Format], [Data Type], [True Color]. To paste the spectrum:
-[Edit]
-[Paste]

Click and drag on the pasted object to position it where you would like it. When you are
satisfied with its position, deactivate the object by pushing the space bar or [Edit],
[Object], and [Merge].

At this point you might want to use the text feature in PhotoImpact to add text to label the peaks.
It would have been possible to copy the peak labels from the original spectrum file by using the
magic wand tool to click on each letter, but this is tedious and time consuming. Also, when the
spectrum is shown on the image, it has to stand out from the detail in the image. The file can be
saved with the overlay.

To Create Thumbnails of the Images and other Album Procedures

Procedure
-[File]
-[New]

A dialog box will appear that gives a choice of templates, but it will probably be best to use the
“General Purpose” template. Type a title (e.g. “Maps”), then [OK]. From the “Insert” dialog
box select the correct directory, then click, Ctrl-click, Shift-click or click and drag on the image
files of interest, then [Insert]. You can choose an additional directory and repeat the insertion
procedure. To finish this session, [Close].

You can add additional thumbnails to an existing album file (or group of thumbnail images) by
[Thumbnail] and [Insert]. This will bring up the same dialog box as in the previous paragraph.

Other Album Procedures. If you double click on any thumbnail image it will open up the
Viewer program and show you the image. The image can be zoomed by hitting the “+” or “-“
keys. The slide show program can be implemented by first selecting the thumbnails of interest
(or Ctrl-A to select all), then [View], [Slide Show], then select the appropriate options and [Play].
The slide show can repeat continuously, or just play once and terminate, or it can be terminated
with the escape key.
EDAX Particle/Phase for particle analysis

The EDAX particle/phase program is designed for the automatic chemical analysis and
measurement (in terms of size, shape and orientation parameters) of particles. The program
can be used for a single field or for an automated run of many fields. Using a digital image
collected from the SEM, the particles of interest are thresholded and selected by their video level.
Particles are selected for analysis by their range of video levels as well as by their size range. A
spectrum is collected for each particle under conditions set by the user. An automatic procedure
gives size, shape and other parameters based on the number of adjoining pixels that fall within
the selected video threshold and size range. The screen layout for particle/phase is similar to
other EDAX software: the display consists of a menu bar, a button bar, a control panel at the right
side, an image area (upper left) and two areas of the screen that can be alternated between
different particle information, including the spectrum, histogram and search summary. The screen
capture shown at left shows the image with selected particles, a spectrum and search summary
results.
Many of the commands can be executed by clicking the mouse button on an icon in the button
bar or by making a selection from a pull-down menu.

Prerequisites for particle measurement:

An image should be obtained on the SEM at the desired magnification and with the detector that
provides the optimum contrast to select the features or particles of interest from the background
image area. The SEM should be placed in external control if necessary and the particle/phase
package opened.

Image Setup: Access the image control panel by clicking on the “IMG” button or by choosing “Set
Up” and “Image Par”. If necessary, enter the correct kV and magnification in the appropriate box by
double clicking in the box, typing the correct number and pressing Enter. The appropriate matrix
should be selected (512x400, 1024x800, etc.) as well as the number of reads (1, 2, 4, 8, etc.).
Capture the image by clicking on the e- button or on “Collect” and “Electron Image”. If the contrast
is not optimum, adjust the image on the SEM and then re-collect the image.

Image Thresholding: The image histogram, as seen below, can be selected by clicking on “View”
and “Histogram” or by the Histogram button on the tool bar. Select the particles to be analyzed
using the two blue cursors. The left-most cursor should be placed (by clicking and dragging) on
that part of the histogram that provides the darkest representation of the features of interest. The
right-most blue cursor should be placed on the part of the histogram that represents the brightest
part of the features of interest. Click on save when the cursors are in the desired position. The
image should now be highlighted in green to show the features that have been selected. To set up a
second phase click the up arrow under “phase”. The blue cursors will now correspond to a second
phase that will be highlighted yellow. Up to 8 phases can be defined by repeating this process.
(Click on save after each phase is set.)
The next step is to examine the size of the particles with respect to the micron marker shown at
the lower right of the image. Reasonable “Min” and “Max” sizes should be entered in the boxes
below the histogram, followed by a click on the “Save” button. If the sample to be analyzed
contains particles that are touching or close together, use the threshold erosion option located on
the “Analysis Set Up” panel to better define grain boundaries.
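The selection logic, a video-level window followed by a size filter on groups of adjoining pixels,
can be sketched with Python using numpy and scipy. This illustrates the principle rather than the
EDAX implementation, and sizes are expressed here in pixels rather than microns:

    import numpy as np
    from scipy import ndimage

    def select_particles(img, lo, hi, min_px, max_px):
        """Select features by gray-level window, then filter by pixel count."""
        mask = (img >= lo) & (img <= hi)           # video-level threshold
        labels, n = ndimage.label(mask)            # group adjoining pixels
        sizes = ndimage.sum(mask, labels, range(1, n + 1))
        keep = [i + 1 for i, s in enumerate(sizes) if min_px <= s <= max_px]
        return np.isin(labels, keep), len(keep)

    # Example call with illustrative threshold and size values:
    # selected, count = select_particles(img, lo=120, hi=200, min_px=10, max_px=5000)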

Particle Search: To obtain the particle data, click on “Analyze” and “Particle Search” or use the
particle button. The image area will again show in green those particles that are within the video
level threshold and also within the selected size range. If the smaller particles are not selected,
the minimum size should be lowered, followed by a “Save” and a new particle search. Similarly,
if larger particles are not selected, the maximum size should be increased, followed by a “Save”
and a new particle search. Also, by default the particles touching the border of the image will not
be counted. If it is desirable to include these particles for whatever reason, this can be done by
clicking on “Setup”, “Analysis Par”, and checking the box
next to “Include border particles”. To the right of the image is a data summary box that gives the
number of particles, their range and the area fraction (%) of the sample covered by the particles.
The search summary can be used to narrow the search parameters by any of the features analyzed in
the summary. The search shown in the first screen capture illustrates a search by diameter for the
entire range of particle diameters selected with the histogram. Another format for viewing search
results is to use “View” and “particle details” in the pull down menu. This view will show the
particle, its ID number and the results, as seen on the right. This box will be located in the upper
right hand side of the screen where the search summary had been. A spreadsheet with the particle
data is provided at the bottom of the display (located where the spectrum is shown above). Clicking
on a particle of interest will highlight the entry in the spreadsheet associated with the particle, and
clicking on a line in the spreadsheet will cause the corresponding particle to be highlighted in red.

Chemical Analysis: The spectrum collection window looks similar to the ZAF window for spectrum
collection, with a stopwatch and paint roller button for collecting and clearing the spectrum. In spot
mode, collect a spectrum at several representative points. When a representative list of all elements
present has been generated and the peaks have been identified, use the auto ROI feature to select
your regions of interest. A spectrum will be collected at each particle with the parameters defined
in the Analysis Setup Panel, which can be accessed by clicking on “Set Up” and “Analysis Par” or
by using the “Anls” button.
Single Field: To begin the single field analysis, click on the analyze button or under “Analyze”.
There will be a prompt asking for an output directory where the data will be saved. While the
analysis is in progress, there will be a zoom of each particle in the upper right hand corner and the
spectrum will collect below. The data bar at the bottom will show the current activity (CPS,
dead time and set up). When the field is complete, the spreadsheet will show the particle
information as well as the chemical analysis.

Multiple Field: For multiple field runs, enter stage coordinates for the different locations under
“Set Up” and “Stage Locations”. Next, select “Set Up” from the “Auto” menu. A prompt will
follow to select the directory for output data. Click OK to close the dialog box. To begin the
analysis, select “Start” from the “Auto” menu. Use “Stop” under “Auto” to terminate the analysis,
and use the pause button to momentarily pause and then resume the data collection. During the
analysis, the screen will be similar to that of a single field run. In the bottom right hand corner
there will be a message area (in blue) that gives the status of the analysis (moving the stage,
updating image, etc.).

Data Display: To view the results, click on a particle in the image and the spectrum will update to
show the corresponding results, or the appropriate row will be highlighted in the spreadsheet,
depending upon the screen view. Use the Screen View buttons to move from one view to another.
In spectrum view, the label above the spectrum refers to the particle and field number for multiple
field runs. The search summary or particle details will show in the upper right hand corner; again,
move back and forth between these views under “View”. In particle detail view, use the arrows in
the upper right hand corner to scroll through each particle in ascending or descending order. The
image and spectrum or spreadsheet will update as each particle is selected.

Data Archiving: During the set up of a particle analysis, there will be a prompt to set up a directory
into which the information will be stored. The results will be saved as a search file that contains
all of the data collected. When the search file is opened, the image, spectrum and particle
parameters will be displayed.

Printing the Results: The image can be printed directly from the particle/phase application in
different combinations of the screen display. The print function under “File” will print the image,
collection parameters and the spectrum when the spectrum is on the screen. When the table is in
view, it will print the image and collection parameters. To print the table, right-click on the
title bar of the spreadsheet and choose “Print Table”. In this menu it is also possible to select
particular columns to be displayed and printed: click on “Select Columns” in the menu and all of
the column headings will appear as selected in the list. Clicking on a particular heading will
deselect that column and it will not be displayed or printed. Screen capture will give you the
option of combining the different data display views; the corresponding screen view will be
printed for the sections highlighted.
EDAX Applications Note
Spectrum Utilities

Introduction

Very few customers may be aware of an extremely handy program that is part of your EDAX
system. The Spectrum Utilities program can typically be found on the C drive, in the Utilities
folder of the EDAX programs. Its executable file is SpecUtil.exe.

SpecUtil was created to allow the user to print more than one spectrum on a page. In fact,
SpecUtil is formatted to fit 10 spectra on a single 8.5" x 11" page. This feature is very helpful
when a large volume of spectra is being compiled for a report or for archiving purposes.
Programs that might benefit from this capability are particle analysis, GSR (gun shot residue),
and multi-point analysis packages.

Procedures for Printing Multiple Spectra

1. Select File:Open from the menu bar and locate the file folder that contains the spectra you
would like to print.
2. Select any one of the spectra from the folder and click on the OPEN button. All the spectrum
files in that folder will appear listed in the left column of the SpecUtil window.
**Note: This means all spectra meant to be printed on the same page must be located in the
same folder.
3. Hold down the control button and select the spectra to be printed.
4. Click on SPC>>Printer in the menu bar.
**Tip: The arrow buttons will manipulate the view of the spectrum.

Previewing Spectra

To preview a spectrum, double click on its name in the list.

For viewing multiple spectra before printing, begin with steps 1-3 of the printing procedures,
then do as follows:
4. Turn off the printer option by clicking on the printer icon (a red X should appear over the
icon).
5. Select a viewing speed by adjusting the scroll bar.
6. Click on SPC>>Printer to begin the preview.

Changing the Header

The page header can be personalized by going to Help: Intro and entering a new label.

Displaying Particle Images with Spectra

When a CSV data file type is selected from File:Open, Spectrum Utility has the added ability to
read in Particle & Phase Analysis datasets. Selecting a CSV dataset will display all the spectra
from a single or multi-field run.

Report Writing with Microsoft Office
It is often desirable or a requirement to create reports that integrate text interpretation with
data (spectra, quantitative results, images, etc.). MS Word and MS Excel are readily available
Office programs that can be used to prepare such a report.

The spectral data in any EDAX application is typically stored with the file extension .SPC.
These files can be stored from, and recalled to, EDAX applications only. When the goal is to
store a spectrum that can be inserted into a Word report, the spectrum should be saved as a
BMP or TIF file.

** Note: When saving the spectrum as an image, it will be saved as it appears in the spectral
window of the ZAF program. Therefore, if there is an area of interest, the user should click and
drag that area into view before saving. Figures 1-3 demonstrate this feature.

Figure 1. Entire spectrum saved as image file.
Figure 2. Low end of spectrum saved as image.
Figure 3. Peaks of interest saved as image file.

The spectrum, as an image file, can then be inserted into the Word document easily. Drawing a
“text box” in MS Word and inserting the picture into the box will determine its size and location.
The three figures to the left were inserted this way, as was this text on the right. The black
outlines of the text boxes have been left visible to demonstrate this technique. “Text boxes” can
be added directly to the graphic, and the text box color and border can be modified by double
clicking within them.

 To insert text into these boxes, simply click with the cursor inside of the box and start
writing.

 To insert an image in a text box, position the cursor in the box and select Insert: Picture.

Another common file format supported by EDAX applications is the CSV format (comma separated
values). Results files and summary tables are typically saved as CSV files. Spectra can also be
saved in this format. This is actually a text format that is most commonly used to input data into a
spreadsheet program from which it can be plotted or have additional calculations made. Portions of
that spreadsheet file can be highlighted and copied to the clipboard and pasted into the Word report
directly (see below) or into a text box.
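Because CSV is plain text, such a file can also be read by a short script. The sketch below, in
Python, assumes a simple two-column (energy, counts) layout; the actual column arrangement of
an EDAX CSV export may differ, so the indices may need adjusting:

    import csv

    # Minimal sketch: read a two-column (energy, counts) CSV export.
    # The exact layout of an EDAX CSV file may differ; adjust the
    # column indices to match the file your system actually produces.
    energies, counts = [], []
    with open("spectrum.csv", newline="") as f:
        for row in csv.reader(f):
            try:
                e, c = float(row[0]), float(row[1])
            except (ValueError, IndexError):
                continue                  # skip header or comment lines
            energies.append(e)
            counts.append(c)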

Inserting Quantitative Results into a Report

1. In MS Excel
 Click and drag to highlight the information to be copied.
 Select Copy from the EDIT pull-down menu.

2. In MS Word
 Draw a text box or position the cursor where the results should appear.
 Select Paste from the EDIT pull-down menu.

Element    Wt %     At %
SiK        0.75     1.48
MoL        2.58     1.5
CrK        17.89    19.14
MnK        1.72     1.74
FeK        63.4     63.18
NiK        13.67    12.96
Total      100      100

Advanced Options
 Highlight the columns of the table
 Select Table Autoformat from the Table pull-down menu
 Choose a style
 Click OK

[Pie chart: “Stainless Steel Wt%” - FeK 62%, CrK 18%, NiK 14%, MoL 3%, MnK 2%, SiK 1%]

Graphs and charts created in MS Excel can also be copied and pasted into an MS Word
document.

The report can also include the BMP or TIF files from x-ray maps or electron images. Just as
with any BMP file, they can be inserted by clicking on “Insert”, “Picture” and the file names
provided as usual.
Fig. 1. BSE image of an aluminum alloy fracture surface.    Fig. 2. Aluminum x-ray map.

After inserting the first image (Fig. 1), the cursor sits at the lower right of the image. By
pressing the space bar a few times before doing the next insert, the second image will appear
to the right of the first but on the same line. After Figure 2 was inserted, the Enter key was
pressed twice to provide some space between the first and a possible second row of images.
This area can also be used for labeling the images.
If the x-ray maps had been collected at lower resolutions, it would have been possible to place
more than two of them on a single line. Alternatively, each image could be resized by clicking
on it and adjusting one of the corners. For multiple images, the percentage of enlargement
should be noted so that it can be reproduced on each additional image of the series.

a. BSE b. Phosphorus c. Silicon d. Titanium e. Carbon f. Oxygen

When creating a template document where the images may not have been collected yet, or are
being collected at the same time as the report is being generated, the text-box feature in MS
Word is handy. The steps for inserting a text-box are the same described on the previous
page. Inserting an image or pasting an image can be done right into the text-box. This is also
useful when handling large images and maps as a standard image insert would occupy a large
portion of the page. The images below have a 1024x800 resolution and have been reduced to
fit the size of the text-box.

The eDXAuto/Multi-Point Analysis software package provides integrated imaging with
automated analysis, including line scan and multi-point analysis features. Some of the graphic
overlays on an image are shown in the four examples below. Each image is saved as a BMP
file when the image area appears as shown. The image is refreshed (to Fig. I below) by
clicking on “Image” and “Display Current”. Images are inserted in Word as described
previously. It is possible to overlay more than one line scan on a single image, but the data are
usually communicated best with a single line scan. There is a direct printout in which all line
scans are printed in sequence on a single page.

I. Sandstone image area (BSE). II. Image with 14 multi-point locations.

III. Silicon linescan along blue line. IV. Calcium linescan.

The results of a multi-point analysis are provided by eDXAuto as a summary table that can be
highlighted and copied to the clipboard. It is then pasted into Word where it can be highlighted and
a font size selected to fit the page. Important samples and analysis locations can be highlighted by
underlining them or by making them boldface or a larger font. In the example on the following
page, a table title and caption were added to the summary table.

Table 1. Results of multi-point analyses from mineral sample. The sample locations with high
iron concentrations are shown in boldfaced type and italics.
DxAuto Results Summary : C:\DX4\AUTO_P\USR\SSA101.SMY 05-27-1997 17:07:44
KV: 20.0 Tilt: 0.0 TKOff: 36.8
Wt%:    C K    O K    NaK    MgK    AlK    SiK    P K    K K    CaK    TiK    FeK    Note
ssa101 3.57 54.51 0.15 0.00 0.24 40.43 0.63 0.11 0.10 0.07 0.20
ssa102 9.38 53.54 2.08 6.59 2.96 8.42 0.11 0.41 12.91 0.20 3.39
ssa103 19.32 46.58 0.24 7.96 0.51 1.39 0.36 0.74 19.29 0.14 3.47
ssa104 21.95 32.84 1.29 0.65 12.22 19.25 0.79 7.72 0.64 1.21 1.44
ssa105 8.68 53.46 0.33 5.33 1.70 2.27 0.30 0.78 15.78 0.07 11.31
ssa106 3.95 48.55 0.45 0.17 8.68 26.93 0.15 10.34 0.13 0.19 0.48
ssa107 8.87 53.82 0.37 5.83 0.17 0.23 0.14 0.23 19.73 0.16 10.46
ssa108 14.31 46.41 0.55 5.41 3.43 3.97 0.60 1.12 14.63 0.00 9.57
ssa109 9.98 54.75 0.38 8.55 0.49 0.68 0.21 0.24 18.45 0.18 6.09
ssa110 10.19 46.35 4.32 0.72 7.35 21.43 0.15 0.09 3.53 0.00 5.85
ssa111 4.36 48.05 0.38 0.18 8.69 26.91 0.18 10.39 0.16 0.16 0.55
ssa112 3.99 48.25 0.51 0.21 8.87 26.78 0.52 10.29 0.00 0.11 0.46
ssa113 4.07 47.94 0.43 0.18 8.77 27.08 0.53 10.48 0.16 0.06 0.30
ssa114 16.59 39.00 1.36 0.85 19.50 14.87 0.80 3.92 1.07 0.41 1.64
ssa115 4.16 48.21 0.19 0.18 8.67 27.05 0.45 10.58 0.26 0.05 0.19

The ability to interface with standard desktop publishing, spreadsheet, and image
enhancement software comes primarily from the use of three universal file formats (BMP,
TIF, and CSV). This ability is further enhanced by graphics overlays on the image and by
cutting and pasting to the Windows clipboard, providing an easy working environment for
the user.
X-Ray Analysis Summary
Parameters
Typical parameters for good resolution (spectrum dominated by peaks with energies greater than 1
keV) with the fewest artifacts:
--kV > 2X highest energy peak
--Time Constant = 50 or 100 us
--Deadtime = 20 to 40 % (adjusted by changing count rate)
--Take-off angle = 25 - 40 degrees
--Working distance = intersection distance (for inclined detectors)

When the spectrum has significant peaks with energies less than 1 keV, the parameters should be the
same as above except that the count rate should be 500 to 1000 cps. When the sample is to be
mapped, the same parameters should be used except that a count rate of 10,000 to 100,000 cps (10
to 2.5 us time constant) should be used in order to improve the statistical quality of the map. This
assumes that the sample will tolerate the heat caused by the higher beam currents, that the larger
spot sizes do not degrade the resolution of the image or maps, and that the very low energy peaks,
which usually require lower count rates and a longer time constant, are not of primary interest.

Artifacts
Escape Peaks = keV of parent peak - 1.74 keV (Silicon Ka energy).
Sum Peaks = 2X energy of the dominant peak in a spectrum, or
the energy of peak A + peak B when two peaks are dominant.
Stray Radiation or “System Peaks” = Peaks derived from the pole piece, stage,
sample holder or detector window support. These are usually more
significant with horizontal EDS detectors.
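These two rules are easy to apply programmatically when checking a suspect peak, as in the
minimal Python sketch below (the 1.74 keV figure applies to silicon-based detectors):

    # Predicted artifact positions for a silicon detector, per the rules above.
    SI_KA = 1.74  # keV, silicon Ka

    def escape_peak(parent_kev):
        return parent_kev - SI_KA

    def sum_peak(peak_a_kev, peak_b_kev=None):
        # 2x one dominant peak, or peak A + peak B when two peaks dominate.
        return peak_a_kev + (peak_b_kev if peak_b_kev else peak_a_kev)

    print(escape_peak(6.40))   # Fe Ka escape peak at 4.66 keV
    print(sum_peak(6.40))      # Fe Ka + Fe Ka sum peak at 12.80 keV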

Peak ID
Auto Peak ID will usually be adequate when the spectrum is dominated by K-series peaks. When
the spectrum contains several L- or M-series peaks, the best strategy may be to manually identify
the highest energy alpha peak and observe what other low-energy peaks are also associated with it.
Then continue with the next highest energy alpha peak and continue until all are identified.

Quantification
Whether the quantification involves the use of one or more standards, or if it is a standardless
analysis, the procedure is the same: the peak intensities are calculated apart from the background;
the peak intensities are compared to intensities of the pure elements to calculate the k-ratio; and
corrections are made for atomic number (Z), absorption (A) and fluorescence (F). When standards
are used, the pure element intensities are actually measured, but these can be calculated when no
standard is present. The ZAF corrections assume that the sample/detector geometry is well known,
that the sample is smooth, and that it is homogeneous. If these assumptions are accurate, then the
quantitative results will also be accurate.

Wavelength Dispersive X-ray Analysis


The use of x-ray spectrometry in electron microscopy has been a powerful market driver not only
for electron microscopes but also for x-ray spectrometers. More x-ray spectrometers are sold with
electron microscopes than in any other configuration. A general name for the combination is AEM,
or analytical electron microscope, though in modem times AEM can include other instrumentation
such as electron energy loss spectroscopy and visible light spectroscopy. A second type of x-ray
spectrometer measures the wavelength of the x-rays, and so is called ‘wavelength dispersive
spectrometry’ (WDS).

Wavelength spectrometers use crystals to diffract x-rays similar to the diffraction of visible light by
gratings. The regularly spaced array of atoms (or molecules) in the crystal diffract x-rays. Unlike
diffraction gratings, however, the crystal reflection only reflects one wavelength for each angle of
incidence. This is due to the difference between the two-dimensional diffraction grating and the
three dimensional diffraction of a crystal lattice. Diffraction from a three dimensional structure is
called Bragg diffraction. In Bragg diffraction the angle of reflection is equal to the angle of
incidence just as if the crystal were a mirror. Only one wavelength and its shorter wavelength
harmonics can be reflected for a given lattice spacing and angle of incidence. This means that most
crystal spectrometers must either be scanned or remain at one fixed wavelength position. The
exception to this statement is a small class of spectrometers that use a geometry that allows the x-
rays to simultaneously intercept the crystal at a range of angles, a different angle for each segment
of the crystal. These have very small collection solid angles and require a position sensitive detector
to record the spectrum. A crystal can only diffract x-rays that are shorter in wavelength than twice
the distance between the atoms (or molecules). This means that there is no one crystal that will work
for the whole x-ray range. Short wavelength x-rays can use crystals of typical 1 to 5 Angstrom
spacings. X-rays from the lighter elements need crystals with wider spacings, which are typically
organic crystals with large molecules. These crystals exist at spacings shorter than about 13
angstroms and cannot be used for elements lighter than fluorine.
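Bragg's law, n x lambda = 2 x d x sin(theta), makes these statements concrete. The Python sketch
below computes the first-order diffraction angle for Mn Ka on an LiF crystal (2d of roughly 4.03
angstroms); the sin(theta) <= 1 condition is exactly the "wavelength shorter than twice the
spacing" limit described above:

    import math

    def bragg_angle_deg(wavelength_angstrom, d_spacing_angstrom, order=1):
        """Bragg angle from n*lambda = 2*d*sin(theta); None if no solution."""
        s = order * wavelength_angstrom / (2 * d_spacing_angstrom)
        return math.degrees(math.asin(s)) if s <= 1 else None

    def wavelength_angstrom(energy_kev):
        return 12.398 / energy_kev        # E(keV) x lambda(A) ~ 12.398

    # LiF (2d ~ 4.03 A, so d ~ 2.01 A) diffracting Mn Ka (5.895 keV):
    lam = wavelength_angstrom(5.895)      # ~2.10 angstroms
    print(bragg_angle_deg(lam, 2.01))     # first-order angle, ~31.5 degrees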

For the lightest elements artificial crystals are used, either Langmuir-Blodgett films or, more
recently, sputtered multilayers. Langmuir-Blodgett films are made from soaps that contain heavy
metals. The soap molecules can be floated on water with their hydrophilic ends in the water and
their hydrophobic ends up, giving a uniform layer one molecular layer thick. When a substrate is
repeatedly dipped into the water the soap layer is picked up onto the surface one layer at a time,
making an artificial crystal with the molecules aligned within each layer. This layer stack can then
be used to diffract x-rays. Sputtered multilayers are made by physical vapor deposition, with
alternating layers of heavy and light elements. These multilayers can be made at custom spacings
from about 15 angstroms to 200 angstroms or more. They have higher reflectivities than the soap
films and are more stable.

The first x-ray spectrometer-electron microscope combinations were special purpose WDS
instruments. The microscope was specifically modified to have a spectrometer bigger than the
column built onto the side. They were useful as prototypes of analytical microscopes and were used
to develop AEM techniques, but it was not until EDS was developed that analytical electron
microscopes became popular.
Today there are many applications of WDS, and several companies make WDS spectrometers for
use with both TEMs and SEMs. The advantages of WDS are higher resolution and lower
background. Typical resolution of a WDS spectrometer is 20 eV (at Mn Ka), whereas the best EDS
spectrometers have 130 eV. At Na Ka the WDS can give as low as 3 eV FWHM versus 80 eV for
EDS. These advantages give WDS the edge in measurements where minimum detectable limits are
important. They also make possible measurements such as potassium in a high background of
calcium, which is important in some areas of medicine and biology. The disadvantage of WDS is
low throughput. WDS spectrometers collect a relatively small solid angle and must be slowly
scanned to obtain a spectrum. A recent commercial development is the integration of both WDS and
EDS in one unit with the hardware and software to collect, analyze and display their spectra
simultaneously.

[Figure: an EDS spectrum and a WDS spectrum, compared]

WDS spectrometers use an angular scan to generate a spectrum. The crystal moves and rotates and
the detector moves along at twice the speed to keep up with the mirror reflection, giving what is
called a theta-two-theta scan. This takes a lot of space and precision mechanics. Since collimating
optics are not available in the x-ray region, the crystals are efficient only over a small solid angle
and collection efficiency is low. Detectors are typically flow proportional counters, which have a
high count rate capability and sensitivity to even the softest x-rays.

X-ray spectrometry is a good example for pondering nature’s constraints. EDS methods do not
require scanning, and so can detect an entire spectrum at once. On the other hand an EDS
spectrometer can only detect one x-ray photon at a time and does not care whether the x-ray is in
your range of interest or not. This could cause your maximum count rate to be exceeded by spectra
outside your range of interest. WDS spectrometers can withstand very high background or signal
count rates, but can measure only one wavelength at a time. Simultaneous WDS spectrometers
always give up either collection area, or spectral range in order to give a simultaneous spectrum.
These constraints are not easily resolved and so the best spectrometer
depends on the exact application. This leaves room for both EDS and WDS in modern analytical
electron microscopes.

WDS Crystal Spectrometer on a SEM Column


Photography
Basic Principle

The process of photography is basically a series of chemical reactions. A specific class of
compounds known as silver halide salts is light sensitive. Usually these salts consist of
silver bromide (although iodide and chloride are sometimes used). When these salt grains
are struck by a given number of photons, the energy of the photons is imparted to them and
they undergo a change to their activated state. In this activated state, these particular silver
halide grains can undergo a chemical process known as development to become black silver
grains. The unexposed silver grains are dispersed through a gel matrix known as an
emulsion. This emulsion is supported by either a clear backing (acetate or glass plates) or on
paper.

The activated silver halide grains are developed to black silver particles by a reducing agent,
the developer. Developer solutions are basic, having a pH higher than seven. Because
developer will eventually reduce even those grains which are not in a highly activated state
or which have received very few photons, the development process must be stopped. This is
accomplished either by using a stop bath, which is usually a mild acid solution, or by placing
the material in running water, which has a low enough pH to stop the development. This step
is known as the stop process.

The remaining silver halide grains still have the potential of undergoing reduction and becoming
visible as black grains even after the stop step. To prevent light from later developing these
grains and causing the image to darken with time, these undeveloped grains must be
removed in a process known as fixation. Photographic fixatives are usually thiosulfate salts.
These have the ability to remove from the emulsion the unactivated silver halide grains that
do not come out in the developing or stopping steps.

Thus the photographic process is a series of 1) light activation, 2) development, and 3)
fixation.

The two primary factors in choosing a photographic emulsion are light sensitivity and grain
size. The term grain size literally refers to the size of the exposed and developed silver
particles in the emulsion. These can range from 0.2 um to 20 um in size with “fine grain”
high-resolution films being at the smaller end of the spectrum. Remember that 0.2 um is
equal to 200 nm and begins to approach the resolution limit of a light microscope! This is an
important feature of a film in that it allows a negative to be enlarged greatly before one
begins to see the actual grains. The distribution of these grains is also important with low
speed films having a uniform distribution of grains whereas high speed films tend to have a
wide distribution of different sized grains.

There are two types of emulsions that are distinguished by their sensitivity to different
energy sources. Panchromatic emulsions are sensitive to all wavelengths of light and for
this reason must be handled in total darkness until the fixation stage is complete.

Orthochromatic emulsions are sensitive to only certain wavelengths of light and can usually
be handled under a safelight. Polycontrast paper has a variety of different sized silver
grains in its emulsion. This allows the activation of specific sized grains depending upon
which colored filter (wavelength) is used. The size of these grains and their dispersion
change the exposure curve for the paper and are responsible for making prints of
different contrasts.

Exposure

In order to activate the silver grains of an emulsion, it must be exposed to an illumination
source. Exposure is defined as the darkening effect of light upon the silver halides of the
emulsion. It is the product of the intensity of illumination (I) times the length of exposure in
seconds (T):

E = I x T

This is the Reciprocity Law. Image density refers to the ability of the image to impede the
transmittance of light. However, this relationship does not follow a straight-line equation for films;
each film has a characteristic curve that reflects its response when exposed under a variety of
conditions. This characteristic curve has three portions: the toe (underexposure), the straight-line
portion (proper exposure), and the shoulder (overexposure). Each film and developer combination
produces its own unique curve. The slope of the straight-line portion of the curve is known as
gamma. This is important because gamma relates to the ultimate contrast found in the emulsion. A steep
curve will yield an emulsion with high contrast whereas a shallow curve will yield one with lower
contrast.
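
To make these relationships concrete, here is a minimal Python sketch (the density values
are illustrative, not measured film data) of the Reciprocity Law and of estimating gamma
from the straight-line portion of a characteristic curve:

    # Reciprocity Law: E = I x T. The same exposure can come from many
    # intensity/time combinations (100 x 0.5 = 200 x 0.25, etc.).
    def exposure(intensity, time_s):
        return intensity * time_s

    E = exposure(intensity=100.0, time_s=0.5)

    # Hypothetical (log10 exposure, density) points from the straight-line
    # (properly exposed) region of a characteristic curve:
    log_E   = [0.0, 0.3, 0.6, 0.9]
    density = [0.5, 0.8, 1.1, 1.4]

    # Gamma is the slope of density versus log10(exposure) in this region;
    # a steeper slope means a higher-contrast emulsion.
    gamma = (density[-1] - density[0]) / (log_E[-1] - log_E[0])
    print(f"gamma = {gamma:.2f}")   # 1.00 for these illustrative numbers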
Micrographs as Data
As a microscopist your final data, the material that you will present to colleagues for peer
review, are images. As such they should be both scientifically informative and aesthetically
pleasing. Let’s take a look at how they can be both.


As with scientific writing, scientific micrographs need to be brief, informative, and well crafted.
With the exception of review articles, taxonomic treatises, and other similar publications, one tries
to use the fewest number of figures to communicate the data. Perhaps the best example of this
“brevity is everything” concept can be found on the pages of Science. Micrographs in this journal
are known for being very small and very few. Unlike other forms of data presentation (graphs,
tables, charts, line drawings, etc.) it is unusual for a single micrograph to contain a great deal of
information. In fact most micrographs contain information about only a single feature or in the case
of three-dimensional reconstruction, a single image may contain only a small portion of the
information that the author is trying to convey.

Most professional publications limit the authors to a certain number of plates or in some
cases, a certain number of printed pages. When one considers how much written material
can be presented on a page of text, the need for image brevity becomes apparent. Thus the
first rule of image publication is to use as few micrographs as possible to illustrate a given
point and if a single micrograph can be used to illustrate multiple points then it should be
given preference over others.

The second rule is to make the micrograph as small as is possible without losing the data.
More micrographs per plate translates to more data per page. This is why it is important to
not fill the entire image with the specimen when one is using large format negatives (TEM
and Polaroid). One can always safely enlarge a negative 2-3 times the original size but
image reduction is often very difficult to do. A good way to evaluate if an image is too
small is to photocopy it on a standard, poor quality photocopier. If the data within the
micrograph is lost, it is probably too small. Also be certain to check the “Instructions to
Authors” section of the particular journal to which you intend to submit the manuscript. Some
will mention that image reduction is at the publisher’s discretion while others will insist that
the final plate size be of a specific dimension to avoid further reduction. It is a good idea to
assemble the final plate so that it will fit within the standard size of that particular journal
without further reduction and to specify this in your letter to the editor.

A third rule to bear in mind is that it is still VERY expensive to publish in color. If one can convey
the data in a black and white micrograph then this should be done, even if it requires the use of 2-3
separate micrographs to convey the data contained in a single color micrograph. This is NOT the
case with presentation images, which will be discussed separately. Even when using techniques
such as 3-D confocal imaging, a pair of stereo black and white micrographs, or
even a single 2-D volume projection, can often convey the essential information. Color
micrographs should be taken as well as black and white ones, and for this reason many
fluorescence microscopes are equipped with two cameras, one loaded with color slide film
and one loaded with black and white print film.
Captions and Labels

The labels and captions that accompany your plates are almost as important as the
micrographs themselves. A well written figure legend should allow the reader to understand
the micrographs without having to refer back to (or even have read) the text of the
manuscript. The same is true of figure labels which when possible should be obvious to the
reader and the same as those used in the text. It is important to define the abbreviated labels
either in the body of the captions (once defined it should not be redefined) or as a “Key to
Figures” presented before Figure 1. Other types of labels (arrows, arrowheads, stars,
asterisks, etc.) should be defined each time they are used as one often needs an arrow to
illustrate different features in different micrographs. Labels come in a variety of styles and
sizes. It is important to use the same style throughout the manuscript. Black on white
lettering is the most versatile but pure black or pure white can also be used.

The final thing that should be included on each figure in a plate is the scale bar. Some
authors prefer to simply include “image magnification” as part of the figure legend but this
runs the risk of misinterpretation should the figure be enlarged or reduced from its original
size. A scale bar, incorporated into the micrograph, will remain useful regardless of how the
image is magnified as it will always stay proportional to the original. Scale bars have the
further advantage of brevity for if a similar magnification is displayed on a single plate of
figures one can simply state “Scale bar = ?? for all figures.”
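
As a simple illustration of why the bar survives resizing, here is a short Python sketch
(with assumed numbers) that converts a desired bar length in specimen units into image
pixels at acquisition; because the bar is drawn into the image itself, any later enlargement
or reduction rescales bar and specimen together:

    # Assumed acquisition calibration: 50 nm of specimen per image pixel.
    pixel_size_um = 0.05

    def scale_bar_pixels(bar_length_um, pixel_size_um):
        # Length of a scale bar in image pixels for a known pixel size.
        return bar_length_um / pixel_size_um

    print(scale_bar_pixels(10.0, pixel_size_um))   # a 10 um bar spans 200 px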

Plate Construction

The actual assembly of the plate (group of related micrographs on a single page) is one of
the most difficult steps in publishing micrographs. A photocopier with enlarging and
reduction functions can be an extremely useful tool and can greatly aid your plate
production. It is always best to do a plate work-up using photocopied images as these are
cheap, easy to produce and modify, and can be cut and arranged to create the “lay out” of
the plate. Many journals require that all the figures be abutting whereas others allow you to
separate the individual images with black or white tape.

Several methods of actually attaching the micrographs to the stiff board can be used. Rubber
cement can work but tends to wrinkle the micrographs and can be messy. Dry mount is a
heat sensitive adhesive that lays flat and is very permanent. A number of spray adhesives
come in a variety of permanence levels and are good for different purposes.

Micrographs as Art

While the first requirement of any micrograph is that it be scientifically informative, a
second requirement is that it be aesthetically pleasing. This means that the contrast, overall
brightness, neatness of labeling, and general flow of the individual micrographs that make
up a plate should all go together. A good photographer’s rule of thumb is that one takes 8-10
pictures for every one published. The same ratio applies to scientific photography, only the
ratio may be quite a bit higher. Attention to detail goes a long way towards getting reviewers
and readers to take you seriously. Micrographs with knife marks, poor fixation, sloppy
darkroom technique, etc. suggest that you are not serious about your science or your data. If
you are not serious, why should your colleagues take you seriously? When deciding to which
journal you should submit your micrographs, consider how well that particular journal
reproduces black and white halftones. If you are not happy with the quality it gives to the
work of other authors, assume that you will not be happy with the way your micrographs are
reproduced. In today’s world there are plenty of journals, and you should be able to choose
at least one that meets your high standards for micrographs.

Presentation Micrographs

Micrographs prepared for presentation are quite different from those prepared for a
manuscript. First of all, color is not a major obstacle; in fact, with today’s slide-maker
software, people have come to expect color, even when one is dealing with SEMs and
TEMs, where color has to be artificially added to the image.

When one is preparing images for a poster presentation size is as important as it was when
preparing a manuscript plate. In this case the images must be large enough to be comfortably
viewed from a distance of four to five feet. If you cannot read the text or see the data in the
micrograph from this distance then things are too small and you should work to enlarge it. With a
poster one can usually have a little more latitude with the number of figures used but bear in mind
that many poster sizes are quite restricted and you may be very limited in the figures that you
can use. When giving an oral presentation it is usually better to err on the side having too
many figures because the eye quickly gets bored when it has no text to read. If the audience
is only listening to your words then having multiple images, even if they all illustrate
essentially the same thing, it works to your advantage. My personal record is 115 figures in
a 15 minute talk but 30 to 40 is my average. Here is where aesthetics can really come into
play; be certain that when you see that “really gorgeous” shot that you take it, even if there
is nothing scientifically important about the image. You will someday be glad that you did.
Electron Beam Lithography
{from Cornell and SPIE Handbook of Microlithography,
Micromachining and Microfabrication}

Electron beam lithography (EBL) is a specialized technique for creating the extremely fine patterns (much
smaller than can be seen by the naked eye) required by the modern electronics industry for integrated
circuits. Derived from the early scanning electron microscopes, the technique in brief consists of scanning a
beam of electrons across a surface covered with a resist film sensitive to those electrons, thus depositing
energy in the desired pattern in the resist film. The process of forming the beam of electrons and scanning it
across a surface is very similar to what happens inside the everyday television or CRT display, but EBL
typically has three orders of magnitude better resolution. The main attributes of the technology are 1) it is
capable of very high resolution, almost to the atomic level; 2) it is a flexible technique that can work with a
variety of materials and an almost infinite number of patterns; 3) it is slow, being one or more orders of
magnitude slower than optical lithography; and 4) it is expensive and complicated - electron beam
lithography tools can cost many millions of dollars and require frequent service to stay properly maintained.

The first electron beam lithography machines, based on the scanning electron microscope (SEM), were
developed in the late 1960s. Shortly thereafter came the discovery that the common polymer PMMA
(polymethyl methacrylate) made an excellent electron beam resist [1]. It is remarkable that even today,
despite sweeping technological advances, extensive development of commercial EBL, and a myriad of
positive and negative tone resists, much work continues to be done with PMMA resist on converted SEMs.
Fig. 2.1 shows a block diagram of a typical electron beam lithography tool. The column is responsible for
forming and controlling the electron beam.

Underneath the column is a chamber containing a stage for moving the sample around and facilities for
loading and unloading it. Associated with the chamber is a vacuum system needed to maintain an
appropriate vacuum level throughout the machine and also during the load and unload cycles. A set of
control electronics supplies power and signals to the various parts of the machine. Finally, the system is
controlled by a computer, which may be anything from a personal computer to a mainframe. The computer
handles such diverse functions as setting up an exposure job, loading and unloading the sample, aligning
and focusing the electron beam, and sending pattern data to the pattern generator. The part of the computer
and electronics used to handle pattern data is sometimes referred to as the datapath. The following figure
shows a picture of a typical commercial EBL system including the column, chamber, and control electronics.
Optical {photo} lithography:
Optical lithography using lenses that reduce a mask image onto a target (much like an enlarger in
photography) is the technique used almost exclusively for all semiconductor integrated circuit manufacturing.
Currently, the minimum feature sizes that are printed in production are a few tenths of a micrometer. For
volume production, optical lithography is much cheaper than EBL, primarily because of the high throughput
of the optical tools. However, if just a few samples are being made, the mask cost (a few thousand dollars)
becomes excessive, and the use of EBL is justified. Today optical tools can print 0.25 um features in
development laboratories, and 0.18 um should be possible within a few years.

By using tricks, optical lithography can be extended to 0.1 um or even smaller. Some possible tricks include
overexposing/overdeveloping, phase shift and phase edge masks, and edge shadowing [9]. The problem
with these tricks is that they may not be capable of exposing arbitrary patterns, although they may be useful
for making isolated transistor gates or other simple sparse patterns. Another specialized optical technique
can be used to fabricate gratings with periods as small as 0.2 um by interfering two laser beams at the
surface of the sample [10]. Again, the pattern choice is very restricted, although imaginative use of blockout
and trim masks may allow for the fabrication of simple devices.

X-ray proximity printing may be a useful lithographic technique for sub-0.25 um features [11]. Again, it
requires a mask made by EBL, and since the mask is 1× (written at the final feature size) this can be a formidable challenge. However, if the
throughput required exceeds the limited capabilities of EBL, this may be an attractive option. The
disadvantage is that x-ray lithography is currently an extremely expensive proposition and the availability of
good masks is limited. It also requires either a custom built x-ray source and stepper or access to a
synchrotron storage ring to do the exposures. With care, x-ray lithography can also be extended to the sub-
0.1 um regime [12].

The final technique to be discussed is ion beam lithography. The resolution, throughput, cost, and complexity
of ion beam systems is on par with EBL. There are a couple of disadvantages, namely, limits on the
thickness of resist that can be exposed and possible damage to the sample from ion bombardment. One
advantage of ion beam lithography is the lack of a proximity effect, which causes problems with linewidth
control in EBL. Another advantage is the possibility of in situ doping if the proper ion species are available
and in situ material removal by ion beam assisted etching. The main reason that ion beam lithography is not
currently widely practiced is simply that the tools have not reached the same advanced stage of development
as those of EBL.

Pattern Writing:
The net result of the electron scattering is that the dose delivered by the electron beam tool is not confined to
the shapes that the tool writes, resulting in pattern specific linewidth variations known as the proximity effect.
For example, a narrow line between two large exposed areas may receive so many scattered electrons that it
can actually develop away (in positive resist) while a small isolated feature may lose so much of its dose due
to scattering that it develops incompletely. Fig. 2.13 shows an example of what happens to a test pattern
when proximity effects are not corrected. [30]
Proximity Effect Avoidance
Many different schemes have been devised to minimize the proximity effect. If a pattern has fairly uniform
density and linewidth, all that may be required is to adjust the overall dose until the patterns come out the
proper size. This method typically works well for isolated transistor gate structures. Using higher contrast
resists can help minimize the linewidth variations. Multilevel resists, in which a thin top layer is sensitive to
electrons and the pattern developed in it is transferred by dry etching into a thicker underlying layer, reduce
the forward scattering effect, at the cost of an increase in process complexity.

Higher beam voltages, from 50 kV to 100 kV or more, also minimize forward scattering, although in some
cases this can increase the backscattering. When writing on very thin membranes such as those used for x-ray
masks, higher voltages reduce the backscatter contribution as well since the majority of electrons pass
completely through the membrane. [31]
Conversely, by going to very low beam energies, where the electron range is smaller than the minimum
feature size, the proximity effect can be eliminated. [32] The penalty is that the thickness of a single layer
resist must also be less than the minimum feature size so that the electrons can expose the entire film
thickness. The electron-optical design is much harder for low voltage systems since the electrons are more
difficult to focus into a small spot and are more sensitive to stray electrostatic and magnetic fields. However,
this is the current approach in optical maskmaking, where a 10 kV beam is used to expose 0.3 um thick
resist with 1 um minimum features on a 5× mask. In more advanced studies, a 1.5 kV beam has been used to
expose 70 nm thick resist with 0.15 um minimum features. [33] A technique that can be used in conjunction
with this approach in order to increase the usable range of electron energy is to place a layer with a high
atomic number, such as tungsten, underneath the resist. This has the effect of further limiting the range of
the backscattered electrons.

SEM micrograph of a positive resist pattern on silicon exposed with a 20 kV electron beam demonstrates the
proximity effect, where small isolated exposed areas receive less dose relative to larger or more densely
exposed areas. [From Kratschmer, [30] 1981]

Proximity Effect Correction


Dose modulation
The most common technique of proximity correction is dose modulation, where each individual shape in the
pattern is assigned a dose such that (in theory) the shape prints at its correct size. The calculations needed
to solve the shape-to-shape interactions are computationally very time consuming. Although the actual effect
of electron scattering is to increase the dose received by large areas, for practical reasons proximity
correction is normally thought of in terms of the large areas receiving a base dose of unity, with the smaller
and/or isolated features receiving a larger dose to compensate.

Several different algorithms have been used. In the self-consistent technique, the effect of each shape on all
other shapes within the scattering range of the electrons is calculated. The solution can be found by solving a
large number of simultaneous equations; [34] unfortunately, this approach becomes unwieldy as the number
of shapes increases and their size decreases. An alternative is to define a grid and compute the interaction of
the pattern shapes with the grid and vice versa; [35] however, the accuracy and flexibility of this technique
may be limited. An optimal solution may also be arrived at by an iterative approach. [36] Finally, neural
network techniques have been applied to the problem of proximity correction; [37] while not an attractive
technique when implemented on a digital computer, it might be advantageous if specialized neural network
processors become a commercial reality. Many of the algorithms in use assume that the energy distribution
has a double Gaussian distribution as discussed in Sec. 2.3.
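
For readers who want to experiment, the following Python sketch evaluates the double
Gaussian energy deposition model mentioned above. The parameter values (forward-scatter
range alpha, backscatter range beta, backscatter energy ratio eta) are purely illustrative,
since the real values depend on resist, substrate, and beam voltage:

    import math

    def double_gaussian(r_um, alpha_um=0.05, beta_um=3.0, eta=0.8):
        # Radial energy density deposited at distance r from a point exposure:
        # a narrow forward-scattered Gaussian plus a broad backscattered one.
        norm = 1.0 / (math.pi * (1.0 + eta))
        fwd  = math.exp(-(r_um / alpha_um) ** 2) / alpha_um ** 2
        back = eta * math.exp(-(r_um / beta_um) ** 2) / beta_um ** 2
        return norm * (fwd + back)

    # The broad backscatter tail is why dense regions accumulate extra dose:
    for r in (0.0, 0.1, 1.0, 3.0):
        print(f"r = {r:4.1f} um: f(r) = {double_gaussian(r):.3e}")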
Pattern biasing
A computationally similar approach to dose modulation is pattern biasing. [38-39] In this approach, the extra
dose that dense patterns receive is compensated for by slightly reducing their size. This technique has the
advantage that it can be implemented on EBL systems that are not capable of dose modulation. However,
the technique does not have the dynamic range that dose modulation has; patterns that contain both very
isolated features and very dense features will have reduced process latitude compared to when dose
modulation is used, since the isolated features will be under-dosed while the dense features will be
overdosed. Pattern biasing cannot be applied to features with dimensions close to the scale of the pixel
spacing of the e-beam system.
GHOST
A third technique for proximity correction, GHOST,[40] has the advantage of not requiring any computation at
all. The inverse tone of the pattern is written with a defocused beam designed to mimic the shape of the
backscatter distribution (Fig. 2.14). The dose of the GHOST pattern, ηe / (1 + ηe) times the base dose
(where ηe is the ratio of backscattered to forward-deposited energy), is set to match the
large area backscatter dose. After the defocused inverse image is written, the pattern will have a roughly
uniform background dose. GHOST is perhaps an underutilized technique; under ideal conditions it can give
superb linewidth control. [41] Its disadvantages are the extra data preparation and writing time, a slight to
moderate loss of contrast in the resist image, and a slight loss in minimum resolution compared to dose
modulation due to the fact that GHOST does not properly correct for forward scattering.
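
A one-line calculation shows how the GHOST dose is set in practice; this is only a sketch,
and the backscatter ratio used here is an assumed value:

    def ghost_dose_fraction(eta_e):
        # Fraction of the base dose given to the inverse-tone GHOST exposure.
        return eta_e / (1.0 + eta_e)

    print(ghost_dose_fraction(0.8))   # ~0.44 of the base dose for eta_e = 0.8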
Software
A number of companies for some time have had proprietary software for proximity correction. [25] [42-43]
Just recently, commercial proximity packages have become available, or are about to become available. [44-
45] At present, these are limited in their accuracy, speed, and data volume capability; while excellent for
correcting small research patterns, they may have difficulties with complex chips. Finally, several packages
have been developed at university and government laboratories, some of which might be available to an
adventurous user with excessive amounts of free time. [38] [46]

Schematic showing how the GHOST technique can be used to correct for the proximity effect. The top
curves show the energy distribution in the resist for a group of seven lines from the primary exposure
and from the GHOST exposure. The bottom curve is the resulting final energy distribution, showing the
dose equalization for all the lines.

Positive Resists
In the simplest positive resists, electron irradiation breaks polymer backbone bonds, leaving fragments of
lower molecular weight. A solvent developer selectively washes away the lower molecular weight fragments,
thus forming a positive tone pattern in the resist film.
PMMA
Polymethyl methacrylate (PMMA) was one of the first materials developed for e-beam lithography. [133-134]
It is the standard positive e-beam resist and remains one of the highest resolution resists available. PMMA is
usually purchased [135] in two high molecular weight forms (496K or 950K) in a casting solvent such as
chlorobenzene or anisole. PMMA is spun onto the substrate and baked at 170 C to 200 C for 1 to 2 hours.
Electron beam exposure breaks the polymer into fragments that are dissolved preferentially by a developer
such as MIBK. MIBK alone is too strong a developer and removes some of the unexposed resist. Therefore,
the developer is usually diluted by mixing in a weaker developer such as IPA. A mixture of 1 part MIBK to 3
parts IPA produces very high contrast [136] but low sensitivity. By making the developer stronger, say, 1:1
MIBK:IPA, the sensitivity is improved significantly with only a small loss of contrast.

The sensitivity of PMMA also scales roughly with electron acceleration voltage, with the critical dose at 50 kV
being roughly twice that of exposures at 25 kV. Fortunately, electron guns are proportionally brighter at higher
energies, providing twice the current in the same spot size at 50 kV. When using 50 kV electrons and 1:3
MIBK:IPA developer, the critical dose is around 350 uC/cm2. Most positive resists will show a bias of 20 to
150 nm (i.e. a hole in the resist will be larger than the electron beam size), depending on the resist type,
thickness, contrast, development conditions, and beam voltage.
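
A quick Python sketch of the rough dose-voltage scaling described above, using the
350 uC/cm2 figure at 50 kV as the reference point (the linear scaling is an approximation,
and real doses also depend on developer strength and process conditions):

    def critical_dose(voltage_kv, ref_dose=350.0, ref_kv=50.0):
        # Estimate PMMA critical dose (uC/cm2), scaling linearly with voltage.
        return ref_dose * (voltage_kv / ref_kv)

    for kv in (25, 50, 100):
        print(f"{kv:3d} kV: ~{critical_dose(kv):.0f} uC/cm2")   # 175, 350, 700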

When exposed to more than 10 times the optimal positive dose, PMMA will crosslink, forming a negative
resist. It is simple to see this effect after having exposed one spot for an extended time (for instance, when
focusing on a mark). The center of the spot will be crosslinked, leaving resist on the substrate, while the
surrounding area is exposed positively and is washed away. In its positive mode, PMMA has an intrinsic
resolution of less than 10 nm. [137] In negative mode, the resolution is at least 50 nm. By exposing PMMA
(or any resist) on a thin membrane, the exposure due to secondary electrons can be greatly reduced and the
process latitude thereby increased. PMMA has poor resistance to plasma etching, compared to novolac-
based photoresists. Nevertheless, it has been used successfully as a mask for the etching of silicon nitride
[138] and silicon dioxide, [139] with 1:1 etch selectivity. PMMA also makes a very effective mask for
chemically assisted ion beam etching of GaAs and AlGaAs. [140]
EXAMPLE PROCESS: PMMA POSITIVE EXPOSURE AND LIFTOFF
1. Start with 496K PMMA, 4% solids in chlorobenzene. Pour resist onto a Si wafer and spin at
2500 rpm for 40 to 60 seconds.

2. Bake in an oven or on a hotplate at 180 C for 1 h. Thickness after baking: 300 nm.

3. Expose in e-beam system at 50 kV, with doses between 300 and 500 uC/cm2. (Other
accelerating voltages may be used. The dose scales roughly with the voltage.)

4. Develop for 1 min in 1:3 MIBK:IPA. Rinse in IPA. Blow dry with nitrogen.

5. Optional descum in a barrel etcher: 150W, 0.6 Torr O2.


6. Mount in evaporator and pump down to 2×10^-6 Torr.
7. Evaporate 10 nm Cr, then 100 nm Au.

8. Remove from evaporator, soak sample in methylene chloride for ~10 min.

9. Agitate substrate and methylene chloride with an ultrasonic cleaner for ~1 min to complete the
liftoff. Rinse in IPA. Blow dry.

SEM and STEM Conversions


Any tool for microscopy - optical, electron, or scanning probe - may be adapted to work in reverse; that is, for writing
instead of reading. Converted electron microscopes suffer the same limitations as light microscopes used for
photolithography, namely, a small field of view and low throughput. Nevertheless, for a subset of research and R&D
applications, converted SEMs offer a relatively inexpensive solution.
Of the many custom designed SEM conversions, most use a single set of digital-to-analog converters (DACs), from 12
to 16 bits wide, to drive the scan coils of the microscope. The beam is modulated with an electrostatic or magnetic beam
blanker, which is usually located near a crossover of the beam. Alternatively, the beam can be blanked magnetically by
biasing the gun alignment coils, or not blanked at all. In the latter case, the beam must be "dumped" to unused sections of
the pattern. The figure below illustrates the "vector scan" method, in which shapes are filled with a raster pattern and the
beam jumps from one shape to the next via a direct vector. By taking over the scan coils and beam blanking, a SEM can
be used as a simple but high resolution lithography tool.

SEM conversions have evolved greatly in the past twenty years, primarily due to improvements in small computers and
commercially available DAC boards. Early designs used relatively slow computers that sent primitive shapes
(rectangles, trapezoids, and lines) to custom hardware. The custom pattern generator filled in the shapes by calculating
coordinates inside the shapes and feeding these numbers to the DACs. While this approach is still the best way to avoid
data transmission bottlenecks (and is used in commercial systems), inexpensive SEM conversions can now rely on the
CPU to generate the shape filling data. A typical configuration uses an Intel CPU based PC, with a DAC card plugged
into an ISA bus. In this case, the CPU can generate data much faster than it can be transmitted over an ISA bus.

The vector-scan writing strategy. (a) Patterns are split into "fields".
The stage moves from field to field, as shown by the arrows. Full
patterns are stitched together from multiple fields. (b) In many
vector-scan systems the fields are further tiled into subfields. A
major DAC (16 bits) deflects the beam (a small "Gaussian" spot)
to a subfield boundary, and a faster DAC (12 bits) deflects the
beam within a subfield. SEM conversion kits typically do not
include the faster 12-bit DAC. (c) The primitive shape is filled in
by rastering the spot. Between shapes the beam is turned off
("blanked") and is deflected in a direct vector to the next shape.
An alternative deflection strategy (not shown) is to use the major
DAC to deflect the beam to the origin of each primitive shape.

The bus limits the deflection speed to around 100 kHz, that is, to a dwell time per point of 10 us.

What dwell time is required? With a 16-bit DAC and a SEM viewing field of 100 um, the size of a pixel (the smallest
logically addressable element of an exposure field) is 100 um / 2^16 = 1.5 nm, and its area A is the square of this. The charge
delivered to this pixel in a time t is It, where I is the beam current. This must equal the dose times the pixel area. Given
a beam current I on the order of 50 pA and a required dose D around 200 uC/cm2 (typical for PMMA), we have a pixel
dwell time

t = DA / I = 9×10^-8 s, (2.1)

or a deflection speed of 11 MHz. This being impossible with an ISA bus, we must either space out the exposure points,
apply a short strobe to the beam blanker, or use a combination of the two. When the exposure points are spaced every n
pixels (that is, when the 2^16 available exposure points are reduced by a factor of n) then the "pixel area" and thus the
dwell time is increased by a factor of n^2. Note that the placement of features can still be specified to a precision of 2^16
within the writing field, while the shapes are filled in with a coarser grid.

In the above example, we can set n to 11 so that the dwell time is increased to 1.1×10^-5 s (91 kHz), increasing the pitch of
exposure points to 16.5 nm. This spacing is a good match to the resolution of PMMA, and allows fine lines to be
defined without any bumps due to pixelization. However, when we require 100 times the current (5000 pA in this
example), the exposure point spacing must be increased by a factor of 10, possibly leading to rough edges. Some pattern
generators avoid this problem by allowing different exposure point spacings in the X and Y (or in the r and theta)
directions, thereby allowing a larger exposure point spacing in the less critical dimension.
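
The arithmetic above is easy to reproduce. This short Python sketch recomputes Eq. 2.1 and the
effect of spacing exposure points every n pixels, using the same numbers as the text (the text's
91 kHz and 16.5 nm come from the rounded 1.5 nm pixel size):

    field_um = 100.0
    pixel_cm = (field_um / 2**16) * 1e-4   # ~1.5 nm pixel, converted to cm
    dose     = 200e-6                      # C/cm2 (200 uC/cm2, typical PMMA)
    current  = 50e-12                      # A (50 pA beam)

    t = dose * pixel_cm**2 / current       # Eq. 2.1: t = D*A/I
    print(f"dwell = {t:.1e} s, rate = {1/t/1e6:.0f} MHz")   # ~9e-8 s, ~11 MHz

    n = 11                                 # expose every 11th pixel
    t_n = t * n**2                         # dwell time grows as n^2
    print(f"n = {n}: dwell = {t_n:.1e} s ({1/t_n/1e3:.0f} kHz), "
          f"pitch = {n * pixel_cm * 1e7:.1f} nm")           # ~1.1e-5 s, ~17 nm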

To use a SEM without a beam blanker, one must consider the large exposure point spacing required for common resists.
Lack of a beam blanker leads to the additional problem of artifacts from the settling of scan coils and exposure at beam
dump sites. Many SEM manufacturers offer factory-installed beam blankers. Retrofitted blankers are also sold by Raith
GmbH. [47]

The scan coils of a SEM are designed for imaging in a raster pattern and so are not commonly optimized for the random
placements of a vector scan pattern generator. Settling times are typically around 10 us for a JEOL 840 to as long as 1
ms for the Hitachi S800, where the bandwidth of the scan coils has been purposely limited to reduce noise in the
imaging system. Thus, it is important to consider the bandwidth of the deflection system when purchasing a SEM for
beamwriting.

The other major limitation of a SEM is its stage. Being designed for flexible imaging applications, SEM stages are not
flat, and even when equipped with stepper motor control are no more accurate than ~1 to 5 um. Periodic alignment
marks can be used to stitch fields accurately, but this requires extra processing as well as the use of photolithography for
printing alignment marks. The mark mask would presumably be fabricated on a commercial system with a laser-
controlled stage. Fortunately, alignment with a converted SEM can be quite accurate, especially when using Moiré
patterns for manual alignment. Automated alignment in the center of a SEM writing field is at least as good as in large
commercial systems. Alignment at the edges of a SEM field will be compromised by distortions, which are typically
much larger than in dedicated e-beam systems. Laser-controlled stages can be purchased for SEMs, but these are usually
beyond the budgets of small research groups.

Electron beam lithography requires a flat sample close to the objective lens, making secondary electron imaging
difficult with an ordinary Everhart-Thornley detector (a scintillator-photomultiplier in the chamber). A few high end
SEMs are equipped with a detector above the objective lens or can be equipped with a microchannel plate on the pole-
piece. These types of detectors are a great advantage for lithography since they allow the operator to decrease the
working distance, and thus the spot size, while keeping the sample flat and in focus.

With patterning speed limited by beam settling and bus speed, it is clear that inexpensive SEM conversions cannot
match the high speed writing of dedicated e-beam systems. However, a SEM based lithography system can provide
adequate results for a wide variety of applications, at a small fraction of the cost of a dedicated system. The number of
applications is limited by stitching, alignment, and automation. Practical applications include small numbers of quantum
devices (metal lines, junctions, SQUIDs, split gates), small numbers of transistors, small area gratings, small masks,
tests of resists, and direct deposition. The main limitations with SEM lithography are observed with writing over large
areas, or when deflection speed and throughput are critical. Specifically, difficulties with stitching and/or distortions due
to the electron optics of the microscope can become significant. SEMs are not practical for most mask making,
integration of many devices over many fields, large area gratings, multifield optical devices, or any application
requiring a large substrate.
LABS for 2013
Opt307/407

Lab #2

Sample Preparation for Conductors and Insulators

1. Cut pieces of paper and apply to a sample stub as directed


2. Sputter coat half of the paper with gold as directed
3. Review loading of samples into the SEM chamber
4. Load both paper and circuit-on-ceramic sample
5. Image (using secondary electrons) each area of both samples at high, medium and low
accelerating voltages at about 10,000x.
6. Record micrographs and observations (esp. concerning the influence of the beam on the
coated and uncoated materials).

Questions to ponder:
Did the metal film help with charging and imaging?
Is the coating process conformal? Why? How do you know?
What kind of artifacts might be caused by sample preps of this type?
What is charging and how does it affect the images?
Are there other methods to reduce charging?
Consider interaction volume in your discussion of charging…how does it influence
mitigation techniques?
OPT307/407
Lab #3

Biological Specimen Preparation


In this lab you will prepare two pieces of mold found on an orange rind. One piece will be dehydrated
via the critical point drying (CPD) method; the other piece will be dehydrated using the HMDS method.
Before you come into the lab the pieces will be fixed using 2.5% glutaraldehyde solution. In
addition, each piece will be water-exchanged with ethanol in a graded series
(30%/50%/70%/95%/100%).
An ethanol-wet sample will be put into a CPD basket and dried using the procedure at the CPD
dryer in Wilmot 216. Record your observations.
An additional ethanol wet sample will have been placed into HMDS solution and air-dried
before you come into the lab. A simple air-dried section will be ready as well. These two
specimens are coated and ready to observe.

You will mount your CPD dried sample on a SEM stub and coat it with gold as learned last week.
View all three samples in the SEM and record your imaging results of structures characteristic of
the sample. Identify the parts and compare the preservation qualities between samples.

Questions to ponder:
Why must ethanol be used as an intermediate fluid in the dehydration processes?
Why does air-drying work after HMDS processing?
What would happen to surface details of the specimen if allowed to simply air-dry w/o dehydration?
Can you think of samples other than biologicals that would be dried effectively using one
of the two techniques in this lab?
What other transitional fluids would work for the CPD process? Why don't we use them in
the EM lab?
Opt307/407

Lab #4

Survey of Imaging Modes in the SEM

In this lab we will be observing the differences between some standard imaging modes in the SEM.
To do this we will use a suite of prepared samples as follows:

Secondary Electron Imaging: Metallization on silicon (single ~1um Au line)


Backscattered Electron Imaging: Paint film cross-section (layered structure)
Geological thin section (different grain types)

Each of these samples will be viewed at both long and short working distances (20mm and 5mm
for SE imaging and 20mm and 10mm for BSE) and with both high and low accelerating voltages
(20 kV and 5 kV). You should end up with a series of images describing your observations. We
will also be exploring the use of scan rotation and signal mixing to make the images more
appealing.

Questions to ponder:
a. How do the different imaging modes affect the result of your investigation?
b. What would be your series of steps in attempting to get the “best” image of an
unknown material?
c. What would you do to mitigate the formation and/or visualization of a dark square
on your sample?
d. What effect does accelerating voltage have on artifacts in the images (like
charging or drift)?
e. Is signal processing the same as image processing for SEM images?
f. How does BSE imaging differ from SE imaging? Why? When would you prefer
BSE over SE?
Opt307/407

Lab #5

X-ray Microanalysis in the SEM

In this lab we will be looking at the x-ray emission process in the SEM. To become familiar with
the hardware and software we will look at the same suite of samples used in the Imaging Modes
lab (last week). Since you already know quite a bit about the samples it should be easy to
“verify” their compositions using the spectrometer as follows:

1. Choose microscope operational parameters so that x-rays are produced in an energy range
convenient for analysis.
2. Collect representative spectra at various accelerating voltages.
3. Perform elemental ID.
4. Look for peak overlap and mis-IDs.
5. Perform standardless quantitative data reduction on a polished sample.
6. Collect an elemental map using the Edax software on the rock sample or as directed.

Questions to ponder:
a. How do different SEM conditions affect the x-ray emission process and the data
obtained?
b. What does sample-detector geometry have to do with the data obtained?
c. What is the ZAF correction for?
d. What pitfalls and/or limitations are implied in SEM-EDS work?
Opt307/407

Lab #6
TEM and Electron Diffraction

Summary:

You will be instructed on how to:


Prepare a sample for TEM observation
Load samples into the TEM
Start the TEM
Image using bright field and dark field techniques
Record an electron micrograph
Use STEM imaging to look at gold nanoparticles
Use EDS detector in STEM mode

Sample used:

You will be given a TEM grid with a carbon support film that has been overcoated with a
thin layer of gold using the SEM lab sputtering system. You will also be given a silicon
window grid for HR-TEM imaging.

Procedures:

Bright field image (gold particles and silicon windows)


Dark field image
Diffraction image
STEM image

Offline: calculate the camera constant.


Opt307/407

Lab #7
Image Analysis with ImageJ Program

Summary:

You will be instructed on how to:


Download ImageJ to your computer
Install certain macros
Open an image file (obtained in previous labs)
Perform specified analyses

Sample used:

This lab will not require SEM or TEM time. You will process images already obtained,
and ones provided.

Procedures:

Perform a PSD on your images. This will require some setup, as explained in lecture and
during the lab. Count the image features in the biopsy sample as instructed (RBCs, open loop
areas, etc).
OPT307/407

Lab#8
Fabrication of Fresnel Lens using E-beam lithography
You will be provided:
A silicon and glass wafer with PMMA overcoat
A procedure for beam writing
A procedure for pattern development

You are to:


Put samples in SEM
Write the patterns
Develop the patterns (if necessary)
Replace the samples in SEM and image them

Keep sample in good shape for AFM imaging in next lab.

Questions to ponder:
What advantages does FIB writing have over EBeam Lithography? What disadvantages?
What limits the resolution of the writing techniques?
Why is small interline spacing a challenge?
Where is EBeam Lithography used routinely?
Where does ablated material go during FIB?
What needs to be balanced during FIB induced deposition of Pt?
Lab #9
Atomic Force Microscopy
Goal:

To learn how an AFM works and to get some “tapping mode” scans of the patterns made in the
Lithography lab.

Procedure:

1. Read handout on AFM technique.


2. Meet in the AFM alcove of the prep room
3. Non-contact mode tip will be pre-installed
4. Use the checklist to setup a scan of the patterns
5. Make 20um, 5um, and 1um scan lengths
6. Provide 2D and 3D resultant images
7. Provide a roughness measurement
8. Compare the images to the corresponding SEM images

Questions to ponder:

1. What are some limitations of the AFM technique?


2. How does it compare (advantages and disadvantages) to SEM?
3. Atomic resolution images are possible using AFM. What situations limit atomic
resolution results?
4. Many AFMs are deployed in otherwise EM labs…why do you suppose this is?
