Animation of the dispersion of light as it travels through a triangular prism

Spectroscopy was originally the study of the interaction between radiation and matter as a function of wavelength (λ). In fact, historically, spectroscopy referred to the use of visible light dispersed according to its wavelength, e.g. by a prism. Later the concept was expanded greatly to comprise any measurement of a quantity as a function of either wavelength or frequency. Thus, it can also refer to a response to an alternating field or varying frequency (ν). A further extension of the scope of the definition added energy (E) as a variable, once the very close relationship E = hν for photons was realized (h is the Planck constant). A plot of the response as a function of wavelength, or more commonly frequency, is referred to as a spectrum; see also spectral linewidth. Spectrometry is the spectroscopic technique used to assess the concentration or amount of a given chemical (atomic, molecular, or ionic) species. In this case, the instrument that performs such measurements is a spectrometer, spectrophotometer, or spectrograph. Spectroscopy/spectrometry is often used in physical and analytical chemistry for the identification of substances through the spectrum emitted from or absorbed by them. Spectroscopy/spectrometry is also heavily used in astronomy and remote sensing. Most large telescopes have spectrometers, which are used either to measure the chemical composition and physical properties of astronomical objects or to measure their velocities from the Doppler shift of their spectral lines.
Classification of methods

Nature of excitation measured

Electromagnetic spectroscopy involves interactions of matter with electromagnetic radiation, such as light.

Electron spectroscopy involves interactions with electron beams. Auger spectroscopy involves inducing the Auger effect with an electron beam; in this case the measurement typically involves the kinetic energy of the electron as the variable.

Acoustic spectroscopy involves the frequency of sound.

Dielectric spectroscopy involves the frequency of an external electrical field.

Mechanical spectroscopy involves the frequency of an external mechanical stress, e.g. a torsion applied to a piece of material.
Measurement process

Absorption spectroscopy uses the range of the electromagnetic spectrum in which a substance absorbs. This includes atomic absorption spectroscopy and various molecular techniques, such as infrared, ultraviolet-visible and microwave spectroscopy.

Emission spectroscopy uses the range of the electromagnetic spectrum in which a substance radiates (emits). The substance first must absorb energy. This energy can be from a variety of sources, which determines the name of the subsequent emission, like luminescence. Molecular luminescence techniques include spectrofluorimetry.

Scattering spectroscopy measures the amount of light that a substance scatters at certain wavelengths, incident angles, and polarization angles. One of the most useful applications of light scattering spectroscopy is Raman spectroscopy.
Fluorescence
Spectrum of light from a fluorescent lamp showing prominent mercury peaks

Main article: Fluorescence spectroscopy

Fluorescence spectroscopy uses higher-energy photons to excite a sample, which then emits lower-energy photons. This technique has become popular for its
biochemical and medical applications, and can be used for confocal microscopy, fluorescence resonance energy transfer, and fluorescence lifetime imaging.
X-ray
Main articles: X-ray spectroscopy and X-ray crystallography

When X-rays of sufficient frequency (energy) interact with a substance, inner-shell electrons in the atom are excited to outer empty orbitals, or they may be removed completely, ionizing the atom. The inner-shell "hole" will then be filled by electrons from outer orbitals. The energy available in this de-excitation process is emitted as radiation (fluorescence) or will remove other less-bound electrons from the atom (Auger effect). The absorption or emission frequencies (energies) are characteristic of the specific atom. In addition, for a specific atom, small frequency (energy) variations that are characteristic of the chemical bonding occur. With a suitable apparatus, these characteristic X-ray frequencies or Auger electron energies can be measured. X-ray absorption and emission spectroscopy is used in chemistry and materials science to determine elemental composition and chemical bonding. X-ray crystallography is a scattering process; crystalline materials scatter X-rays at well-defined angles. If the wavelength of the incident X-rays is known, this allows calculation of the distances between planes of atoms within the crystal. The intensities of the scattered X-rays give information about the atomic positions and allow the arrangement of the atoms within the crystal structure to be calculated. However, the X-ray light is then not dispersed according to its wavelength, which is set at a given value, and X-ray diffraction is thus not a spectroscopy.
Flame
Liquid solution samples are aspirated into a burner or nebulizer/burner combination, desolvated, atomized, and sometimes excited to a higher-energy electronic state. The use of a flame during analysis requires fuel and oxidant, typically in the form of gases. Common fuel gases are acetylene (ethyne) or hydrogen. Common oxidant gases are oxygen, air, or nitrous oxide. These methods are often capable of analyzing metallic element analytes at part-per-million, part-per-billion, or possibly lower concentrations. Light detectors are needed to detect the light carrying the analysis information from the flame.
Atomic emission spectroscopy - This method uses flame excitation; atoms are excited by the heat of the flame to emit light. It commonly uses a total consumption burner with a round burning outlet. A higher-temperature flame than in atomic absorption spectroscopy (AA) is typically used to produce excitation of analyte atoms. Since analyte atoms are excited by the heat of the flame, no special elemental lamps to shine into the flame are needed. A high-resolution polychromator can be used to produce an emission intensity vs. wavelength spectrum over a range of wavelengths showing multiple element excitation lines, meaning multiple elements can be detected in one run. Alternatively, a monochromator can be set at one wavelength to concentrate on analysis of a single element at a certain emission line. Plasma emission spectroscopy is a more modern version of this method. See Flame emission spectroscopy for more details.

Atomic absorption spectroscopy (often called AA) - This method commonly uses a pre-burner nebulizer (or nebulizing chamber) to create a sample mist and a slot-shaped burner that gives a longer-pathlength flame. The temperature of the flame is low enough that the flame itself does not excite sample atoms from their ground state. The nebulizer and flame are used to desolvate and atomize the sample, but the excitation of the analyte atoms is done by lamps shining through the flame at various wavelengths for each type of analyte. In AA, the amount of light absorbed after going through the flame determines the amount of analyte in the sample. A graphite furnace for heating the sample to desolvate and atomize it is commonly used for greater sensitivity. The graphite furnace method can also analyze some solid or slurry samples. Because of its good sensitivity and selectivity, it is still a commonly used method of analysis for certain trace elements in aqueous (and other liquid) samples.

Atomic fluorescence spectroscopy - This method commonly uses a burner with a round burning outlet. The flame is used to desolvate and atomize the sample, but a lamp shines light at a specific wavelength into the flame to excite the analyte atoms in it. The atoms of certain elements can then fluoresce, emitting light in a different direction. The intensity of this fluorescence is used to quantify the amount of analyte element in the sample. A graphite furnace can also be used for atomic fluorescence spectroscopy. This method is not as commonly used as atomic absorption or plasma emission spectroscopy.
Plasma emission spectroscopy - In some ways similar to flame atomic emission spectroscopy, it has largely replaced it.
A direct-current plasma (DCP) is created by an electrical discharge between two electrodes. A plasma support gas is necessary, and Ar is common. Samples can be deposited on one of the electrodes or, if conducting, can make up one electrode.
Glow discharge optical emission spectrometry (GD-OES)
Inductively coupled plasma atomic emission spectrometry (ICP-AES)
Laser-induced breakdown spectroscopy (LIBS), also called laser-induced plasma spectrometry (LIPS)
Microwave-induced plasma (MIP)
Spark or arc (emission) spectroscopy - is used for the analysis of metallic elements in solid samples. For non-conductive materials, a sample is ground with graphite powder to make it conductive. In traditional arc spectroscopy methods, a sample of the solid was commonly ground up and destroyed during analysis. An electric arc or spark is passed through the sample, heating it to a high temperature to excite the atoms in it. The excited analyte atoms glow, emitting light at various wavelengths that can be detected by common spectroscopic methods. Since the conditions producing the arc emission typically are not controlled quantitatively, the analysis is qualitative. Nowadays, however, spark sources with controlled discharges under an argon atmosphere make the method eminently quantitative, and it is widely used in the production control laboratories of foundries and steel mills.
Visible
Many atoms emit or absorb visible light. In order to obtain a fine line spectrum, the atoms must be in the gas phase, which means that the substance has to be vaporised. The spectrum is studied in absorption or emission. Visible absorption spectroscopy is often combined with UV absorption spectroscopy in UV/Vis spectroscopy. Although this form may be uncommon, as the human eye is a similar indicator, it still proves useful when distinguishing colours.
Ultraviolet
All atoms absorb in the ultraviolet (UV) region because these photons are energetic enough to excite outer electrons. If the frequency is high enough, photoionization takes place. UV spectroscopy is also used to quantify protein and DNA concentration, as well as the ratio of protein to DNA concentration, in a solution. Several amino acids commonly found in proteins, such as tryptophan, absorb light in the 280 nm range, and DNA absorbs light in the 260 nm range. For this reason, the ratio of 260/280 nm absorbance is a good general indicator of the relative purity of a solution in terms of these two macromolecules. Reasonable estimates of protein or DNA concentration can also be made this way using Beer's law.
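As a rough illustration of the 260/280 purity check and the Beer's-law estimate described above, the Python sketch below uses the common rule-of-thumb conversion factor of about 50 ng/µL per absorbance unit for double-stranded DNA at a 1 cm path; the factor and the sample readings are assumptions, not values from the text.

```python
# Hypothetical sketch: DNA quantification from UV absorbance readings.
# Assumes the common rule of thumb that A260 = 1.0 corresponds to roughly
# 50 ng/uL of double-stranded DNA at a 1 cm path length.

def dna_concentration_ng_per_ul(a260, path_cm=1.0, factor=50.0):
    """Approximate dsDNA concentration via Beer's-law scaling."""
    return a260 / path_cm * factor

def purity_ratio(a260, a280):
    """260/280 absorbance ratio; values near 1.8 suggest relatively pure DNA."""
    return a260 / a280

conc = dna_concentration_ng_per_ul(0.45)   # 22.5 ng/uL
ratio = purity_ratio(0.45, 0.25)           # 1.8
```

A real protocol would also blank against the solvent and correct for turbidity, which this sketch omits.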
Infrared
Main article: Infrared spectroscopy

Infrared spectroscopy offers the possibility to measure different types of interatomic bond vibrations at different frequencies. Especially in organic chemistry, the analysis of IR absorption spectra shows what types of bonds are present in the sample. It is also an important method for analysing polymers and constituents like fillers, pigments and plasticizers.
Near Infrared (NIR)

Near-infrared spectroscopy is used in chemical imaging/hyperspectral imaging of intact organisms,[2][3][4] plastics, textiles, insect detection, forensic lab applications, crime detection and various military applications.
Raman
Main article: Raman spectroscopy

Raman spectroscopy uses the inelastic scattering of light to analyse vibrational and rotational modes of molecules. The resulting 'fingerprints' are an aid to analysis.
Photoemission
Main article: Photoemission
Mössbauer
Transmission or conversion-electron (CEMS) modes of Mössbauer spectroscopy probe the properties of specific isotope nuclei in different atomic environments by analyzing the resonant absorption of gamma rays of characteristic energy, an effect known as the Mössbauer effect.
Ultraviolet-visible spectroscopy
From Wikipedia, the free encyclopedia
Beckman DU640 UV/Vis spectrophotometer.

Ultraviolet-visible spectroscopy or ultraviolet-visible spectrophotometry (UV-Vis or UV/Vis) refers to absorption spectroscopy in the ultraviolet-visible spectral region. This means it uses light in the visible and adjacent (near-UV and near-infrared (NIR)) ranges. The absorption in the visible range directly affects the perceived colour of the chemicals involved. In this region of the electromagnetic spectrum, molecules undergo electronic transitions. This technique is complementary to fluorescence spectroscopy, in that fluorescence deals with transitions from the excited state to the ground state, while absorption measures transitions from the ground state to the excited state.[1]
Applications
An example of a UV/Vis readout

UV/Vis spectroscopy is routinely used in the quantitative determination of solutions of transition metal ions and highly conjugated organic compounds.
Solutions of transition metal ions can be coloured (i.e., absorb visible light) because d electrons within the metal atoms can be excited from one electronic state to another. The colour of metal ion solutions is strongly affected by the presence of other species, such as certain anions or ligands. For instance, the colour of a dilute solution of copper sulfate is a very light blue; adding ammonia intensifies the colour and changes the wavelength of maximum absorption (λmax). Organic compounds, especially those with a high degree of conjugation, also absorb light in the UV or visible regions of the electromagnetic spectrum. The solvents for these determinations are often water for water-soluble compounds, or ethanol for organic-soluble compounds. (Organic solvents may have significant UV absorption; not all solvents are suitable for use in UV spectroscopy. Ethanol absorbs very weakly at most wavelengths.) Solvent polarity and pH can affect the absorption spectrum of an organic compound. Tyrosine, for example, increases in absorption maxima and molar extinction coefficient when pH increases from 6 to 13 or when solvent polarity decreases. While charge transfer complexes also give rise to colours, the colours are often too intense to be used for quantitative measurement.
Beer-Lambert law

The Beer-Lambert law states that the absorbance of a solution is directly proportional to the concentration of the absorbing species in the solution and the path length. Thus, for a fixed path length, UV/Vis spectroscopy can be used to determine the concentration of the absorber in a solution. It is necessary to know how quickly the absorbance changes with concentration. This can be taken from references (tables of molar extinction coefficients), or more accurately, determined from a calibration curve. A UV/Vis spectrophotometer may be used as a detector for HPLC. The presence of an analyte gives a response assumed to be proportional to the concentration. For accurate results, the instrument's response to the analyte in the unknown should be compared
with the response to a standard; this is very similar to the use of calibration curves. The response (e.g., peak height) for a particular concentration is known as the response factor. The wavelengths of absorption peaks can be correlated with the types of bonds in a given molecule and are valuable in determining the functional groups within a molecule. The Woodward-Fieser rules, for instance, are a set of empirical observations used to predict λmax, the wavelength of the most intense UV/Vis absorption, for conjugated organic compounds such as dienes and ketones. The spectrum alone is not, however, a specific test for any given sample. The nature of the solvent, the pH of the solution, temperature, high electrolyte concentrations, and the presence of interfering substances can influence the absorption spectrum. Experimental variations such as the slit width (effective bandwidth) of the spectrophotometer will also alter the spectrum. To apply UV/Vis spectroscopy to analysis, these variables must be controlled or accounted for in order to identify the substances present.
The law is usually written as

A = log10(I0/I) = ε c L

where A is the measured absorbance, I0 is the intensity of the incident light at a given wavelength, I is the transmitted intensity, L the pathlength through the sample, and c the concentration of the absorbing species. For each species and wavelength, ε is a constant known as the molar absorptivity or extinction coefficient. This constant is a fundamental molecular property in a given solvent, at a particular temperature and pressure, and has units of 1/(M·cm) or often AU/(M·cm). The absorbance and extinction are sometimes defined in terms of the natural logarithm instead of the base-10 logarithm. The Beer-Lambert law is useful for characterizing many compounds but does not hold as a universal relationship for the concentration and absorption of all substances. A second-order polynomial relationship between absorption and concentration is sometimes encountered for very large, complex molecules such as organic dyes (Xylenol Orange or Neutral Red, for example).
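A minimal sketch of the relationship, assuming base-10 absorbance; the intensities, molar absorptivity, and path length are invented for illustration:

```python
import math

# Sketch of the Beer-Lambert law A = log10(I0/I) = epsilon * c * L,
# with invented example values.

def absorbance(i0, i):
    """Base-10 absorbance from incident (i0) and transmitted (i) intensity."""
    return math.log10(i0 / i)

def concentration(a, epsilon, path_cm):
    """Concentration in M, given absorbance, epsilon (M^-1 cm^-1) and path (cm)."""
    return a / (epsilon * path_cm)

a = absorbance(100.0, 10.0)                        # A = 1.0
c = concentration(a, epsilon=5000.0, path_cm=1.0)  # 2e-4 M
```

In practice ε would come from tables or a calibration curve, as the text notes.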
Solutions that are not homogeneous can show deviations from the Beer-Lambert law because of the phenomenon of absorption flattening. This can happen, for instance, where the absorbing substance is located within suspended particles.[3] The deviations will be most noticeable under conditions of low concentration and high absorbance. Diluting a sample by a factor of 10 has the same effect as shortening the path length by a factor of 10; if cells of different path lengths are available, testing whether this relationship holds true is one way to judge whether absorption flattening is occurring. The reference describes a way to correct for this deviation.
A = -log10(%T / 100%)
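A small sketch of the standard conversion between percent transmittance and base-10 absorbance; the example values are illustrative:

```python
import math

# Conversion between percent transmittance and absorbance,
# A = -log10(%T / 100), and its inverse.

def absorbance_from_percent_t(percent_t):
    return -math.log10(percent_t / 100.0)

def percent_t_from_absorbance(a):
    return 100.0 * 10.0 ** (-a)

absorbance_from_percent_t(10.0)   # 1.0: 10% transmission = 1 absorbance unit
```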
Ultraviolet-visible spectrophotometer

The basic parts of a spectrophotometer are a light source, a holder for the sample, a diffraction grating or monochromator to separate the different wavelengths of light, and a detector. The radiation source is often a tungsten filament (300-2500 nm), a deuterium arc lamp, which is continuous over the ultraviolet region (190-400 nm), or, more recently, light-emitting diodes (LEDs) and xenon arc lamps[4] for the visible wavelengths. The detector is typically a photodiode or a CCD. Photodiodes are used with monochromators, which filter the light so that only light of a single wavelength reaches the detector. Diffraction gratings are used with CCDs, which collect light of different wavelengths on different pixels.
A spectrophotometer can be either single beam or double beam. In a single-beam instrument (such as the Spectronic 20), all of the light passes through the sample cell. I0 must be measured by removing the sample. This was the earliest design, but it is still in common use in both teaching and industrial labs. In a double-beam instrument, the light is split into two beams before it reaches the sample. One beam is used as the reference; the other beam passes through the sample. The reference beam intensity is taken as 100% transmission (or 0 absorbance), and the measurement displayed is the ratio of the two beam intensities. Some double-beam instruments have two detectors (photodiodes), and the sample and reference beam are measured at the same time. In other instruments, the two beams pass through a beam chopper, which blocks one beam at a time. The detector alternates between measuring the sample beam and the reference beam in synchronism with the chopper. There may also be one or more dark intervals in the chopper cycle. In this case the measured beam intensities may be corrected by subtracting the intensity measured in the dark interval before the ratio is taken.

Samples for UV/Vis spectrophotometry are most often liquids, although the absorbance of gases and even of solids can also be measured. Samples are typically placed in a transparent cell, known as a cuvette. Cuvettes are typically rectangular in shape, commonly with an internal width of 1 cm. (This width becomes the path length, L, in the Beer-Lambert law.) Test tubes can also be used as cuvettes in some instruments. The type of sample container used must allow radiation to pass over the spectral region of interest. The most widely applicable cuvettes are made of high-quality fused silica or quartz glass because these are transparent throughout the UV, visible and near-infrared regions.
Glass and plastic cuvettes are also common, although glass and most plastics absorb in the UV, which limits their usefulness to visible wavelengths.[5] A complete spectrum of the absorption at all wavelengths of interest can often be produced directly by a more sophisticated spectrophotometer. In simpler instruments the absorption is determined one wavelength at a time and then compiled into a spectrum by the operator. A standardized spectrum is formed by removing the concentration dependence and determining the extinction coefficient (ε) as a function of wavelength.
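The standardization step can be sketched as follows; the wavelengths, absorbances, concentration, and path length are invented for illustration:

```python
# Sketch: converting a measured absorbance spectrum into a standardized
# extinction-coefficient spectrum by dividing out concentration and path
# length (epsilon = A / (c * l)). All numbers are illustrative.

def extinction_spectrum(spectrum, conc_molar, path_cm):
    """Map {wavelength_nm: absorbance} -> {wavelength_nm: epsilon}."""
    return {wl: a / (conc_molar * path_cm) for wl, a in spectrum.items()}

measured = {260: 0.50, 280: 0.30}   # absorbance vs wavelength (nm)
eps = extinction_spectrum(measured, conc_molar=1e-4, path_cm=1.0)
# eps[260] is about 5000 M^-1 cm^-1
```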
Calibration curve
A calibration curve plot showing limit of detection (LOD), limit of quantification (LOQ), dynamic range, and limit of linearity (LOL).

In analytical chemistry, a calibration curve is a general method for determining the concentration of a substance in an unknown sample by comparing the unknown to a set of standard samples of known concentration.[1] A calibration curve is one approach to the problem of instrument calibration; other approaches may mix the standard into the unknown, giving an internal standard. The calibration curve is a plot of how the instrumental response, the so-called analytical signal, changes with the concentration of the analyte (the substance to be measured). The operator prepares a series of standards across a range of concentrations near the expected concentration of analyte in the unknown. The concentrations of the standards must lie within the working range of the technique (instrumentation) they are using (see figure).[2] Analyzing each of these standards using the chosen technique will produce a series of measurements. For most analyses a plot of instrument response vs. analyte concentration will show a linear relationship. The operator can measure the response of the unknown and, using the calibration curve, can interpolate to find the concentration of analyte.
The data - the concentrations of the analyte and the instrument response for each standard - can be fit to a straight line, using linear regression analysis. This yields a model described by the equation y = mx + y0, where y is the instrument response, m represents the sensitivity, and y0 is a constant that describes the background. The analyte concentration (x) of unknown samples may be calculated from this equation. Many different variables can be used as the analytical signal. For instance, chromium (III) might be measured using a chemiluminescence method, in an instrument that contains a photomultiplier tube (PMT) as the detector. The detector converts the light produced by the sample into a voltage, which increases with intensity of light. The amount of light measured is the analytical signal. Most analytical techniques use a calibration curve. There are a number of advantages to this approach. First, the calibration curve provides a reliable way to calculate the uncertainty of the concentration calculated from the calibration curve (using the statistics of the least squares line fit to the data). [3] Second, the calibration curve provides data on an empirical relationship. The mechanism for the instrument's response to the analyte may be predicted or understood according to some theoretical model, but most such models have limited value for real samples. (Instrumental response is usually highly dependent on the condition of the analyte, solvents used and impurities it may contain; it could also be affected by external factors such as pressure and temperature.) Many theoretical relationships, such as fluorescence, require the determination of an instrumental constant anyway, by analysis of one or more reference standards; a calibration curve is a convenient extension of this approach. The calibration curve for a particular analyte in a particular (type of) sample provides the empirical relationship needed for those particular measurements. 
The chief disadvantages are that the standards require a supply of the analyte material, preferably of high purity and in known concentration. (Some analytes - e.g., particular proteins - are extremely difficult to obtain pure in sufficient quantity.)
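The least-squares fit and back-interpolation described above can be sketched in a few lines of Python; the standard concentrations and instrument responses below are invented for illustration:

```python
# Sketch of a linear calibration curve: fit y = m*x + y0 to the standards
# by ordinary least squares, then invert the fit to read an unknown
# concentration off the curve.

def fit_line(xs, ys):
    """Return (slope m = sensitivity, intercept y0 = background)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    m = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    return m, mean_y - m * mean_x

def interpolate(signal, m, y0):
    """Concentration of an unknown from its measured signal."""
    return (signal - y0) / m

standards = [1.0, 2.0, 3.0, 4.0]   # known concentrations
responses = [2.1, 4.0, 6.1, 8.0]   # instrument signals for the standards
m, y0 = fit_line(standards, responses)
c_unknown = interpolate(5.05, m, y0)   # ~2.5, in the standards' units
```

A full treatment would also propagate the fit uncertainty to the interpolated concentration, as the text notes.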
Applications
Analysis of concentration
Verifying the proper functioning of an analytical instrument or a sensor device such as an ion-selective electrode
Determining the basic effects of a control treatment (such as a dose-survival curve in clonogenic assay)
Molar absorptivity
The molar absorption coefficient, molar extinction coefficient, or molar absorptivity (ε) is a measurement of how strongly a chemical species absorbs light at a given wavelength. It is an intrinsic property of the species; the actual absorbance, A, of a sample depends on the pathlength l and the concentration c of the species via the Beer-Lambert law, A = εcl. The SI units for ε are m2/mol, but in practice they are usually taken as M-1 cm-1 or L mol-1 cm-1. In older literature, cm2/mol is used. These units look very different, but it is just a matter of expressing volume in cm3 or mL. Different disciplines have different conventions as to whether absorbance is Naperian or decadic, i.e. defined with respect to the transmission via the natural or common logarithm. The molar absorption coefficient is usually decadic,[1] but when ambiguity exists it is best to qualify it as such. The molar extinction coefficient should not be confused with the different definition of "extinction coefficient" used more commonly in physics, namely the imaginary
part of the complex index of refraction (which is unitless). In fact, they have a straightforward but nontrivial relationship; see Mathematical descriptions of opacity. In biochemistry, the extinction coefficient of a protein at 280 nm depends almost exclusively on the number of aromatic residues, particularly tryptophan, and can be predicted from the sequence of amino acids.[2] If the extinction coefficient is known, it can be used to determine the concentration of a protein in solution. When there is more than one absorbing species in a solution, the overall absorbance is the sum of the absorbances for each individual species (X, Y, etc.):

A = l(εX cX + εY cY + ...)

The composition of a mixture of N components can be found by measuring the absorbance at N wavelengths (the values of ε for each compound at these wavelengths must also be known). The wavelengths chosen are usually the wavelengths of maximum absorption (absorbance maxima) for the individual components. None of the wavelengths may be an isosbestic point for a pair of species. For N components with concentrations ci and wavelengths λi, the absorbances are:

A(λi) = l Σj εj(λi) cj

This set of simultaneous equations can be solved to find the concentration of each absorbing species. The molar extinction coefficient (if expressed in units of L mol-1 cm-1) is directly related to the absorption cross section σ (in units of cm2) via the Avogadro constant NA[3]:

σ = (1000 ln 10 / NA) ε ≈ 3.82 × 10-21 ε

The molar absorptivity is also closely related to the mass attenuation coefficient, by the equation (mass attenuation coefficient) × (molar mass) = (molar absorptivity).
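The simultaneous-equation approach above can be sketched for the two-component case; the epsilons and absorbances are invented for illustration, and the 2×2 system is solved by Cramer's rule:

```python
# Sketch: two-component mixture analysis. With the epsilons of species X
# and Y known at two wavelengths, the pair of Beer-Lambert equations
# A(lambda_i) = l * (eX_i*cX + eY_i*cY) is solved for (cX, cY).

def solve_two_component(eps, absorbances, path_cm=1.0):
    """eps = [[eX_1, eY_1], [eX_2, eY_2]] in M^-1 cm^-1; returns (cX, cY) in M."""
    (a11, a12), (a21, a22) = [[e * path_cm for e in row] for row in eps]
    b1, b2 = absorbances
    det = a11 * a22 - a12 * a21          # assumes the wavelengths are not
    return ((b1 * a22 - b2 * a12) / det,  # isosbestic (det != 0)
            (a11 * b2 - a21 * b1) / det)

eps = [[5000.0, 1000.0],   # epsilons of X and Y at wavelength 1
       [2000.0, 4000.0]]   # epsilons of X and Y at wavelength 2
cX, cY = solve_two_component(eps, [0.60, 0.60])   # both 1e-4 M here
```

For N > 2 components the same idea applies with an N×N linear solve.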
Atomic absorption spectroscopy

Atomic absorption spectrometer

In analytical chemistry, atomic absorption spectroscopy is a technique used to determine the concentration of a specific metal element in a sample.[1] The technique can be used to analyze the concentration of over 70 different metals in a solution. Although atomic absorption spectroscopy dates to the nineteenth century, the modern form was largely developed during the 1950s by a team of Australian chemists. They were led by Alan Walsh and worked at the CSIRO (Commonwealth Scientific and Industrial Research Organisation) Division of Chemical Physics in Melbourne, Australia.[2]
Principles
The technique makes use of absorption spectrometry to assess the concentration of an analyte in a sample. It therefore relies heavily on the Beer-Lambert law. In short, the electrons of the atoms in the atomizer can be promoted to higher orbitals for a short time by absorbing a set quantity of energy (i.e. light of a given wavelength). This amount of energy (or wavelength) is specific to a particular electron transition in a particular element, and in general each wavelength corresponds to only one element. This gives the technique its elemental selectivity. As the quantity of energy (the power) put into the flame is known, and the quantity remaining at the other side (at the detector) can be measured, it is possible, from the Beer-Lambert law, to calculate how many of these transitions took place, and thus get a signal that is proportional to the concentration of the element being measured.
Instrumentation
Atomic absorption spectrometer block diagram

In order to analyze a sample for its atomic constituents, it has to be atomized. The sample should then be illuminated by light. The light transmitted is finally measured by a detector. In order to reduce the effect of emission from the atomizer (e.g. black-body radiation) or the environment, a spectrometer is normally used between the atomizer and the detector.
The radiation source chosen has a spectral width narrower than that of the atomic transitions.
Zeeman correction - A magnetic field is used to split the atomic line into two sidebands (see Zeeman effect). These sidebands are close enough to the original wavelength to still overlap with molecular bands, but are far enough not to overlap with the atomic bands. The absorption in the presence and absence of the magnetic field can be compared, the difference being the atomic absorption of interest.

Smith-Hieftje correction (invented by Stanley B. Smith and Gary M. Hieftje) - The hollow cathode lamp is pulsed with a high current, causing a larger atom population and self-absorption during the pulses. This self-absorption causes a broadening of the line and a reduction of the line intensity at the original wavelength.[9]

Deuterium lamp correction - In this case, a separate source (a deuterium lamp) with broad emission is used to measure the background emission. The use of a separate lamp makes this method the least accurate, but its relative simplicity (and the fact that it is the oldest of the three) makes it the most commonly used method.
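The differencing idea shared by these correction schemes can be sketched trivially; the two measured absorbances below are invented values:

```python
# Sketch of the subtraction common to the correction schemes: one reading
# contains atomic + background absorbance, a second (e.g. field on, or the
# deuterium lamp) contains essentially background only; the difference is
# the atomic signal of interest.

def corrected_absorbance(a_total, a_background):
    return a_total - a_background

a_atomic = corrected_absorbance(0.85, 0.25)   # 0.60
```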
Continuum Source AAS
Recent developments in electronics and solid-state detectors have taken the conventional AAS instrument to the next level. High-Resolution Continuum Source AAS (HR-CS AAS) is now available in both flame and graphite furnace modes. Main features of the new instruments:
Single xenon arc lamp - Multiple hollow cathode lamps are no longer needed. With a single xenon arc lamp, all the elements can be measured from 185-900 nm. This makes AAS a true multi-element technique, with analysis of 10 elements per minute.
CCD technology - For the first time in an AAS instrument, CCD chips with 200 pixels are used, each pixel acting as an independent detector.
Simultaneous background correction - Background is now measured simultaneously, compared with the sequential background correction of conventional AAS.
Multiple lines - Extra lines of an analyte are now available, extending the dynamic working range.
Better detection limits - The high intensity of the xenon lamp gives a better signal-to-noise ratio and thus better detection limits, in some cases up to 10 times better than conventional AAS.
Direct analysis of solids - In graphite furnace mode it is now possible to analyze solids directly, avoiding long digestion times.
Ability to measure sulfur and halogens - Some non-metals can now be measured via their molecular bands.
People love dos and don'ts lists. A quick Google search will yield 10.9 million hits for what to do and not do. A quick scan through the endless supply of D&D lists will show that many of the subjects on which people feel the need to provide unsolicited consulting really don't have a defined method of approach beyond common sense. For example, the dos and don'ts of air travel barely stretch outside the realm of common sense. Advice such as "Do not place your firearm in your carry-on luggage" or "Do not smoke while in the aircraft" goes without saying. Then there are the do and do-not-do lists for topics that are highly subjective, such as fashion ("Don't wear white after Labor Day"). Thankfully, in the realm of oil analysis and machinery lubrication, few dos and don'ts can be considered subjective. In this case, we're talking about what to do and not do related to oil sampling for analysis. These simple rules will make or break the integrity of your sample, which is meant to drive your maintenance and reliability decisions.
However, the analysis of a sample greatly depends on the quality of the sample itself. A high-quality sample is one that is rich with data and free from noise. The content of this article is nothing new. Dozens (if not hundreds) of articles, papers and books have offered advice to follow when extracting a sample of oil from a machine for analysis. However, as an industry, we don't seem to get it right. The same rules for oil sampling still apply, just like they always did. Here is the most recent do and do-not-do list for oil sampling from my perspective.
1) DO sample from running machines. DO NOT sample cold systems. This rule goes beyond simply starting the machine to take the sample. The idea behind oil analysis is to capture a snapshot of the system at the time of sampling. The timing of the sampling should be when the system is under the greatest amount of stress. Typically, the best time to sample a system is when it is under normal working load and normal conditions. This can be a tricky task when sampling from a system that continuously cycles during normal production, such as the hydraulic system on an injection molding machine. It's under these conditions that we'll capture a sample that best represents the machine conditions most likely to cause accelerated wear.
2) DO sample upstream of filters and downstream of machine components. Filters are designed to pull out wear debris and contaminants, so sampling downstream of these data-strippers provides no value. However, taking a sample before and after a filter for a simple particle count will allow you to see how well the filter is currently operating. Obviously, we expect the particle count before the filter to be higher than after the filter. If it's not, it's time to change the filter. Condition-based filter changes can be very important for sensitive systems and expensive filters.
3) DO create specific written procedures for each system sampled. DO NOT change sampling methods or locations. Everything we do in oil analysis and machinery lubrication should have a detailed procedure to back up the task. Each maintenance point in the plant should have specific and unique procedures detailing who, what, where, when and how. Oil sampling procedures are no different. We need to identify the sample location, the amount of flush volume, the frequency of sampling, the timing within a cycle to sample, and what tools and accessories to use at that specific sample point based on lubricant type, pressure and the amount of fluid required.
4) DO ensure that sampling valves and sampling devices are thoroughly flushed prior to taking the sample. DO NOT use dirty sampling equipment or reuse sample tubing. Cross-contamination has always been a problem in oil sampling. The truth of the matter is that flushing is an important task that is often overlooked. Failure to flush the sample location properly will produce a sample with a high degree of noise.
Flushing prior to sampling needs to account for the amount of dead space between the sample valve and the active system, multiplied by a factor of 10. If there is a run of pipe 12 inches long between the sample valve and the active system that holds one fluid ounce of oil, you need to flush a minimum of 10 fluid ounces before taking the sample for analysis. Flushing the dead space also flushes your other accessories, such as the sample valve adapter and new tubing.
5) DO ensure that samples are taken at proper frequencies. DO NOT sample as time permits. Many of those responsible for taking oil samples rarely see the results of the analysis. One of the most powerful aspects of oil analysis is identifying a change in the baseline of a sample and understanding the rate at which the change has occurred. For example, a sample of new oil should have zero parts per million (ppm) of iron when tested as the baseline. As regular sampling and analysis continues, we may see the iron level increase. An increase of 10 or 12 ppm per sample may be considered critical; however, if the frequency is not consistent, what is considered normal becomes very subjective. If our frequency of sampling is 12 months, a rise in iron of 12 ppm isn't a major cause for concern. If our frequency is weekly, a rise in iron of 12 ppm is very concerning. Setting up the appropriate sampling frequency and adhering to it will allow for precise analysis and sound maintenance decisions.
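The arithmetic behind rules 4 and 5 above is simple enough to sketch. The function names are mine, not the article's; the 10x factor and the 12 ppm example come from the text.

```python
def min_flush_volume(dead_space_volume: float, factor: float = 10.0) -> float:
    """Rule 4: flush at least 10x the dead space between the valve and the active system."""
    return dead_space_volume * factor

def wear_rate(delta_ppm: float, interval: float) -> float:
    """Rule 5: normalize a ppm increase by the sampling interval so trends are comparable."""
    return delta_ppm / interval

# 1 fl oz of dead space -> flush at least 10 fl oz before sampling.
flush = min_flush_volume(1.0)                 # 10.0 fl oz

# The same 12 ppm rise in iron: mild if spread over 52 weeks,
# alarming if it appeared in a single week.
slow = wear_rate(12.0, 52.0)                  # ~0.23 ppm/week
fast = wear_rate(12.0, 1.0)                   # 12.0 ppm/week
```

Normalizing by the interval is the whole point of a consistent frequency: without it, "12 ppm higher than last time" carries no information about the wear rate.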
6) DO forward samples immediately to the oil analysis lab after sampling. DO NOT wait more than 24 hours to send samples out. As mentioned earlier, oil sampling is much like taking a snapshot of your system at a point in time. The health of a lubricated system can change dramatically in a very short period of time. If a problem is detected in a system, the earlier it is detected, the less catastrophic potential it may have. Jumping on a problem early will not only allow you time to plan for a repair, but the repair will potentially be less significant.
It has been well discussed that oil analysis can effectively monitor three parameters of lubrication. First and foremost, many tests monitor the health of the lubricant. These tests are pretty straightforward, and the results are generally compared to the properties of the new lubricant to gauge how much the in-service lubricant has changed. It is common practice to set condemning limits or monitor the trends for significant changes. Contamination levels are also monitored using oil analysis, and cleanliness targets are usually used to trigger maintenance activities to bring the lubricants back to acceptable levels of contamination. A program based on proactive maintenance principles, which monitors and corrects the parameters mentioned above, will significantly reduce the likelihood of machine wear - the third parameter that can be effectively monitored by oil analysis. While it is certainly true that maintaining a healthy, clean lubricant will minimize machine wear, there are still many wear modes that can arise in spite of these efforts. Misalignment, imbalance, overloading, improper installation, fatigue: the list goes on. Abnormal wear, for whatever reason, happens more often than maintenance professionals like to think. Therefore, it is essential to have a strategy in place to monitor machine wear. Oil analysis remains the best tool in the predictive maintenance toolbox for the early detection of wear problems. Wear metal and wear particle levels will begin to increase well before the machine exhibits symptoms such as vibration, temperature or noise. It is difficult, however, to determine the correct wear metal level thresholds. This is particularly true in industrial applications, where the equipment categories traditionally used are so general. The following gearbox example reinforces this point. The question "How much iron is too much in a gearbox?" sounds simple.
However, when the many different sizes, types, loads, environments and applications are included in that question, it becomes more complex. If the many lubrication systems and lubricant types in use are added to this simple question, it becomes much more complicated. Is it realistic to think that there could be a good answer to such a question? Probably not. Yet in most cases, this is exactly the type of question that is being asked each time an oil sample is taken. If an oil analysis program is expected to detect machine wear problems effectively, better questions must be asked. What really needs to be determined is what is normal; therefore, normal must be defined. According to Webster's Dictionary, normal means "conforming to a usual or typical pattern." How can a pattern in a broad category such as gearboxes be identified? The answer is fairly simple: by evaluating as much data as we possibly can. Before continuing, a review of how wear metals have traditionally been evaluated is needed.
Fixed Limits
Many programs have used fixed limits, giving simple pass or fail criteria for each wear metal. Table 1 is an example of what fixed alarm limits might look like.
The drawback to this type of alarming technique is that it does not account for different contributing factors. Gearboxes come in many sizes and shapes. Some gearboxes are lightly loaded and run at constant speed, which lends such a gearbox to a low wear rate; it might be in serious trouble if the iron level reaches 200 ppm. On the other side of the spectrum, the gearbox could be a low-speed, reversing, heavily loaded gearbox that hasn't had less than 500 ppm of iron in its oil since it was tested at the assembly plant. The lubrication method can have a large impact on wear metal levels as well. Many gearboxes are splash lubricated and hold only a small oil volume. As such, wear metals will build up in the lubricant as time goes on. This situation would reveal a steadily increasing wear metal level and cause a false positive reading when the level breached the fixed alarm. Other gearboxes might be lubricated by a highly filtered circulating system, where wear particles are removed by filtration as rapidly as they are generated. In this case, the wear metal trend might be flat, and a significant change could occur without surpassing the fixed alarm. Such an exception would likely be missed by a fixed-limit system.
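A fixed-limit scheme like the one in Table 1 reduces to a simple threshold lookup. The ppm values below are hypothetical placeholders, not the table's actual limits:

```python
# Hypothetical fixed alarm limits (ppm); Table 1's real values may differ.
FIXED_LIMITS_PPM = {"iron": 100, "copper": 40, "tin": 20}

def exceeded_limits(sample_ppm: dict) -> list:
    """Return the wear metals whose reading is over its fixed limit."""
    return [metal for metal, ppm in sample_ppm.items()
            if ppm > FIXED_LIMITS_PPM.get(metal, float("inf"))]
```

The sketch also makes the drawback concrete: the heavily loaded gearbox that always carries 500 ppm of iron fails this check permanently, while the well-filtered system never trips it no matter how its wear rate changes.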
Trend Analysis
Trend analysis allows the development of a pattern of behavior for a particular unit. If the sampling technique and interval are consistent, regular monitoring of the wear metal levels can effectively detect changes in the wear rate. This helps account for many of the variables within the equipment group. An uncharacteristic increase in iron, for example, would indicate a change in the wear rate. Many techniques can be applied to evaluating trend data, such as averages, standard deviations and linear regression. All are intended to identify a condition that is not normal in relation to the machine's past behavior. What is missing here is identifying what is normal for that machine type. Is it normal for a gearbox like this to generate this level of iron?
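One common way to flag "not normal relative to this machine's own history", using the averages and standard deviations mentioned above. The three-sigma cutoff is my choice of threshold for illustration, not a value from the article:

```python
from statistics import mean, stdev

def is_abnormal(history_ppm: list, latest_ppm: float, n_sigma: float = 3.0) -> bool:
    """Flag a reading that departs from this unit's own trend by n_sigma deviations."""
    mu = mean(history_ppm)
    sigma = max(stdev(history_ppm), 1e-9)  # guard against a perfectly flat trend
    return abs(latest_ppm - mu) > n_sigma * sigma
```

A gearbox whose iron has hovered around 10 ppm would trip this on a jump to 40 ppm, but a steady 500 ppm unit would not, because the comparison is against its own baseline rather than a fixed limit.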
Family Analysis
The answer to that question can be found using family analysis. This is a technique that compares the wear metal levels of groups of similar or identical equipment to identify a usual or typical pattern. It works like this: equipment is grouped together by family. A family may consist of identical equipment located in many plants, such as GE Frame 7 gas turbines in power plants across the country. Equipment might also be grouped based on load, size, lubricant type and operating parameters, such as a group of agitators at a chemical plant. The wear metal data is then evaluated as a whole. Next, the data for each machine is compared to the family wear rate. As an example, let's say that we have a family of 50 motor bearings at a steel mill. The average tin reading is 7 ppm, with 90 percent of the bearings reading less than 10 ppm. It would then be safe to assume that it is normal for these bearings to have less than 10 ppm of tin in their oil. If one of the bearings were found to have 35 ppm of tin, it would be safe to say that its wear rate is abnormal. An effort could then be initiated to determine the cause of the higher wear rate and correct the problem. The problem can be detected, identified and resolved before the damage occurs, avoiding a premature bearing failure and its replacement costs. Family analysis techniques can have a significant impact on both large and small companies' programs. A large company could use such a program to monitor large fleets of similar equipment among its plants, as well as to benchmark the performance of individual plants. Companies with less equipment can compare their wear rates to equipment in many other plants and take advantage of the lab's vast database of equipment data.
Computers are now capable of using statistical calculations, database mining and a rule-based knowledge hierarchy to compare test data against fixed limits, trend analysis and family analysis, and they can select the most appropriate evaluation for each application.
Wear Particle Analysis - Interpretation of Wear Metal Data for In-Service Oil Analysis Using the Spectroil Optical Spectrometer by Spectro Inc.
Topics Covered
Background
Spectroil Optical Spectrometer for In-Service Oil Analysis
Guidelines for Wear Metal and Contaminant Analysis
Case Study - An Effective Spectrometric In-Service Oil Analysis Program for Wear Metal Data Interpretation
Conditions for Wear Metal Analysis
Optical Spectrometer for Analyzing Wear Metals and Contaminants in Oil Samples
Examples of Wear Metals and Contaminants Detection with Spectroil Optical Spectrometer
Summary
Background
Spectro designs, manufactures, sells and services analytical instrumentation. Its mainstays are optical emission spectrometers used in the predictive maintenance of mechanical systems based on oil analysis. Used oil analysis is the basis of a predictive maintenance program. Spectro customers are industrial or military organizations that operate oil-lubricated machines or engines. A second market consists of instruments for the analysis of contaminants in gas turbine fuels. Through the study of lubrication, friction and wear, it is possible to predict future problems in these systems.
The analysis of the in-service lubricant is then used to determine the condition of the system. The analytical data from an oil sample is reviewed and evaluated either manually by a data analyst or, in many instances, semiautomatically or automatically with specialized software such as Labtrak/Prescient. Wear trends may be normal, requiring no corrective action; may exhibit early signs of abnormal wear, which may require more frequent sampling; or may be abnormal, resulting in a recommendation to take corrective maintenance action. The evaluation process is based on knowledge of the metallurgy of the system being monitored and of the fluids used for lubrication within the system.
Equipment operating conditions are a prime factor. The operating environment is also important. For example, a desert location usually causes an increase in silicon readings accompanied by a corresponding increase in wear. Time since the last oil change and oil consumption will affect readings and possibly disguise a wear trend. The length of time the equipment has been in service is extremely important. During the engine break-in period, either when new or after overhaul, wear metal concentrations are abnormally high but are usually no cause for alarm. If equipment is left to stand idle for long periods of time, rust can form and iron readings will increase. Older systems typically generate more wear metals than fairly new ones of the same model. Load on the engine is also a factor, particularly changes in load; increases in wear may be due to an additional load placed on the engine. The chemical composition of the oil and coolant is also important. Metals present may not be due to wear at all, but rather to an oil additive or a coolant leak.
Case Study - An Effective Spectrometric In-Service Oil Analysis Program for Wear Metal Data Interpretation
An effective spectrometric oil analysis program depends on the interpretation of the analytical data on wear metals, contaminants and additives as measured by a spectrometer. The interpretation of analytical results is an evaluation of the maintenance status of an oil-wetted system and includes the laboratory's recommended service action.
Optical Spectrometer for Analyzing Wear Metals and Contaminants in Oil Samples
The spectrometer used to analyze an oil sample is capable of detecting and quantifying the wear metals and contaminants present in the sample. For example, if only iron and aluminum are present in abnormal amounts, the analyst's job is much simpler. The entire system does not have to be torn down and inspected; the inspection can be restricted to those components made of iron and aluminum. Knowing the relative concentrations of the elements further narrows down their possible source.
Examples of Wear Metals and Contaminants Detection with Spectroil Optical Spectrometer
1. An increase in silver and nickel in a certain type of railroad diesel is indicative of bearing wear. If detected early enough, a relatively simple bearing replacement can be made, rather than a $50,000-$100,000 overhaul and crankshaft replacement. 2. An increase in the amount of silicon in conjunction with a corresponding increase in iron, aluminum, and chromium as shown in the adjacent figure is probably caused by dirt ingestion. Air filter replacement and oil change may be the only maintenance action required. However, an increase of silicon alone may mean the oil type was changed to one containing a silicon-based anti-foaming agent and no maintenance action is required. The same trends without an increase of silicon could mean piston wear if the sample came from an internal combustion engine.
3. Sometimes even the slightest increase or presence of an element can be cause for alarm. The bearing shown in the adjacent figure was removed from the gearbox of an aircraft. The presence of only 2 ppm (parts-per-million) of copper was sufficient to warrant maintenance action. The source of the copper was the bronze bearing cage.
4. A trend showing the presence of boron in the lubricating oil of most water-cooled systems would indicate a coolant leak. If left unchecked, the coolant combines with combustion products and forms harmful acids and sludge which attack metal and reduce the ability of the oil to properly lubricate.
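The diagnostic logic in the four examples above is essentially pattern matching on which elements rose together. The toy rule set below paraphrases those cases; it is my sketch, not Spectro's actual analysis software:

```python
def diagnose(elevated_elements) -> str:
    """Map a set of elevated elements to a likely cause (toy rules from the text)."""
    elevated = set(elevated_elements)
    if {"silver", "nickel"} <= elevated:
        return "bearing wear (certain railroad diesels)"
    if {"silicon", "iron", "aluminum", "chromium"} <= elevated:
        return "probable dirt ingestion - check air filter, change oil"
    if elevated == {"silicon"}:
        return "possible silicon-based anti-foam additive - no action"
    if "boron" in elevated:
        return "possible coolant leak (water-cooled systems)"
    return "no rule matched - review manually"
```

Note how rule order matters: silicon alone suggests an additive, but silicon together with iron, aluminum and chromium points to dirt ingestion, so the combined pattern must be checked first.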
Summary
Wear metal analysis determines the condition of the machine, not the condition of the lubricant. An effective in-service oil analysis program therefore also includes lubricant physical property analysis to determine lubricant degradation and contamination. The data from these additional tests can then be used to determine whether a lubricant is still performing as specified or needs to be changed. This additional capability makes oil analysis even more cost-effective and popular in today's unpredictable oil market. The net benefits of a good in-service oil analysis program are: reduced maintenance costs, increased equipment availability, reduced lubricant usage, and improved safety.
Drew Troyer, Noria Corporation Tags: wear debris analysis, oil analysis
Sometimes it is necessary to quickly determine if a machine is generating an unusual amount of wear debris. One way to accomplish this is to simply pull a patch and look at the particles with a simple top-light microscope. Wear particles tend to be shiny because they reflect light, especially freshly generated particles that have not had a chance to oxidize. Sometimes, however, one needs to separate the wear particles from the dirt particles to get a clearer view. Here is an easy on-site method for separating magnetic debris (e.g., iron and steel) that is quick and inexpensive. Once separated, the particles can be viewed under an inexpensive field microscope for evaluation.
1. Mix a measured amount of oil with kerosene (or other suitable solvent), about 50/50, in a flat-bottomed flask or beaker. Be sure the kerosene is dispensed through a filtered dispensing bottle.
2. Hold a disc magnet tightly to the flask bottom and slosh the mixture around for three minutes.
3. Without removing the magnet, decant the liquid and non-magnetic debris out of the flask through a membrane (patch) using a common vacuum apparatus. This leaves the magnetic particles behind.
4. Remove the magnet, add about 50 ml of filtered kerosene or solvent, and slosh around a little more.
5. Next, transfer the magnetic particles to another patch.
6. View the patches using the top-light microscope. The first patch will contain primarily dirt, polymers, rust, oxides, sludge, and non-ferrous wear metals (e.g., copper, babbitt, aluminum, etc.). The second patch will show particles generated from critical surfaces such as shafts, bearings, and gearing.
7. Refer to a wear particle atlas as required to interpret your findings.
This technique is very flexible and provides on-the-spot information. It can be used to verify a high particle count, abnormal vibration readings, rising temperatures, or even a suspected failed filter. Visual confirmation like this increases your confidence in making decisions and recommendations.
I'm sure that you are well aware of the value brought by oil analysis. Used appropriately, there is little doubt that an effective oil analysis program can help identify lubrication-related failures, often before any significant machine wear has occurred. But as a veteran instructor of oil analysis and lubrication courses, I find all too often that companies miss the boat on oil analysis simply because they don't understand what oil analysis can and can't do. So in the interests of setting the record straight, I present to you what I like to call the "five fallacies" of oil analysis - things that are often overlooked or not understood but vital to the long-term benefits of oil analysis as a condition monitoring tool.
Fallacy #1: Reservoir sampling is fine. Fact: Oil analysis, just like real estate, is all about location, location, location. While certain homogeneous properties such as viscosity are unchanged no matter where in the system you sample from, the concentration of suspended material such as wear debris, particles and moisture can vary by several orders of magnitude depending on where you take the sample. For maximum effectiveness, you should take samples immediately downstream of the component(s) of interest or the source of contaminant ingression. In fact, in large circulating systems with significant reservoir capacity, the dilution effects alone can render the identification of active machine wear virtually impossible with reservoir sampling.
Fallacy #2: Routine oil analysis will always find active machine wear. Fact: In oil analysis, size really does matter. Depending on the wear mode and degree of severity, wear particle sizes are often 5 to 10 microns and larger. So, why does this matter? Size is important because the most commonly used test method to assess active machine wear - elemental spectroscopy - has a limit to the size of particles it can detect. Depending on the instrument and methodology, conventional elemental analysis can't detect particles larger than 3 to 8 microns in size, rendering it useless in situations of advanced machine wear, or where the failure mode naturally generates larger particles, such as fatigue or severe sliding wear.
Fallacy #3: Particle counting is proactive. Fact: Particle contamination accounts for 60 to 80 percent of all lubrication-related failures. Because of this, most oil analysis practitioners recommend the use of ISO particle counting to measure fluid cleanliness, believing that particle counting is a proactive means to prevent many failures. But unless you have taken the time to determine exactly how clean each system needs to be and have a plan to address fluid cleanliness levels that are too high, particle counting will have little to no effect at reducing the overall number of machine failures.
Fallacy #4: Water is water is water. Fact: Water, whether from washdown, airborne humidity or the process itself, is a dangerous contaminant. Because of this, all oil analysis labs test for water. However, in many instances, the test methods used by some labs are unable to detect the presence of water until it is five to 10 times higher than recommended for some machines.
Like many oil analysis test parameters, labs have a variety of methods they can use to identify water. The diligent oil analysis end-user should ensure that the test methods used by their lab meet or exceed the minimum required detection limits for each test parameter.
Fallacy #5: Vibration analysis is better at finding failures than oil analysis. Fact: While it's true that some failure mechanisms, such as misalignment, are better detected using vibration, most experts - including those who specialize in vibration analysis - recognize that oil analysis will generally detect active machine wear before vibration analysis. The true value of vibration analysis is its inherent ability to localize the problem (inner race, outer race, cage wear, etc.) rather than any ability to find a problem earlier in the failure cycle. In truth, the combination of oil analysis for early detection with the advanced diagnostic capabilities of vibration analysis makes the benefits of these two techniques far greater when they are treated as teammates rather than opponents.
There you have it - the most misunderstood aspects of oil analysis. Get them wrong and you could be living with a false sense of security. Get them right and you should reap the benefits that many companies get from a well-engineered, reliability-focused oil analysis program. About Mark Barnes Mark Barnes is vice president of Noria Reliability Solutions. In this role, he and his team work on numerous and varied projects in the areas of plant audits and gap analysis, machinery lubrication program design, oil analysis program design, lube PM rationalization and redesign, lubricant storage and handling, contamination control system design, and lubrication and mechanical failure investigations. As a Noria consultant, his client list includes Cargill, Alcoa, International Paper, TXU, Southern Companies, Eaton, BC Hydro and Southern Cal Edison.
Jim Fitch, Noria Corporation Drew Troyer, Noria Corporation Tags: lubricant sampling, oil analysis
Proper oil sampling is critical to an effective oil analysis program. Without a representative sample, further oil analysis endeavors are futile. There are two primary goals in obtaining a representative oil sample. The first goal is to maximize data density. The sample should be taken in a way that ensures there is as much information per milliliter of oil as possible. This information relates to such criteria as cleanliness and dryness of the oil, depletion of additives, and the presence of wear particles being generated by the machine. The second goal is to minimize data disturbance. The sample should be extracted so that the concentration of information is uniform, consistent and representative. It is important to make sure that the sample does not become contaminated during the sampling process. This can distort and disturb the data, making it difficult to distinguish what was originally in the oil from what came into the oil during the sampling process. To ensure good data density and minimum data disturbance in oil sampling, the sampling procedure, sampling device and sampling location should be considered. The procedure by which a sample is drawn is critical to the success of oil analysis. Sampling procedures should be documented and followed uniformly by all members of the oil analysis team. This ensures consistency in oil analysis data and helps to institutionalize oil analysis within the organization. It also provides a recipe for success to new members of the team. The hardware used to extract the sample should not disturb sample quality. It should be easy to use, clean, rugged and cost-effective. In addition, it is important to use the correct bottle type and bottle cleanliness to assure that a representative sample is achieved. A successful oil analysis program requires an investment of time and money to make sure the proper sampling hardware is fitted to the machinery. 
It is important to understand that not all locations in a machine will produce the same data. Some are far richer in information than others. In addition, some machines require multiple sampling locations to answer specific questions related to the machine's condition, usually on an exception basis.
Figure 1. Highly Turbulent Area
Ingression Points. Where possible, sampling ports should be located downstream of the components that wear, and away from areas where particles and moisture ingress. Return lines and drain lines heading back to the tank offer the most representative levels of wear debris and contaminants. Once the fluid reaches the tank, the information becomes diluted.
Filtration. Filters and separators are contaminant removers; therefore, they can remove valuable data from the oil sample. Sampling valves should be located upstream of filters, separators, dehydrators and settling tanks unless the performance of the filter is being specifically evaluated.
Drain Lines. In drain lines where fluids are mixed with air, sampling valves should be located where oil will travel and collect. On horizontal piping, this will be on the underside of the pipe. Sometimes oil traps, like a goose neck, must be installed to concentrate the oil in the area of the sampling port. Circulating systems with specific return lines or drain lines back to a reservoir are the best choice for sampling valves (Figure 2).
Figure 2. Return or Drain Line They allow the sample to be taken before the oil returns to the tank and always before it goes through a filter. If the oil is permitted to return to the tank, then the information in the sample becomes diluted, potentially by thousands of gallons of fluid in large lubricating and hydraulic systems. In addition, debris in the reservoir tends to accumulate over weeks and months and may not accurately represent the current condition of the machine.
Figure 3. Pressurized Lines
Portable High-Pressure Tap Sampling. The uppermost configuration in Figure 3 is a high-pressure zone where a ball valve or needle valve is installed and the outlet is fitted with a piece of stainless steel helical tubing. The purpose of the tubing is to reduce the pressure of the fluid to a safe level before it enters the sampling bottle. A similar effect can be achieved using a small, hand-held pressure reduction valve.
Minimess Tap Sampling. This alternative requires installation of a minimess valve, preferably on an elbow. The sampling bottle has a tube fitted with a probe protruding from its cap. The probe attaches to the minimess port, allowing the oil to flow into the bottle. There is a vent hole in the cap of the sampling bottle so that as the fluid enters the bottle, the air can escape through the vent. For safety, this sampling method requires lower pressures (less than 500 psi).
Ball Valve Tap Sampling. This configuration requires the installation of a ball valve on an elbow. When sampling, the valve should be opened and adequately flushed. Extra flushing is required if the exit extension from the valve is uncapped. Once flushed, the sampling bottle's cap is removed and a sample is collected from the flow stream before closing the valve. Care should be taken when removing the bottle cap to prevent the entry of contamination. This technique is not suitable for high-pressure applications.
Portable Minimess Tap Sampling. This option requires installing a minimess valve onto the female half of a standard quick-connect coupling. This assembly is portable. The male half of the quick-connect is permanently fitted to the pressure line of the machine at the desired sampling location. To sample, the portable female half is screwed or snapped (depending on adapter type) onto the male piece affixed to the machine. As the adapter is threaded onto the minimess valve, a small spring-loaded ball inside the valve is depressed, allowing oil to flow through the valve and into the sample bottle. In many cases, these male quick-connect couplings are preexisting on the equipment. On high-pressure lines, a helical coil or pressure reduction valve, as previously described, should be used.
Figure 5. Pressure or Feed Line

The best place to sample engine crankcase oil is also just before the filter. The sampling valve should be installed between the pump and filter. This sample location is highly preferred over sampling from a drain port or using a vacuum pump and tube inserted down the dipstick port. Many newer model engines come with an appropriately located sample valve right on the filter manifold.
Figure 6. Off-line Sampling

Another example of a wet sump involving circulation is shown in Figure 6, where there is a side loop often referred to as a kidney loop filter. This off-line circulating system provides an ideal location to install a sampling valve between the pump and filter. A ball valve or a minimess valve can be used so that the fluid under pressure flows easily into the sample bottle without disturbing the operating system or filtration system.
Figure 7. Drain Port Tap Sampling

A third option is called drain port vacuum sampling. With this method a minimess valve is installed as previously described, but instead of fluid passing into a sample bottle by gravity, it is assisted by a vacuum sampler. This is particularly helpful where the oil is viscous and difficult to sample through a narrow tube. Still another method for sampling a gearbox or bearing housing is to use a portable oil circulating system such as a filter cart. In this case, the filter cart is attached to the sump (Figure 8).
Figure 8. Portable Off-line Sampling

Here the cart circulates the fluid off the bottom of the sump and back into the sump. To avoid cleaning the oil before sampling, the filters must be bypassed using a directional valve. The fluid should become homogeneous after circulating for about 5 to 15 minutes, depending on the size of the unit, the amount of fluid in the unit, and the flow rate of the filter cart. Once sufficient mixing has occurred, a sample can be taken from the sampling valve (installed between the pump and the filter).
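The circulation-time guidance above can be sketched as simple arithmetic: the time needed scales with sump volume divided by cart flow rate. This minimal Python sketch assumes a target number of complete fluid turnovers (five here is an illustrative value chosen by the author of this sketch, not a figure from the article).

```python
def circulation_minutes(sump_volume_gal: float,
                        cart_flow_gpm: float,
                        target_turnovers: float = 5.0) -> float:
    """Estimate how long a filter cart should circulate a sump before
    sampling: minutes required for the chosen number of complete fluid
    turnovers. The five-turnover default is an assumption for
    illustration only."""
    if cart_flow_gpm <= 0:
        raise ValueError("cart flow rate must be positive")
    return target_turnovers * sump_volume_gal / cart_flow_gpm

# Example: a 10-gallon gearbox sump with a 5 gpm filter cart needs
# 5 * 10 / 5 = 10 minutes, which falls inside the article's
# 5 to 15 minute range.
print(circulation_minutes(10, 5))
```

Larger sumps or slower carts push the estimate toward the upper end of the article's 5 to 15 minute window, which is consistent with its "depending on the size of the unit" caveat.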
frequency must suffer because of production priorities. Neither situation is ideal. Furthermore, particles begin to settle and stratify according to size and density immediately upon shutdown, compromising the quality of oil analysis.

Machine Intrusion. The drop-tube method is intrusive. The machine must be entered to draw a sample. This intrusion introduces the risk of contamination, and there is always the concern that the machine might not be properly restored to run-ready condition before startup. Whenever drop-tube sampling is used, it should be considered a sampling method of last resort. However, there are situations where no other practical method of sampling is available. In situations where drop-tube vacuum sampling must be used on circulating systems, the best sampling location is between the return line and the suction line (Figure 9). This is known as the short circuit.
There is an assortment of sampling bottles that are commonly used in oil analysis. An appropriate bottle needs to be selected for the application and the test that is planned. Several features, including size, material and cleanliness, must be considered when selecting a sample bottle. A number of different-sized sampling bottles are available. They vary from 50 mL (about two ounces of fluid) to a more common 100 to 120 mL bottle. The larger bottle is preferred when tests such as particle count and viscosity analysis are required. Where a considerable number of different tests are required, a 200 mL bottle (or two 100 mL bottles) may be needed.
It is important to coordinate with the laboratory to select the bottle size that will provide a sufficient volume to conduct all the required tests and leave some extra for storage in case a rerun is necessary. Another consideration in selecting the bottle size is that the entire volume of the bottle should not be filled with fluid during the sampling process. Only a portion of the sampling bottle should be filled. The unfilled portion, called the ullage, is needed to allow proper fluid agitation by the laboratory to restore even distribution of suspended particles and water in the sample. The general guidelines for filling bottles are:

- Low Viscosity (ISO VG 32 or less) - Fill to about three-fourths of the total volume.
- Medium Viscosity (ISO VG 32 to ISO VG 100) - Fill to about two-thirds of the total volume.
- High Viscosity (over ISO VG 100) - Fill to about one-half of the total volume.
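The fill guidelines above amount to a simple lookup from ISO viscosity grade to fill fraction. As a minimal sketch (the function names and the bottle-size example are illustrative, not from the article):

```python
def fill_fraction(iso_vg: float) -> float:
    """Suggested fraction of the sample bottle to fill, per the
    viscosity guidelines: 3/4 for low, 2/3 for medium, 1/2 for
    high viscosity oils."""
    if iso_vg <= 32:        # low viscosity (ISO VG 32 or less)
        return 0.75
    elif iso_vg <= 100:     # medium viscosity (ISO VG 32 to 100)
        return 2 / 3
    else:                   # high viscosity (over ISO VG 100)
        return 0.5

def fill_volume_ml(bottle_ml: float, iso_vg: float) -> float:
    """Volume of oil to draw for a given bottle size and ISO grade."""
    return bottle_ml * fill_fraction(iso_vg)

# Example: a common 120 mL bottle with an ISO VG 46 hydraulic fluid
# should receive about two-thirds of its volume, roughly 80 mL.
print(round(fill_volume_ml(120, 46), 1))
```

Note that the article's low and medium bands meet at ISO VG 32; this sketch assigns the boundary grade to the low-viscosity rule, which is a judgment call rather than something the text specifies.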
Bottles are available in several materials. Polyethylene plastic is one of the most common bottle materials. It is an opaque material similar to a plastic milk jug. This type of sampling bottle presents a drawback because the oil can't be visually examined after the sample is obtained. Important oil properties, such as sediment, darkness, brightness, clarity and color, can be immediately learned from a visual inspection. Another material is PET plastic. It is a completely clear, glass-like material and is available in standard-sized bottles. This plastic is compatible with most types of lubricating oils and hydraulic fluids, including synthetics. Of course, glass bottles are also available. These bottles tend to be more expensive, are heavier, and carry the risk of breakage during the sampling process. One advantage of glass bottles is that they can be cleaned and reused, and their cleanliness often exceeds that of plastic bottles. One of the most important considerations in selecting a sampling bottle is to make sure it is sufficiently clean. The bottle's required cleanliness level should be determined in advance. (See the article titled "Bottle Cleanliness: Is a New Standard Needed?" in the March-April 2003 issue of Practicing Oil Analysis magazine for additional information on sample bottle cleanliness.)
Conclusion
All oil analysis tools, techniques and diagnostic processes are worthless if the oil sample fails to effectively represent the actual condition of the oil in service in the machine. Proper sampling procedures are the foundation of an effective oil analysis program. Without good sampling procedures, time and money are wasted, and incorrect conclusions may be reached based upon faulty data. To ensure that an oil analysis program is perceived as valuable and to boost confidence in the program, it is important to determine, understand and practice the processes that are necessary to obtain a representative oil sample.

Editor's Note: This article is an abridged version of Chapter 4 from Oil Analysis Basics, written by Drew Troyer and Jim Fitch and published by Noria Corporation. More information about the book can be obtained by visiting Noria Corporation's online bookstore at www.noria.com/secure/.
Sidebar 1
Immediately after the first baseline samples were taken, a sample point survey was conducted. Shortly thereafter, the report's recommendation of installing pitot tube style sample valves in all of the plant's splash-lubricated gearboxes was implemented, in conjunction with a filtration program. At the prescribed time, these gearboxes were resampled using the new sample valves, and the samples were submitted to the lab for analysis. Of the 28 boxes initially deemed to be critical, 22 were returned as normal.

Editor's Note: The moral of this story is that if you want to get accurate data, particularly where particle counting is a required test, the use of appropriate sample valves is of paramount importance. To receive and act on an analysis report that indicates a critical problem, only to find it was nothing more than poor sampling, is the easiest way to erode confidence in any oil analysis program.