
Spectroscopy

From Wikipedia, the free encyclopedia


Animation of the dispersion of light as it travels through a triangular prism

Spectroscopy was originally the study of the interaction between radiation and matter as a function of wavelength (λ). Historically, spectroscopy referred to the use of visible light dispersed according to its wavelength, e.g. by a prism. Later the concept was expanded greatly to comprise any measurement of a quantity as a function of either wavelength or frequency. Thus, it also can refer to a response to an alternating field or varying frequency (ν). A further extension of the scope of the definition added energy (E) as a variable, once the very close relationship E = hν for photons was realized (h is the Planck constant). A plot of the response as a function of wavelength (or, more commonly, frequency) is referred to as a spectrum; see also spectral linewidth.

Spectrometry is the spectroscopic technique used to assess the concentration or amount of a given chemical (atomic, molecular, or ionic) species. In this case, the instrument that performs such measurements is a spectrometer, spectrophotometer, or spectrograph. Spectroscopy/spectrometry is often used in physical and analytical chemistry for the identification of substances through the spectrum emitted from or absorbed by them. Spectroscopy/spectrometry is also heavily used in astronomy and remote sensing. Most large telescopes have spectrometers, which are used either to measure the chemical composition and physical properties of astronomical objects or to measure their velocities from the Doppler shift of their spectral lines.
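The relation E = hν (equivalently E = hc/λ) ties the energy, frequency, and wavelength scales together. A minimal sketch of the conversion; the 532 nm example value is an assumption for illustration, not from the text:

```python
# Convert a photon's wavelength to its energy via E = h*nu = h*c/lambda.
h = 6.62607015e-34    # Planck constant, J*s
c = 2.99792458e8      # speed of light, m/s
eV = 1.602176634e-19  # joules per electronvolt

def photon_energy_ev(wavelength_nm):
    """Photon energy in electronvolts for a wavelength given in nanometres."""
    return h * c / (wavelength_nm * 1e-9) / eV

# Example: green light at 532 nm carries about 2.33 eV per photon
print(round(photon_energy_ev(532.0), 2))
```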


Classification of methods


Nature of excitation measured

The type of spectroscopy depends on the physical quantity measured. Normally, the quantity that is measured is an intensity of energy, either absorbed or produced.

- Electromagnetic spectroscopy involves interactions of matter with electromagnetic radiation, such as light.
- Electron spectroscopy involves interactions with electron beams. Auger spectroscopy involves inducing the Auger effect with an electron beam; in this case the measurement typically involves the kinetic energy of the electron as the variable.
- Acoustic spectroscopy involves the frequency of sound.
- Dielectric spectroscopy involves the frequency of an external electrical field.
- Mechanical spectroscopy involves the frequency of an external mechanical stress, e.g. a torsion applied to a piece of material.

Measurement process


Most spectroscopic methods are differentiated as either atomic or molecular, based on whether they apply to atoms or molecules. Along with that distinction, they can be classified by the nature of their interaction:

- Absorption spectroscopy uses the range of the electromagnetic spectrum in which a substance absorbs. This includes atomic absorption spectroscopy and various molecular techniques, such as infrared, ultraviolet-visible and microwave spectroscopy.
- Emission spectroscopy uses the range of the electromagnetic spectrum in which a substance radiates (emits). The substance first must absorb energy. This energy can be from a variety of sources, which determines the name of the subsequent emission, like luminescence. Molecular luminescence techniques include spectrofluorimetry.
- Scattering spectroscopy measures the amount of light that a substance scatters at certain wavelengths, incident angles, and polarization angles. One of the most useful applications of light scattering spectroscopy is Raman spectroscopy.

Common types


Absorption

Main article: Absorption spectroscopy

Absorption spectroscopy is a technique in which the power of a beam of light measured before and after interaction with a sample is compared. Specific absorption techniques tend to be referred to by the wavelength of radiation measured, such as ultraviolet, infrared or microwave absorption spectroscopy. Absorption occurs when the energy of the photons matches the energy difference between two states of the material.

Fluorescence

Spectrum of light from a fluorescent lamp showing prominent mercury peaks

Main article: Fluorescence spectroscopy

Fluorescence spectroscopy uses higher-energy photons to excite a sample, which will then emit lower-energy photons. This technique has become popular for its biochemical and medical applications, and can be used for confocal microscopy, fluorescence resonance energy transfer, and fluorescence lifetime imaging.

X-ray

Main articles: X-ray spectroscopy and X-ray crystallography

When X-rays of sufficient frequency (energy) interact with a substance, inner-shell electrons in the atom are excited to outer empty orbitals, or they may be removed completely, ionizing the atom. The inner-shell "hole" will then be filled by electrons from outer orbitals. The energy available in this de-excitation process is emitted as radiation (fluorescence) or will remove other less-bound electrons from the atom (Auger effect). The absorption or emission frequencies (energies) are characteristic of the specific atom. In addition, for a specific atom, small frequency (energy) variations occur that are characteristic of the chemical bonding. With a suitable apparatus, these characteristic X-ray frequencies or Auger electron energies can be measured. X-ray absorption and emission spectroscopy is used in chemistry and materials science to determine elemental composition and chemical bonding.

X-ray crystallography is a scattering process; crystalline materials scatter X-rays at well-defined angles. If the wavelength of the incident X-rays is known, this allows calculation of the distances between planes of atoms within the crystal. The intensities of the scattered X-rays give information about the atomic positions and allow the arrangement of the atoms within the crystal structure to be calculated. However, the X-ray light is then not dispersed according to its wavelength, which is set at a given value, and X-ray diffraction is thus not a spectroscopy.
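The plane-spacing calculation mentioned above follows Bragg's law, nλ = 2d·sin θ. A minimal sketch; the Cu Kα wavelength and the 22.5° angle are assumed example values:

```python
import math

def plane_spacing(wavelength_nm, theta_deg, order=1):
    """Bragg's law: n*lambda = 2*d*sin(theta), solved for the spacing d."""
    return order * wavelength_nm / (2.0 * math.sin(math.radians(theta_deg)))

# Cu K-alpha X-rays (0.154 nm) diffracted at theta = 22.5 degrees
d = plane_spacing(0.154, 22.5)
print(round(d, 3))  # spacing between atomic planes, in nm
```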

Flame

Liquid solution samples are aspirated into a burner or nebulizer/burner combination, desolvated, atomized, and sometimes excited to a higher-energy electronic state. The use of a flame during analysis requires fuel and oxidant, typically in the form of gases. Common fuel gases are acetylene (ethyne) or hydrogen. Common oxidant gases are oxygen, air, or nitrous oxide. These methods are often capable of analyzing metallic-element analytes at part-per-million, part-per-billion, or possibly lower concentrations. Light detectors are needed to detect the light carrying the analytical information from the flame.

Atomic emission spectroscopy - This method uses flame excitation; atoms are excited by the heat of the flame to emit light. It commonly uses a total consumption burner with a round burning outlet. A higher-temperature flame than in atomic absorption spectroscopy (AA) is typically used to excite the analyte atoms. Since the analyte atoms are excited by the heat of the flame, no special elemental lamps shining into the flame are needed. A high-resolution polychromator can be used to produce an emission intensity vs. wavelength spectrum over a range of wavelengths showing multiple element excitation lines, meaning multiple elements can be detected in one run. Alternatively, a monochromator can be set at one wavelength to concentrate on analysis of a single element at a certain emission line. Plasma emission spectroscopy is a more modern version of this method. See Flame emission spectroscopy for more details.

Atomic absorption spectroscopy (often called AA) - This method commonly uses a pre-burner nebulizer (or nebulizing chamber) to create a sample mist and a slot-shaped burner that gives a longer path-length flame. The temperature of the flame is low enough that the flame itself does not excite sample atoms from their ground state. The nebulizer and flame are used to desolvate and atomize the sample, but the excitation of the analyte atoms is done by lamps shining through the flame at various wavelengths for each type of analyte. In AA, the amount of light absorbed after going through the flame determines the amount of analyte in the sample. A graphite furnace for heating the sample to desolvate and atomize it is commonly used for greater sensitivity. The graphite furnace method can also analyze some solid or slurry samples. Because of its good sensitivity and selectivity, it is still a commonly used method of analysis for certain trace elements in aqueous (and other liquid) samples.

Atomic fluorescence spectroscopy - This method commonly uses a burner with a round burning outlet. The flame is used to desolvate and atomize the sample, but a lamp shines light at a specific wavelength into the flame to excite the analyte atoms. The atoms of certain elements can then fluoresce, emitting light in a different direction. The intensity of this fluoresced light is used to quantify the amount of the analyte element in the sample. A graphite furnace can also be used for atomic fluorescence spectroscopy. This method is not as commonly used as atomic absorption or plasma emission spectroscopy.

Plasma emission spectroscopy - In some ways similar to flame atomic emission spectroscopy, it has largely replaced it.

Direct-current plasma (DCP)

A direct-current plasma (DCP) is created by an electrical discharge between two electrodes. A plasma support gas is necessary, and Ar is common. Samples can be deposited on one of the electrodes or, if conducting, can make up one electrode.

Variants include:

- Glow discharge-optical emission spectrometry (GD-OES)
- Inductively coupled plasma-atomic emission spectrometry (ICP-AES)
- Laser-induced breakdown spectroscopy (LIBS), also called laser-induced plasma spectrometry (LIPS)
- Microwave-induced plasma (MIP)

Spark or arc (emission) spectroscopy - This is used for the analysis of metallic elements in solid samples. For non-conductive materials, a sample is ground with graphite powder to make it conductive. In traditional arc spectroscopy methods, a sample of the solid was commonly ground up and destroyed during analysis. An electric arc or spark is passed through the sample, heating it to a high temperature to excite the atoms in it. The excited analyte atoms glow, emitting light at various wavelengths that can be detected by common spectroscopic methods. Since the conditions producing the arc emission typically are not controlled quantitatively, the analysis for the elements is qualitative. Nowadays, spark sources with controlled discharges under an argon atmosphere make this method eminently quantitative, and it is widely used in the production control laboratories of foundries and steel mills.

Visible

Many atoms emit or absorb visible light. In order to obtain a fine line spectrum, the atoms must be in the gas phase, which means the substance has to be vaporised. The spectrum is studied in absorption or emission. Visible absorption spectroscopy is often combined with UV absorption spectroscopy in UV/Vis spectroscopy. Although the human eye itself serves as a rough indicator of visible absorption, the technique still proves useful for distinguishing colours objectively.

Ultraviolet

All atoms absorb in the ultraviolet (UV) region because these photons are energetic enough to excite outer electrons. If the frequency is high enough, photoionization takes place. UV spectroscopy is also used in quantifying protein and DNA concentration, as well as the ratio of protein to DNA concentration, in a solution. Several amino acids usually found in protein, such as tryptophan, absorb light in the 280 nm range, and DNA absorbs light in the 260 nm range. For this reason, the ratio of 260/280 nm absorbance is a good general indicator of the relative purity of a solution in terms of these two macromolecules. Reasonable estimates of protein or DNA concentration can also be made this way using Beer's law.
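The 260/280 purity check and Beer's-law estimate described above can be sketched as follows. The conversion factor of 50 µg/mL per absorbance unit for double-stranded DNA is a widely used rule of thumb, assumed here rather than taken from the text:

```python
def purity_ratio(a260, a280):
    """260/280 absorbance ratio; ~1.8 is commonly taken as 'pure' DNA,
    lower values suggest protein contamination."""
    return a260 / a280

def dsdna_ug_per_ml(a260, dilution_factor=1.0):
    """Rough dsDNA concentration via Beer's law: A260 of 1.0 ~ 50 ug/mL."""
    return a260 * 50.0 * dilution_factor

print(round(purity_ratio(0.90, 0.50), 2))  # 1.8
print(dsdna_ug_per_ml(0.90))               # 45.0 ug/mL
```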

Infrared

Main article: Infrared spectroscopy

Infrared spectroscopy measures different types of interatomic bond vibrations at different frequencies. Especially in organic chemistry, the analysis of IR absorption spectra shows what types of bonds are present in the sample. It is also an important method for analysing polymers and constituents like fillers, pigments and plasticizers.

Near infrared (NIR)

Main article: Near infrared spectroscopy

The near-infrared (NIR) range, immediately beyond the visible wavelength range, is especially important for practical applications because of the much greater penetration depth of NIR radiation into the sample than in the case of the mid-IR range. This also allows large samples to be measured in each scan by NIR spectroscopy. It is currently employed for many practical applications, such as rapid grain analysis, medical diagnosis, pharmaceuticals/medicines,[1] biotechnology, genomics analysis, proteomic analysis, interactomics research, inline textile monitoring, food analysis and chemical imaging/hyperspectral imaging of intact organisms,[2][3][4] plastics, textiles, insect detection, forensic lab applications, crime detection and various military applications.

Raman

Main article: Raman spectroscopy

Raman spectroscopy uses the inelastic scattering of light to analyse vibrational and rotational modes of molecules. The resulting 'fingerprints' are an aid to analysis.

Coherent anti-Stokes Raman spectroscopy (CARS)

Main article: Coherent anti-Stokes Raman spectroscopy

CARS is a recent technique that has high sensitivity and powerful applications for in vivo spectroscopy and imaging.[5]

Nuclear magnetic resonance

Main articles: NMR spectroscopy and 2D-FT NMRI and Spectroscopy

Nuclear magnetic resonance spectroscopy analyzes the magnetic properties of certain atomic nuclei to determine the different local electronic environments of hydrogen, carbon, or other atoms in an organic or other compound. This is used to help determine the structure of the compound.

Photoemission

Main article: Photoemission

Mössbauer

Transmission or conversion-electron (CEMS) modes of Mössbauer spectroscopy probe the properties of specific isotope nuclei in different atomic environments by analyzing the resonant absorption of characteristic-energy gamma rays, known as the Mössbauer effect.

Ultraviolet-visible spectroscopy

Beckman DU640 UV/Vis spectrophotometer.

Ultraviolet-visible spectroscopy or ultraviolet-visible spectrophotometry (UV-Vis or UV/Vis) refers to absorption spectroscopy in the ultraviolet-visible spectral region. This means it uses light in the visible and adjacent (near-UV and near-infrared (NIR)) ranges. The absorption in the visible range directly affects the perceived color of the chemicals involved. In this region of the electromagnetic spectrum, molecules undergo electronic transitions. This technique is complementary to fluorescence spectroscopy, in that fluorescence deals with transitions from the excited state to the ground state, while absorption measures transitions from the ground state to the excited state.[1]


Applications

An example of a UV/Vis readout

UV/Vis spectroscopy is routinely used in the quantitative determination of solutions of transition metal ions and highly conjugated organic compounds.

Solutions of transition metal ions can be colored (i.e., absorb visible light) because d electrons within the metal atoms can be excited from one electronic state to another. The colour of metal ion solutions is strongly affected by the presence of other species, such as certain anions or ligands. For instance, the colour of a dilute solution of copper sulfate is a very light blue; adding ammonia intensifies the colour and changes the wavelength of maximum absorption (λmax).

Organic compounds, especially those with a high degree of conjugation, also absorb light in the UV or visible regions of the electromagnetic spectrum. The solvents for these determinations are often water for water-soluble compounds, or ethanol for organic-soluble compounds. (Organic solvents may have significant UV absorption; not all solvents are suitable for use in UV spectroscopy. Ethanol absorbs very weakly at most wavelengths.) Solvent polarity and pH can affect the absorption spectrum of an organic compound. Tyrosine, for example, increases in absorption maximum and molar extinction coefficient when pH increases from 6 to 13 or when solvent polarity decreases.

While charge transfer complexes also give rise to colours, the colours are often too intense to be used for quantitative measurement.

The Beer-Lambert law states that the absorbance of a solution is directly proportional to the concentration of the absorbing species in the solution and the path length. Thus, for a fixed path length, UV/Vis spectroscopy can be used to determine the concentration of the absorber in a solution. It is necessary to know how quickly the absorbance changes with concentration. This can be taken from references (tables of molar extinction coefficients), or more accurately, determined from a calibration curve.

A UV/Vis spectrophotometer may be used as a detector for HPLC. The presence of an analyte gives a response assumed to be proportional to the concentration. For accurate results, the instrument's response to the analyte in the unknown should be compared with the response to a standard; this is very similar to the use of calibration curves. The response (e.g., peak height) for a particular concentration is known as the response factor.

The wavelengths of absorption peaks can be correlated with the types of bonds in a given molecule and are valuable in determining the functional groups within a molecule. The Woodward-Fieser rules, for instance, are a set of empirical observations used to predict λmax, the wavelength of the most intense UV/Vis absorption, for conjugated organic compounds such as dienes and ketones. The spectrum alone is not, however, a specific test for any given sample. The nature of the solvent, the pH of the solution, temperature, high electrolyte concentrations, and the presence of interfering substances can influence the absorption spectrum. Experimental variations such as the slit width (effective bandwidth) of the spectrophotometer will also alter the spectrum. To apply UV/Vis spectroscopy to analysis, these variables must be controlled or accounted for in order to identify the substances present.

Beer-Lambert law


Main article: Beer-Lambert law

The method is most often used in a quantitative way to determine concentrations of an absorbing species in solution, using the Beer-Lambert law:

A = log10(I0 / I) = ε · c · L,

where A is the measured absorbance, I0 is the intensity of the incident light at a given wavelength, I is the transmitted intensity, L the path length through the sample, and c the concentration of the absorbing species. For each species and wavelength, ε is a constant known as the molar absorptivity or extinction coefficient. This constant is a fundamental molecular property in a given solvent, at a particular temperature and pressure, and has units of 1/(M·cm), often written AU/(M·cm). The absorbance and extinction are sometimes defined in terms of the natural logarithm instead of the base-10 logarithm.

The Beer-Lambert law is useful for characterizing many compounds but does not hold as a universal relationship for the concentration and absorption of all substances. A second-order polynomial relationship between absorption and concentration is sometimes encountered for very large, complex molecules such as organic dyes (Xylenol Orange or Neutral Red, for example).
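Solving the Beer-Lambert law for concentration gives c = A / (ε·L). A minimal sketch; the ε value in the example is an arbitrary assumption:

```python
import math

def absorbance(i0, i):
    """A = log10(I0 / I)."""
    return math.log10(i0 / i)

def concentration_molar(a, epsilon, path_cm=1.0):
    """Beer-Lambert law solved for concentration: c = A / (epsilon * L)."""
    return a / (epsilon * path_cm)

a = absorbance(100.0, 10.0)                  # 90% of the light absorbed -> A = 1.0
c = concentration_molar(a, epsilon=25000.0)  # assumed epsilon, in M^-1 cm^-1
print(a, c)  # A = 1.0, c = 4e-05 M
```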

Practical considerations


The Beer-Lambert law has implicit assumptions that must be met experimentally for it to apply. For instance, the chemical makeup and physical environment of the sample can alter its extinction coefficient. The chemical and physical conditions of a test sample therefore must match reference measurements for conclusions to be valid.

Spectral bandwidth


A given spectrometer has a spectral bandwidth that characterizes how monochromatic the light is. If this bandwidth is comparable to the width of the absorption features, then the measured extinction coefficient will be altered. In most reference measurements, the instrument bandwidth is kept below the width of the spectral lines. When a new material is being measured, it may be necessary to test and verify if the bandwidth is sufficiently narrow. Reducing the spectral bandwidth will reduce the energy passed to the detector and will, therefore, require a longer measurement time to achieve the same signal to noise ratio.

Wavelength error


In liquids, the extinction coefficient usually changes slowly with wavelength. A peak of the absorbance curve (a wavelength where the absorbance reaches a maximum) is where the rate of change in absorbance with wavelength is smallest. Measurements are usually made at a peak to minimize errors produced by wavelength errors in the instrument, that is, errors due to having a different extinction coefficient than assumed.

Stray light

See also: Stray light

Another important factor is the purity of the light used. The most important factor affecting this is the stray light level of the monochromator.[2] The detector used is broadband; it responds to all the light that reaches it. If a significant amount of the light passed through the sample contains wavelengths that have much lower extinction coefficients than the nominal one, the instrument will report an incorrectly low absorbance. Any instrument will reach a point where an increase in sample concentration will not result in an increase in the reported absorbance, because the detector is simply responding to the stray light. In practice the concentration of the sample or the optical path length must be adjusted to place the unknown absorbance within a range that is valid for the instrument. Sometimes an empirical calibration function is developed, using known concentrations of the sample, to allow measurements into the region where the instrument is becoming non-linear. As a rough guide, an instrument with a single monochromator would typically have a stray light level corresponding to about 3 AU, which would make measurements above about 2 AU problematic. A more complex instrument with a double monochromator would have a stray light level corresponding to about 6 AU, which would therefore allow measuring a much wider absorbance range.
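The stray-light ceiling described here can be illustrated with a small simulation: if a fixed fraction s of the source intensity reaches the detector regardless of the sample, the reported absorbance saturates near −log10(s). The stray-light fraction below is an assumed example value, not a figure from the text:

```python
import math

def measured_absorbance(true_a, stray_fraction):
    """Reported absorbance when a fraction `stray_fraction` of I0 bypasses
    the sample: A_meas = -log10((T + s) / (1 + s)), with T = 10**(-A_true)."""
    t = 10.0 ** (-true_a)
    return -math.log10((t + stray_fraction) / (1.0 + stray_fraction))

# With 0.1% stray light (a ~3 AU stray level), a true absorbance of 4
# is reported as only about 2.96 - the reading saturates near 3 AU.
print(round(measured_absorbance(4.0, 1e-3), 2))
```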

Absorption flattening


At sufficiently high concentrations, the absorption bands will saturate and show absorption flattening. The absorption peak appears to flatten because close to 100% of the light is already being absorbed. The concentration at which this occurs depends on the particular compound being measured. One test for this effect is to vary the path length of the measurement. In the Beer-Lambert law, varying concentration and path length has an equivalent effect: diluting a solution by a factor of 10 has the same effect as shortening the path length by a factor of 10. If cells of different path lengths are available, testing whether this relationship holds true is one way to judge if absorption flattening is occurring.

Solutions that are not homogeneous can show deviations from the Beer-Lambert law because of the phenomenon of absorption flattening. This can happen, for instance, where the absorbing substance is located within suspended particles.[3] The deviations will be most noticeable under conditions of low concentration and high absorbance. The reference describes a way to correct for this deviation.

Ultraviolet-visible spectrophotometer


See also: Spectrophotometry

The instrument used in ultraviolet-visible spectroscopy is called a UV/Vis spectrophotometer. It measures the intensity of light passing through a sample (I), and compares it to the intensity of light before it passes through the sample (I0). The ratio I/I0 is called the transmittance, and is usually expressed as a percentage (%T). The absorbance, A, is based on the transmittance:

A = −log10(%T / 100%)
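Absorbance follows from transmittance as A = −log10(%T/100), and the relation inverts as %T = 100·10^(−A). A short sketch of both directions:

```python
import math

def absorbance_from_percent_t(percent_t):
    """A = -log10(%T / 100): 100 %T -> A = 0, 10 %T -> A = 1, 1 %T -> A = 2."""
    return -math.log10(percent_t / 100.0)

def percent_t_from_absorbance(a):
    """Inverse relation: %T = 100 * 10**(-A)."""
    return 100.0 * 10.0 ** (-a)

print(absorbance_from_percent_t(10.0))  # 1.0
print(percent_t_from_absorbance(2.0))   # 1.0 (%T)
```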
The basic parts of a spectrophotometer are a light source, a holder for the sample, a diffraction grating or monochromator to separate the different wavelengths of light, and a detector. The radiation source is often a tungsten filament (300-2500 nm), a deuterium arc lamp, which is continuous over the ultraviolet region (190-400 nm), or, more recently, light-emitting diodes (LEDs) and xenon arc lamps[4] for the visible wavelengths. The detector is typically a photodiode or a CCD. Photodiodes are used with monochromators, which filter the light so that only light of a single wavelength reaches the detector. Diffraction gratings are used with CCDs, which collect light of different wavelengths on different pixels.

Diagram of a single-beam UV/Vis spectrophotometer.

A spectrophotometer can be either single beam or double beam. In a single-beam instrument (such as the Spectronic 20), all of the light passes through the sample cell. I0 must be measured by removing the sample. This was the earliest design, but it is still in common use in both teaching and industrial labs.

In a double-beam instrument, the light is split into two beams before it reaches the sample. One beam is used as the reference; the other beam passes through the sample. The reference beam intensity is taken as 100% transmission (or 0 absorbance), and the measurement displayed is the ratio of the two beam intensities. Some double-beam instruments have two detectors (photodiodes), and the sample and reference beams are measured at the same time. In other instruments, the two beams pass through a beam chopper, which blocks one beam at a time. The detector alternates between measuring the sample beam and the reference beam in synchronism with the chopper. There may also be one or more dark intervals in the chopper cycle. In this case the measured beam intensities may be corrected by subtracting the intensity measured in the dark interval before the ratio is taken.

Samples for UV/Vis spectrophotometry are most often liquids, although the absorbance of gases and even of solids can also be measured. Samples are typically placed in a transparent cell, known as a cuvette. Cuvettes are typically rectangular in shape, commonly with an internal width of 1 cm. (This width becomes the path length, L, in the Beer-Lambert law.) Test tubes can also be used as cuvettes in some instruments. The type of sample container used must allow radiation to pass over the spectral region of interest. The most widely applicable cuvettes are made of high-quality fused silica or quartz glass because these are transparent throughout the UV, visible and near-infrared regions. Glass and plastic cuvettes are also common, although glass and most plastics absorb in the UV, which limits their usefulness to visible wavelengths.[5]

A complete spectrum of the absorption at all wavelengths of interest can often be produced directly by a more sophisticated spectrophotometer. In simpler instruments the absorption is determined one wavelength at a time and then compiled into a spectrum by the operator. A standardized spectrum is formed by removing the concentration dependence and determining the extinction coefficient (ε) as a function of wavelength.

See also

Calibration curve

A calibration curve plot showing limit of detection (LOD), limit of quantification (LOQ), dynamic range, and limit of linearity (LOL).

In analytical chemistry, a calibration curve is a general method for determining the concentration of a substance in an unknown sample by comparing the unknown to a set of standard samples of known concentration.[1] A calibration curve is one approach to the problem of instrument calibration; other approaches may mix the standard into the unknown, giving an internal standard. The calibration curve is a plot of how the instrumental response, the so-called analytical signal, changes with the concentration of the analyte (the substance to be measured). The operator prepares a series of standards across a range of concentrations near the expected concentration of analyte in the unknown. The concentrations of the standards must lie within the working range of the technique (instrumentation) they are using (see figure).[2] Analyzing each of these standards using the chosen technique will produce a series of measurements. For most analyses a plot of instrument response vs. analyte concentration will show a linear relationship. The operator can measure the response of the unknown and, using the calibration curve, can interpolate to find the concentration of analyte.


How to create a calibration curve

The data - the concentrations of the analyte and the instrument response for each standard - can be fit to a straight line, using linear regression analysis. This yields a model described by the equation y = mx + y0, where y is the instrument response, m represents the sensitivity, and y0 is a constant that describes the background. The analyte concentration (x) of unknown samples may be calculated from this equation. Many different variables can be used as the analytical signal. For instance, chromium (III) might be measured using a chemiluminescence method, in an instrument that contains a photomultiplier tube (PMT) as the detector. The detector converts the light produced by the sample into a voltage, which increases with intensity of light. The amount of light measured is the analytical signal. Most analytical techniques use a calibration curve. There are a number of advantages to this approach. First, the calibration curve provides a reliable way to calculate the uncertainty of the concentration calculated from the calibration curve (using the statistics of the least squares line fit to the data). [3] Second, the calibration curve provides data on an empirical relationship. The mechanism for the instrument's response to the analyte may be predicted or understood according to some theoretical model, but most such models have limited value for real samples. (Instrumental response is usually highly dependent on the condition of the analyte, solvents used and impurities it may contain; it could also be affected by external factors such as pressure and temperature.) Many theoretical relationships, such as fluorescence, require the determination of an instrumental constant anyway, by analysis of one or more reference standards; a calibration curve is a convenient extension of this approach. The calibration curve for a particular analyte in a particular (type of) sample provides the empirical relationship needed for those particular measurements. 
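The straight-line fit described above (y = mx + y0, inverted to recover x for an unknown) can be sketched with an ordinary least-squares fit. The standards and the unknown's signal below are hypothetical example data:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = m*x + y0."""
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    sxx = sum((x - xbar) ** 2 for x in xs)
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    m = sxy / sxx
    return m, ybar - m * xbar

# Hypothetical standards: concentration (uM) vs. instrument response
conc = [1.0, 2.0, 4.0, 8.0, 16.0]
signal = [2.1, 4.0, 8.2, 15.9, 32.1]

m, y0 = fit_line(conc, signal)
x_unknown = (10.5 - y0) / m  # invert the model for an unknown reading of 10.5
print(round(m, 2), round(x_unknown, 2))
```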
The chief disadvantage is that the standards require a supply of the analyte material, preferably of high purity and in known concentration. (Some analytes - e.g., particular proteins - are extremely difficult to obtain pure in sufficient quantity.)
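The straight-line fit described above can be sketched in a few lines of NumPy. The standards and responses below are hypothetical values invented for illustration:

```python
import numpy as np

# Hypothetical standards: known concentrations (e.g. mg/L) and the
# instrument response measured for each one.
conc = np.array([0.0, 2.0, 4.0, 6.0, 8.0])         # x: standard concentrations
signal = np.array([0.02, 0.41, 0.80, 1.22, 1.60])  # y: instrument response

# Least-squares fit to the model y = m*x + y0
m, y0 = np.polyfit(conc, signal, 1)

# Invert the model to get the concentration of an unknown sample
y_unknown = 0.95
x_unknown = (y_unknown - y0) / m
print(round(x_unknown, 2))
```

The same inversion applies whatever the analytical signal is (voltage from a PMT, absorbance, etc.), as long as the response is linear over the range of the standards.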

[edit] Error in Calibration Curve Results


As expected, the concentration of the unknown will have some error, which can be calculated from the formula below.[4][5] This formula assumes that a linear relationship is observed for all the standards. Note that the error in the concentration will be minimal if the signal from the unknown lies in the middle of the signals of all the standards (the (yunknown − ȳ)2 term goes to zero if yunknown = ȳ):

    sx = (sy / |m|) × sqrt( 1/k + 1/n + (yunknown − ȳ)2 / (m2 Σ(xi − x̄)2) )

where:
sy is the standard deviation of the residuals of the least-squares fit
m is the slope of the line
b is the y-intercept of the line
n is the number of standards
k is the number of replicate measurements of the unknown
yunknown is the measurement of the unknown
ȳ is the average measurement of the standards
xi are the concentrations of the standards
x̄ is the average concentration of the standards
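A sketch of this uncertainty calculation, assuming the standard least-squares formula for the error of a concentration read off a calibration line; the function name and the data are made up for illustration:

```python
import numpy as np

def calibration_uncertainty(x, y, y_unknown, k=1):
    """Standard deviation of a concentration read off a linear
    calibration curve (usual least-squares error formula)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    m, b = np.polyfit(x, y, 1)
    resid = y - (m * x + b)
    s_y = np.sqrt(np.sum(resid**2) / (n - 2))   # std. dev. of the residuals
    return (s_y / abs(m)) * np.sqrt(
        1.0 / k + 1.0 / n
        + (y_unknown - y.mean())**2 / (m**2 * np.sum((x - x.mean())**2))
    )

# Example with hypothetical standards; the unknown's signal sits at the
# mean of the standards' signals, so the third term under the root vanishes.
sx = calibration_uncertainty([0, 2, 4, 6, 8],
                             [0.02, 0.41, 0.80, 1.22, 1.60],
                             y_unknown=0.81)
```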

[edit] Applications

Analysis of concentration
Verifying the proper functioning of an analytical instrument or a sensor device such as an ion-selective electrode
Determining the basic effects of a control treatment (such as a dose-survival curve in a clonogenic assay)

Molar absorptivity
From Wikipedia, the free encyclopedia

(Redirected from Molar extinction coefficients) Jump to: navigation, search

The molar absorption coefficient, molar extinction coefficient, or molar absorptivity (ε) is a measurement of how strongly a chemical species absorbs light at a given wavelength. It is an intrinsic property of the species; the actual absorbance, A, of a sample is dependent on the pathlength l and the concentration c of the species via the Beer-Lambert law, A = εcl. The SI units for ε are m2/mol, but in practice they are usually taken as M-1 cm-1 or L mol-1 cm-1. In older literature, cm2/mol is used. These units look very different, but it is just a matter of expressing volume in cm3 or mL. Different disciplines have different conventions as to whether absorbance is Naperian or decadic, i.e. defined with respect to the transmission via the natural or common logarithm. The molar absorption coefficient is usually decadic,[1] but when ambiguity exists it is best to qualify it as such.

The molar extinction coefficient should not be confused with the different definition of "extinction coefficient" used more commonly in physics, namely the imaginary part of the complex index of refraction (which is unitless). In fact, they have a straightforward but nontrivial relationship; see Mathematical descriptions of opacity.

In biochemistry, the extinction coefficient of a protein at 280 nm depends almost exclusively on the number of aromatic residues, particularly tryptophan, and can be predicted from the sequence of amino acids.[2] If the extinction coefficient is known, it can be used to determine the concentration of a protein in solution.

When there is more than one absorbing species in a solution, the overall absorbance is the sum of the absorbances for each individual species (X, Y, etc.):

    A = (εX cX + εY cY + ...) l

The composition of a mixture of N components can be found by measuring the absorbance at N wavelengths (the values of ε for each compound at these wavelengths must also be known). The wavelengths chosen are usually the wavelengths of maximum absorption (absorbance maxima) for the individual components, and none of the wavelengths may be an isosbestic point for a pair of species. For N components with concentrations ci and wavelengths λi, absorbances A(λi) are obtained:

    A(λi) = l Σj εj(λi) cj,  for i = 1, ..., N

This set of simultaneous equations can be solved to find the concentration of each absorbing species. The molar extinction coefficient (if expressed in units of L mol-1 cm-1) is directly related to the absorption cross section σ (in units of cm2) via the Avogadro constant NA:[3]

    σ = 1000 ln(10) ε / NA ≈ 3.82 × 10-21 ε

The molar absorptivity is also closely related to the mass attenuation coefficient, by the equation (mass attenuation coefficient) × (molar mass) = (molar absorptivity).
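The mixture case above is just a linear system in the unknown concentrations, so it can be solved directly. A minimal sketch with NumPy, using entirely hypothetical absorptivities and absorbances for a two-component mixture:

```python
import numpy as np

# Hypothetical two-component mixture measured at two wavelengths.
# eps[i][j] = molar absorptivity of component j at wavelength i (L mol^-1 cm^-1)
eps = np.array([[15000.0,  2000.0],    # at lambda_1
                [ 3000.0, 11000.0]])   # at lambda_2
path = 1.0                             # cuvette pathlength, cm
A = np.array([0.80, 0.65])             # measured absorbances at lambda_1, lambda_2

# Beer-Lambert for a mixture: A(lambda_i) = path * sum_j eps[i][j] * c[j],
# i.e. the linear system (eps * path) @ c = A
c = np.linalg.solve(eps * path, A)     # concentrations in mol/L
```

Choosing wavelengths at the absorbance maxima of the individual components keeps the matrix well conditioned; an isosbestic point would make two rows proportional and the system unsolvable.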

Atomic absorption spectroscopy


From Wikipedia, the free encyclopedia

Jump to: navigation, search

Atomic absorption spectrometer

In analytical chemistry, atomic absorption spectroscopy is a technique used to determine the concentration of a specific metal element in a sample.[1] The technique can be used to analyze the concentration of over 70 different metals in a solution. Although atomic absorption spectroscopy dates to the nineteenth century, the modern form was largely developed during the 1950s by a team of Australian chemists. They were led by Alan Walsh and worked at the CSIRO (Commonwealth Scientific and Industrial Research Organisation) Division of Chemical Physics in Melbourne, Australia.[2]

Contents
[hide]

1 Principles
2 Instrumentation
  2.1 Types of atomizer
    2.1.1 Analysis of liquids
  2.2 Radiation sources
    2.2.1 Hollow cathode lamps
    2.2.2 Diode lasers
3 Background correction methods
4 Modern developments
5 Alternatives
6 References
7 See also

[edit] Principles
The technique makes use of absorption spectrometry to assess the concentration of an analyte in a sample. It therefore relies heavily on the Beer-Lambert law. In short, the electrons of the atoms in the atomizer can be promoted to higher orbitals for a short time by absorbing a set quantity of energy (i.e. light of a given wavelength). This amount of energy (or wavelength) is specific to a particular electron transition in a particular element, and in general each wavelength corresponds to only one element. This gives the technique its elemental selectivity. As the quantity of energy (the power) put into the flame is known, and the quantity remaining at the other side (at the detector) can be measured, it is possible, from the Beer-Lambert law, to calculate how many of these transitions took place, and thus obtain a signal that is proportional to the concentration of the element being measured.
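The calculation described here is just the Beer-Lambert law rearranged for concentration. A minimal sketch with made-up numbers (the absorptivity and radiant powers are illustrative only, not real instrument values):

```python
import math

# Beer-Lambert law: A = eps * c * l, with A = -log10(P / P0).
P0 = 1.00    # radiant power entering the flame (arbitrary units)
P = 0.45     # radiant power reaching the detector
eps = 250.0  # molar absorptivity of the transition, L mol^-1 cm^-1 (assumed)
l = 10.0     # path length through the flame, cm

A = -math.log10(P / P0)  # measured absorbance
c = A / (eps * l)        # concentration of the element, mol/L
```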

[edit] Instrumentation

Atomic absorption spectrometer block diagram

In order to analyze a sample for its atomic constituents, it has to be atomized. The sample should then be illuminated by light. The light transmitted is finally measured by a detector. In order to reduce the effect of emission from the atomizer (e.g. black-body radiation) or from the environment, a spectrometer is normally used between the atomizer and the detector.

[edit] Types of atomizer


The technique typically makes use of a flame to atomize the sample,[3] but other atomizers such as a graphite furnace[4] or plasmas, primarily inductively coupled plasmas, are also used.[5] When a flame is used it is laterally long (usually 10 cm) and not deep. The height of the flame above the burner head can be controlled by adjusting the flow of the fuel mixture. A beam of light passes through this flame at its longest axis (the lateral axis) and hits a detector.

[edit] Analysis of liquids


A liquid sample is normally turned into an atomic gas in three steps:
1. Desolvation (drying) - the liquid solvent is evaporated, and the dry sample remains
2. Vaporization (ashing) - the solid sample vaporises to a gas
3. Atomization - the compounds making up the sample are broken into free atoms

[edit] Radiation sources

The radiation source chosen has a spectral width narrower than that of the atomic transitions.

[edit] Hollow cathode lamps


Hollow cathode lamps are the most common radiation source in atomic absorption spectroscopy. Inside the lamp, filled with argon or neon gas, is a cylindrical metal cathode containing the metal for excitation, and an anode. When a high voltage is applied across the anode and cathode, the gas particles are ionized. As the voltage is increased, the gaseous ions acquire enough energy to eject metal atoms from the cathode. Some of these atoms are in excited states and emit light at the frequency characteristic of the metal.[6] Many modern hollow cathode lamps are selective for several metals.

[edit] Diode lasers


Atomic absorption spectroscopy can also be performed by lasers, primarily diode lasers because of their good properties for laser absorption spectrometry.[7] The technique is then either referred to as diode laser atomic absorption spectrometry (DLAAS or DLAS),[8] or, since wavelength modulation most often is employed, wavelength modulation absorption spectrometry.

[edit] Background correction methods


The narrow bandwidth of hollow cathode lamps makes spectral overlap rare; that is, it is unlikely that an absorption line from one element will overlap with that of another. Molecular emission is much broader, so it is more likely that a molecular absorption band will overlap with an atomic line. This can result in artificially high absorption and an improperly high calculated concentration in the solution. Three methods are typically used to correct for this:

Zeeman correction - A magnetic field is used to split the atomic line into two sidebands (see Zeeman effect). These sidebands are close enough to the original wavelength to still overlap with molecular bands, but are far enough not to overlap with the atomic bands. The absorption in the presence and absence of a magnetic field can be compared, the difference being the atomic absorption of interest.

Smith-Hieftje correction (invented by Stanley B. Smith and Gary M. Hieftje) - The hollow cathode lamp is pulsed with high current, causing a larger atom population and self-absorption during the pulses. This self-absorption causes a broadening of the line and a reduction of the line intensity at the original wavelength.[9]

Deuterium lamp correction - In this case, a separate source (a deuterium lamp) with broad emission is used to measure the background emission. The use of a separate lamp makes this method the least accurate, but its relative simplicity (and the fact that it is the oldest of the three) makes it the most commonly used method.

[edit] Modern developments

Continuum Source AAS

Recent developments in electronics and solid-state detectors have taken the conventional AAS instrument to the next level. High Resolution Continuum Source AAS (HR-CS AAS) is now available in both flame and graphite furnace mode. Main features of the new instruments:

Single xenon arc lamp - Multiple hollow cathode lamps are no longer needed; with a single xenon arc lamp, all the elements can be measured from 185-900 nm. This makes AAS a true multi-element technique, with analysis of 10 elements per minute.
CCD technology - For the first time in an AAS, CCD chips with 200 pixels that act as independent detectors are available.
Simultaneous background correction - Background is now measured simultaneously, compared to sequential background correction on conventional AAS.
Multiple lines - Extra lines of an analyte are now available, extending the dynamic working range.
Better detection limits - The high intensity of the xenon lamp gives a better signal-to-noise ratio and thus better detection limits, in some cases up to 10 times better than conventional AAS.
Direct analysis of solids - In graphite furnace mode it is now possible to analyse solids directly, avoiding long digestion times.
Ability to measure sulfur and halogens - It is now possible to measure some non-metals by measuring molecular bands.

Oil Sampling Do's and Don'ts

Jason Kopschinsky, Noria Corporation Tags: lubricant sampling, oil analysis

People love dos and don'ts lists. A quick Google search will yield 10.9 million hits for what to do and not do. A quick scan through the endless supply of D&D lists will show that many of the subjects on which people feel the need to provide unsolicited consulting really don't have a defined method of approach beyond common sense. For example, the dos and don'ts of air travel barely stretch outside the realm of common sense. Advice such as "Do not place your firearm in your carry-on luggage" or "Do not smoke while in the aircraft" goes without saying. Then there are the do and do-not-do lists for topics that are highly subjective, such as fashion ("Don't wear white after Labor Day"). Thankfully, in the realm of oil analysis and machinery lubrication, few dos and don'ts can be considered subjective. In this case, we're talking about what to do and not do related to oil sampling for analysis. These simple rules will make or break the integrity of your sample, which is meant to drive your maintenance and reliability decisions.

Follow the Rules


Oil analysis is a condition monitoring tool designed to monitor:

fluid properties, or the condition of the oil and its additives;
fluid contamination; and
machine wear.

However, the analysis of a sample greatly depends on the quality of the sample itself. A high-quality sample is one that is rich with data and free from noise. The content of this article is nothing new. Dozens (if not hundreds) of articles, papers and books have offered advice to follow when extracting a sample of oil from a machine for analysis. However, as an industry, we don't seem to get it right. The same rules for oil sampling still apply, just like they always did. Here is the most recent do and do-not-do list for oil sampling from my perspective.

1) DO sample from running machines. DO NOT sample cold systems. This rule goes beyond simply starting the machine to take the sample. The ideology behind oil analysis is to capture a snapshot of the system at the time of sampling. The timing of the sampling should be when the system is under the greatest amount of stress. Typically, the best time to sample a system is when it is under normal working load and normal conditions. This can be a tricky task when sampling from a system that continuously cycles during normal production, such as the hydraulic system on an injection molding machine. It's under these conditions that we'll capture a sample that best represents the machine conditions most likely to cause accelerated wear.

2) DO sample upstream of filters and downstream of machine components. Filters are designed to pull out wear debris and contaminants, so sampling downstream of these data-strippers provides no value. However, taking a sample before and after a filter for a simple particle count will allow you to see how well the filter is currently operating. Obviously, we expect the particle count before the filter to be higher than after the filter. If it's not, it's time to change the filter. Condition-based filter changes can be very important for sensitive systems and expensive filters.

3) DO create specific written procedures for each system sampled. DO NOT change sampling methods or locations. Everything we do in oil analysis and machinery lubrication should have a detailed procedure to back up the task. Each maintenance point in the plant should have specific and unique procedures detailing who, what, where, when and how. Oil sampling procedures are no different. We need to identify the sample location, the amount of flush volume, the frequency of sampling, the timing within a cycle to sample, and the tools and accessories to use at that specific sample point based on lubricant type, pressure and amount of fluid required.

4) DO ensure that sampling valves and sampling devices are thoroughly flushed prior to taking the sample. DO NOT use dirty sampling equipment or reuse sample tubing. Cross-contamination has always been a problem in oil sampling. The truth of the matter is that flushing is an important task that is often overlooked. Failure to flush the sample location properly will produce a sample with a high degree of noise. Flushing prior to sampling needs to account for the amount of dead space between the sample valve and the active system, multiplied by a factor of 10. If there is a run of pipe 12 inches long between the sample valve and the active system that holds one fluid ounce of oil, you need to flush a minimum of 10 fluid ounces before taking the sample for analysis. Flushing the dead space also flushes your other accessories, such as your sample valve adapter and new tubing.

5) DO ensure that samples are taken at proper frequencies. DO NOT sample as time permits. Many of those responsible for taking oil samples rarely see the results of the analysis. One of the most powerful aspects of oil analysis is identifying a change in the baseline of a sample and understanding the rate at which the change has occurred. For example, a sample of new oil should have zero parts per million (ppm) of iron when tested as the baseline. As regular sampling and analysis continues, we may see the iron level increase. An increase of 10 or 12 ppm per sample may be considered critical; however, if the frequency is not consistent, what is considered normal becomes very subjective. If our frequency of sampling is 12 months, a rise in iron of 12 ppm isn't a major cause for concern. If our frequency is weekly, a rise in iron of 12 ppm is very concerning. Setting up the appropriate sampling frequency and adhering to it will allow for precise analysis and sound maintenance decisions.
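The frequency argument in rule 5 comes down to normalizing the change by the sampling interval. A tiny illustrative sketch (all values hypothetical):

```python
# Normalizing a wear-metal increase by the sampling interval shows why an
# inconsistent frequency makes "normal" subjective. Values are hypothetical.
def iron_rate_ppm_per_week(delta_ppm, interval_days):
    """Rate of iron increase, normalized to ppm per week."""
    return delta_ppm / (interval_days / 7.0)

# The same 12 ppm rise means very different things at different intervals:
annual = iron_rate_ppm_per_week(12, 365)  # a fraction of a ppm per week
weekly = iron_rate_ppm_per_week(12, 7)    # 12 ppm per week - very concerning
```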

6) DO forward samples immediately to the oil analysis lab after sampling. DO NOT wait more than 24 hours to send samples out. As mentioned earlier, oil sampling is much like taking a snapshot of your system at a point in time. The health of a lubricated system can change dramatically in a very short period of time. If a problem is detected in a system, the earlier it is detected, the less catastrophic potential it may have. Jumping on a problem early will not only allow you time to plan for a repair, but the repair will potentially be less significant.

Apply Them Today


This dos and don'ts list for oil sampling could go on and on. Each and every system will have unique and specific considerations for what to do and not do when sampling. The tips in this article will provide a big bang for your buck and can be applied to most systems in your sampling program.

About the Author

As technical operations director for Noria Reliability Solutions, Jason Kopschinsky's primary responsibilities include managing numerous and varied projects in the areas of plant audits and gap analysis, lubrication process design, oil analysis program design, lube PM rationalization and redesign, lubricant storage and handling, contamination control system design, and lubrication and mechanical failure investigations. Contact Jason at jkopschinsky@noria.com.

Related Articles

A New Approach to Assessing Wear Problems Using Oil Analysis

Darrin Clark, Insight Services Tags: oil analysis

It has been well discussed that oil analysis can effectively monitor three parameters of lubrication. First and foremost, many tests monitor the health of a lubricant. These tests are pretty straightforward, and the results are generally compared to the properties of the new lubricant to gauge how much the in-service lubricant has changed. It is common practice to set condemning limits or monitor the trends for significant changes. Contamination levels are also monitored using oil analysis, and cleanliness targets are usually used to trigger maintenance activities to bring the lubricants back to acceptable levels of contamination. A program based on proactive maintenance principles, which monitors and corrects the parameters mentioned above, will significantly reduce the likelihood of machine wear - the third parameter that can be effectively monitored by oil analysis.

While it is certainly true that maintaining a healthy, clean lubricant will minimize machine wear, there are still many wear modes that can arise in spite of these efforts. Misalignment, imbalance, overloading, improper installation, fatigue - the list goes on. Abnormal wear, for whatever reason, happens more often than maintenance professionals like to think. Therefore, it is essential to have a strategy in place to monitor machine wear. Oil analysis remains the best tool in the predictive maintenance toolbox for the early detection of wear problems. Wear metal and wear particle levels will begin to increase well before the machine exhibits symptoms such as vibration, temperature or noise. It is difficult, however, to determine the correct wear metal level thresholds. This is particularly true in industrial applications, where the equipment categories traditionally used are so general. The following gearbox example reinforces this point. The question "How much iron is too much in a gearbox?" sounds simple.
However, when the many different sizes, types, loads, environments and applications are included in that question, it becomes more complex. If the many lubrication systems and lubricant types in use are added to this simple question, it becomes much more complicated. Is it realistic to think that there could be a good answer to such a question? Probably not. Yet in most cases, this is exactly the type of question that is being asked each time an oil sample is taken. If an oil analysis program is expected to detect machine wear problems effectively, better questions must be asked. What really needs to be determined is what is normal. Therefore, normal must be defined. According to Webster's Dictionary, normal means conforming to "a usual or typical pattern". How can a pattern in a broad category such as gearboxes be identified? The answer is fairly simple: by evaluating as much data as we possibly can. Before continuing, a review of how wear metals have traditionally been evaluated is needed.

Fixed Limits
Many programs have used fixed limits, giving simple pass or fail criteria for each wear metal. Table 1 is an example of what fixed alarm limits might look like.

The drawback to this type of alarming technique is that it does not account for different contributing factors. Gearboxes come in many sizes and shapes. Some gearboxes are lightly loaded and run at constant speed, which would lend such a gearbox to a low wear rate; it might be in serious trouble if the iron level reaches 200 ppm. On the other side of the spectrum, the gearbox could be a low-speed, reversing, heavily loaded gearbox that hasn't had less than 500 ppm of iron in its oil since it was tested at the assembly plant. The lubrication method can have a large impact on wear metal levels as well. Many gearboxes are splash lubricated and hold only a small oil volume. As such, wear metals will build up in the lubricant as time goes on. This situation would reveal a steadily increasing wear metal level and cause a false positive reading when the level broached the fixed alarm. Other gearboxes might be lubricated by a highly filtered circulating system, where wear particles are removed by filtration as rapidly as they are generated. In this case, the wear metal trend might be flat, and a significant change could occur without surpassing the fixed alarm. Such an exception would likely be missed by a fixed limit system.

Trend Analysis
Trend analysis allows the development of a pattern of behavior for a particular unit. If the sampling technique and interval are consistent, regular monitoring of the wear metal levels can effectively monitor for changes in the wear rate. This helps account for many of the variables within the equipment group. An uncharacteristic increase in iron, for example, would indicate a change in the wear rate. Many techniques can be applied to evaluating trend data, such as averages, standard deviations and linear regression. All are intended to identify a condition that is not normal in relation to the machine's past behavior. What is missing here is identifying what is normal for that machine type. Is it normal for a gearbox like this to generate this level of iron?
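One simple way to implement the averages-and-standard-deviations idea mentioned here is a sigma-based alarm against the machine's own history. A sketch with hypothetical data and an assumed three-sigma threshold:

```python
import statistics

def trend_alarm(history_ppm, new_ppm, n_sigma=3.0):
    """Flag a wear-metal reading that departs from the machine's own
    past behavior by more than n_sigma standard deviations."""
    mean = statistics.mean(history_ppm)
    sd = statistics.stdev(history_ppm)
    return abs(new_ppm - mean) > n_sigma * sd

# Hypothetical iron trend for one gearbox (ppm, consistent sampling interval)
iron_history = [48, 52, 50, 55, 51, 53, 49, 54]
trend_alarm(iron_history, 56)  # within the machine's normal scatter
trend_alarm(iron_history, 75)  # uncharacteristic jump in the wear rate
```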

Figure 1. Trend Analysis

Family Analysis
The answer to that question can be found using family analysis. This is a technique that compares the wear metal levels of groups of similar or identical equipment to identify a usual or typical pattern. It works like this: equipment is grouped together by family. A family may consist of identical equipment located in many plants, such as GE Frame 7 gas turbines in power plants across the country. Equipment might also be grouped based on load, size, lubricant type and operating parameters, such as a group of agitators at a chemical plant. The wear metal data is then evaluated as a whole. Next, the data for each machine is compared to the family wear rate. As an example, let's say that we have a family of 50 motor bearings at a steel mill. The average tin reading is 7 ppm, with 90 percent of the bearings reading less than 10 ppm. It would then be safe to assume that it is normal for these bearings to have less than 10 ppm tin in their oil. If one of the bearings were found to have 35 ppm of tin, it would be safe to say that its wear rate is abnormal. An effort could then be initiated to determine the cause of the higher wear rate and correct the problem. The problem can be detected, identified and resolved before the damage occurs, preventing a premature bearing failure and saving replacement costs. Family analysis techniques can have a significant impact on both large and small companies' programs. A large company could use such a program to monitor large fleets of similar equipment among its plants, as well as to benchmark the performance of individual plants. Companies with less equipment can compare their wear rates to equipment in many other plants, taking advantage of the lab's vast database of equipment data.
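A family-analysis check of the kind described can be sketched as a percentile cut-off over the family's readings. The function name, data and 90 percent level below are illustrative assumptions, not a lab's actual method:

```python
def family_limit(readings_ppm, quantile=0.90):
    """Empirical 'normal' upper limit for a family of similar machines:
    roughly the level that `quantile` of the family's readings stay at or below."""
    s = sorted(readings_ppm)
    idx = min(len(s) - 1, int(quantile * len(s)))
    return s[idx]

# Hypothetical tin readings (ppm) from a family of motor bearings
family = [5, 6, 7, 8, 6, 7, 9, 5, 7, 8, 6, 7, 10, 7, 6]
limit = family_limit(family)  # about 9 ppm for this data
is_abnormal = 35 > limit      # a 35 ppm bearing clearly departs from the family
```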

Blending the Techniques


Realistically, the ideal analysis program would be a blend of the three analysis techniques discussed here. The data evaluation process would become extremely cumbersome if each of these were applied to each wear metal for each machine tested in a program. This is where information technology systems come into play. Computers can automate this process so that each parameter is evaluated using numerous techniques, and the best possible analysis is obtained.

Computers are now capable of using statistical calculations, database mining and a rule-based knowledge hierarchy to compare the test data to fixed limits, trend analysis and family analysis, and they can select the most appropriate evaluation for each application.
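A toy version of such a rule-based blend might look like the following; the thresholds, data and flag names are invented for illustration:

```python
import statistics

def evaluate(history_ppm, new_ppm, fixed_limit, family_limit, n_sigma=3.0):
    """Blend the three techniques - fixed limit, the machine's own trend,
    and the family norm - and return the list of checks that fired."""
    flags = []
    if new_ppm > fixed_limit:
        flags.append("fixed limit exceeded")
    if len(history_ppm) >= 3:
        mean = statistics.mean(history_ppm)
        sd = statistics.stdev(history_ppm)
        if sd > 0 and abs(new_ppm - mean) > n_sigma * sd:
            flags.append("trend break")
    if new_ppm > family_limit:
        flags.append("above family norm")
    return flags

# Hypothetical filtered-circulation gearbox: the trend check catches a
# modest rise long before any fixed limit would
evaluate([20, 22, 21, 23, 21], 45, fixed_limit=200, family_limit=60)
```

In the example, only the trend check fires: the reading is far from the machine's own history but still well below both the fixed limit and the family norm, which is exactly the exception a fixed-limit-only system would miss.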

Wear Particle Analysis - Interpretation of Wear Metal Data for In-Service Oil Analysis Using the Spectroil Optical Spectrometer by Spectro Inc.
Topics Covered
Background Spectroil Optical Spectrometer for In-Service Oil Analysis Guidelines for Wear Metal and Contaminant Analysis Case Study - An Effective Spectrometric In-Service Oil Analysis Program for Wear Metal Data Interpretation Conditions for Wear Metal Analysis Optical Spectrometer for Analyzing Wear Metals and Contaminants in Oil Samples Examples of Wear Metals and Contaminants Detection with Spectroil Optical Spectrometer Summary

Background
Spectro designs, manufactures, sells and services analytical instrumentation. Its mainstays are optical emission spectrometers used in the predictive maintenance of mechanical systems based on oil analysis. Used oil analysis is the basis for a predictive maintenance program. Spectro's customers are industrial or military organizations that have oil-lubricated machines or engines. A second market consists of instruments for the analysis of contaminants in gas turbine fuels. Through the study of lubrication, friction and wear, it is possible to predict future problems in such systems.

Spectroil Optical Spectrometer for In-Service Oil Analysis


In an in-service oil analysis program, lubricating oil is used as a diagnostic medium because as it circulates through an oil-wetted system, it carries with it debris from wearing surfaces, as well as contaminants from internal and external sources. Oil samples are taken periodically from the system and sent to a laboratory for analysis.

The analysis of the in-service lubricant is then used to determine the condition of the system. The analytical data from an oil sample is reviewed and evaluated either manually by a data analyst or, in many instances, semiautomatically or automatically with specialized software such as Labtrak/Prescient. Wear trends may be normal, requiring no corrective action; may exhibit early signs of abnormal wear, which may require more frequent sampling; or may be abnormal, resulting in a recommendation to take corrective maintenance action. The evaluation process is based on knowledge of the metallurgy of the system being monitored and the fluids used for lubrication within the system.

Guidelines for Wear Metal and Contaminant Analysis


Wear metal and contaminant guidelines are established for a particular piece of equipment, but a variety of other important factors and variables must also be taken into account.

Equipment operating conditions are a prime factor. The operating environment is also important. For example, a desert location usually causes an increase in silicon readings accompanied by a corresponding increase in wear. Time since the last oil change and oil consumption will affect readings and possibly disguise a wear trend. The length of time the equipment has been in service is extremely important. During the engine break-in period, either when new or after overhaul, wear metal concentrations are abnormally high but are usually no cause for alarm. If equipment is left to stand idle for long periods of time, rust can form and iron readings will increase. Older systems typically generate more wear metals than fairly new ones of the same model. Load on the engine is also a factor, particularly changes in load; increases in wear may be due to an additional load placed on the engine. The chemical composition of the oil and coolant is also important. Metals present may not be due to wear at all, but rather due to an oil additive or a coolant leak.

Case Study - An Effective Spectrometric In-Service Oil Analysis Program for Wear Metal Data Interpretation
An effective spectrometric oil analysis program is dependent upon interpretation of the analytical data on wear metals, contaminants and additives as measured by a spectrometer. The interpretation of analytical results is an evaluation of the maintenance status of an oil-wetted system and consists of the laboratory's recommended service action.

Conditions for Wear Metal Analysis


Wear metal analysis is the backbone of machine condition monitoring programs based on in-service oil analysis. It is effective in the sense that tests can be applied to determine that a system is normal, is nearing a failure mode, or has reached a failure mode. Further damage can thus be contained or avoided through immediate shutdown and repair.

Optical Spectrometer for Analyzing Wear Metals and Contaminants in Oil Samples
The spectrometer used to analyze an oil sample is capable of detecting and quantifying the wear metals and contaminants present in the sample. For example, if only iron and aluminum are present in abnormal amounts, the analyst's job is much simpler. The entire system does not have to be torn down and inspected; the inspection can be restricted to those components made of iron and aluminum. Knowing the relative concentrations of the elements will further narrow down their possible source.

Examples of Wear Metals and Contaminants Detection with Spectroil Optical Spectrometer
1. An increase in silver and nickel in a certain type of railroad diesel is indicative of bearing wear. If detected early enough, a relatively simple bearing replacement can be made, rather than a $50,000-$100,000 overhaul and crankshaft replacement.

2. An increase in the amount of silicon in conjunction with a corresponding increase in iron, aluminum and chromium, as shown in the adjacent figure, is probably caused by dirt ingestion. Air filter replacement and an oil change may be the only maintenance action required. However, an increase of silicon alone may mean the oil type was changed to one containing a silicon-based anti-foaming agent, and no maintenance action is required. The same trends without an increase of silicon could mean piston wear if the sample came from an internal combustion engine.

3. Sometimes even the slightest increase or presence of an element can be cause for alarm. The bearing shown in the adjacent figure was removed from the gearbox of an aircraft. The presence of only 2 ppm (parts-per-million) of copper was sufficient to warrant maintenance action. The source of the copper was the bronze bearing cage.

4. A trend showing the presence of boron in the lubricating oil of most water-cooled systems would indicate a coolant leak. If left unchecked, the coolant combines with combustion products and forms harmful acids and sludge which attack metal and reduce the ability of the oil to properly lubricate.

Summary
Wear metal analysis determines the condition of the machine, not the condition of the lubricant. An effective in-service oil analysis program therefore also includes lubricant physical property analysis to determine lubricant degradation and contamination. The data from these additional tests can then be used to determine whether a lubricant is still performing as specified or needs to be changed. This additional capability makes oil analysis even more cost effective and popular in today's unpredictable oil market. The net benefits of a good in-service oil analysis program are: reduced maintenance costs, increased equipment availability, reduced lubricant usage, and improved safety.

A Quick Test for Wear Debris in Oil

Drew Troyer, Noria Corporation Tags: wear debris analysis, oil analysis

Sometimes it is necessary to quickly determine if a machine is generating an unusual amount of wear debris. One way to accomplish this is to simply pull a patch and look at the particles with a simple top-light microscope. Wear particles tend to be shiny because they reflect light, especially freshly generated particles that have not had a chance to oxidize. Sometimes, however, one needs to separate the wear particles from the dirt particles to get a clearer view. Here is an easy on-site method for separating magnetic debris (e.g., iron and steel) that is quick and inexpensive. Once separated, the particles can be viewed under an inexpensive field microscope for evaluation.

1. Mix a measured amount of oil with kerosene (or other suitable solvent) about 50/50 in a flat-bottomed flask or beaker. Be sure the kerosene is dispensed through a filtered dispensing bottle.

2. Hold a disc magnet tightly to the flask bottom and slosh the mixture around for three minutes.

3. Without removing the magnet, decant the liquid and non-magnetic debris out of the flask through a membrane (patch) using a common vacuum apparatus. This leaves the magnetic particles behind.

4. Remove the magnet, add about 50 mL of filtered kerosene or solvent, and slosh around a little more.

5. Next, transfer the magnetic particles to another patch.

6. View the patches using the top-light microscope. The first patch will be primarily dirt, polymers, rust, oxides, sludge, and non-ferrous wear metals (e.g., copper, babbitt, aluminum). The second patch will show particles generated from critical surfaces such as shafts, bearings, and gearing.

7. Refer to a wear particle atlas as required to interpret your findings.

This technique is very flexible and provides on-the-spot information. It can be used to verify a high particle count, abnormal vibration readings, rising temperatures, or even a suspected failed filter. Visual confirmation like this increases your confidence in making decisions and recommendations.

Related Articles

Lead Particles Predict Bearing Failure

Oil Analysis: Five Things You Didn't Know

Mark Barnes, Noria Corporation Tags: oil analysis

I'm sure that you are well aware of the value brought by oil analysis. Used appropriately, there is little doubt that an effective oil analysis program can help identify lubrication-related failures, often before any significant machine wear has occurred. But as a veteran instructor of oil analysis and lubrication courses, I find all too often that companies miss the boat on oil analysis simply because they don't understand what oil analysis can and can't do. So in the interests of setting the record straight, I present to you what I like to call the "five fallacies" of oil analysis - things that are often overlooked or not understood but vital to the long-term benefits of oil analysis as a condition monitoring tool.

Fallacy #1: Reservoir sampling is fine. Fact: Oil analysis, just like real estate, is all about location, location, location. While certain homogeneous properties such as viscosity are unchanged no matter where in the system you sample from, the concentration of suspended material such as wear debris, particles and moisture can vary by several orders of magnitude depending on where you take the sample. For maximum effectiveness, you should take samples immediately downstream of the component(s) of interest or source of contaminant ingression. In fact, in large circulating systems with significant reservoir capacity, the dilution effects alone can render the identification of active machine wear virtually impossible with reservoir sampling.

Fallacy #2: Routine oil analysis will always find active machine wear. Fact: In oil analysis, size really does matter. Depending on the wear mode and degree of severity, wear particle sizes are often 5 to 10 microns and larger. So, why does this matter? Size is important because the most commonly used test method to assess active machine wear - elemental spectroscopy - has a limit to the size of particles it can detect. Depending on instrument and methodology, conventional elemental analysis can't detect particles larger than 3 to 8 microns in size, rendering it useless in situations of advanced machine wear, or where the failure mode naturally generates larger particles, such as fatigue or severe sliding wear.

Fallacy #3: Particle counting is proactive. Fact: Particle contamination accounts for 60 to 80 percent of all lubrication-related failures. Because of this, most oil analysis practitioners recommend the use of ISO particle counting to measure fluid cleanliness, believing that particle counting is a proactive means to prevent many failures. But unless you have taken the time to determine exactly how clean each system needs to be and have a plan to address fluid cleanliness levels that are too high, particle counting will have little to no effect at reducing the overall number of machine failures.

Fallacy #4: Water is water is water. Fact: Water, in the form of washdown, airborne humidity or from the process itself, is a dangerous contaminant. Because of this, all oil analysis labs test for water. However, in many instances, the test methods used by some labs are unable to detect the presence of water until it is five to 10 times higher than recommended for some machines. Like many oil analysis test parameters, labs have a variety of methods they can use to identify water. The diligent oil analysis end-user should ensure that the test methods used by their lab meet or exceed the minimum required detection limits for each test parameter.

Fallacy #5: Vibration analysis is better at finding failures than oil analysis. Fact: While it's true that some failure mechanisms, such as misalignment, are better detected using vibration, most experts - including those that specialize in vibration analysis - recognize that oil analysis will generally detect active machine wear before vibration analysis. The true value of vibration analysis is its inherent ability to localize the problem (inner race, outer race, cage wear, etc.) rather than any ability to find a problem earlier in the failure cycle. In truth, the combination of oil analysis for early detection coupled with the advanced diagnostic capabilities of vibration analysis makes the benefits of these two techniques far greater when treated as teammates rather than opponents.

There you have it - the most misunderstood aspects of oil analysis. Get them wrong and you could be living with a false sense of security. Get them right and you should reap the benefits that many companies get from a well-engineered, reliability-focused oil analysis program. About Mark Barnes Mark Barnes is vice president of Noria Reliability Solutions. In this role, he and his team work on numerous and varied projects in the areas of plant audits and gap analysis, machinery lubrication program design, oil analysis program design, lube PM rationalization and redesign, lubricant storage and handling, contamination control system design, and lubrication and mechanical failure investigations. As a Noria consultant, his client list includes Cargill, Alcoa, International Paper, TXU, Southern Companies, Eaton, BC Hydro and Southern Cal Edison.

The Basics of Used Oil Sampling

Jim Fitch, Noria Corporation Drew Troyer, Noria Corporation Tags: lubricant sampling, oil analysis

Proper oil sampling is critical to an effective oil analysis program. Without a representative sample, further oil analysis endeavors are futile. There are two primary goals in obtaining a representative oil sample.

The first goal is to maximize data density. The sample should be taken in a way that ensures there is as much information per milliliter of oil as possible. This information relates to such criteria as cleanliness and dryness of the oil, depletion of additives, and the presence of wear particles being generated by the machine.

The second goal is to minimize data disturbance. The sample should be extracted so that the concentration of information is uniform, consistent and representative. It is important to make sure that the sample does not become contaminated during the sampling process. This can distort and disturb the data, making it difficult to distinguish what was originally in the oil from what came into the oil during the sampling process.

To ensure good data density and minimum data disturbance, the sampling procedure, sampling device and sampling location should all be considered. The procedure by which a sample is drawn is critical to the success of oil analysis. Sampling procedures should be documented and followed uniformly by all members of the oil analysis team. This ensures consistency in oil analysis data and helps to institutionalize oil analysis within the organization. It also provides a recipe for success to new members of the team.

The hardware used to extract the sample should not disturb sample quality. It should be easy to use, clean, rugged and cost-effective. In addition, it is important to use the correct bottle type and bottle cleanliness to assure that a representative sample is achieved. A successful oil analysis program requires an investment of time and money to make sure the proper sampling hardware is fitted to the machinery.

It is important to understand that not all locations in a machine will produce the same data. Some are far richer in information than others. In addition, some machines require multiple sampling locations to answer specific questions related to the machine's condition, usually on an exception basis.

Sampling on System Returns


There are several rules for properly locating oil sampling ports on circulating systems. These rules cannot always be precisely followed because of various constraints in the machine's design, application and plant environment. However, the rules outlined below should be followed as closely as possible:

Turbulence. The best sampling locations are highly turbulent areas where the oil is not flowing in a straight line but is turning and rolling in the pipe. Sampling valves located at right angles to the flow path in long straight sections of pipe can result in particle fly-by, which basically leads to a substantial reduction of the particle concentration entering the sample bottle. This can be avoided by locating sampling valves at elbows and sharp bends in the flow line (Figure 1).

Figure 1. Highly Turbulent Area

Ingression Points. Where possible, sampling ports should be located downstream of the components that wear, and away from areas where particles and moisture ingress. Return lines and drain lines heading back to the tank offer the most representative levels of wear debris and contaminants. Once the fluid reaches the tank, the information becomes diluted.

Filtration. Filters and separators are contaminant removers; therefore, they can remove valuable data from the oil sample. Sampling valves should be located upstream of filters, separators, dehydrators and settling tanks unless the performance of the filter is being specifically evaluated.

Drain Lines. In drain lines where fluids are mixed with air, sampling valves should be located where oil will travel and collect. On horizontal piping, this will be on the underside of the pipe. Sometimes oil traps, like a goose neck, must be installed to concentrate the oil in the area of the sampling port. Circulating systems where there are specific return lines or drain lines back to a reservoir are the best choice for sampling valves (Figure 2).

Figure 2. Return or Drain Line

They allow the sample to be taken before the oil returns to the tank and always before it goes through a filter. If the oil is permitted to return to the tank, the information in the sample becomes diluted, potentially by thousands of gallons of fluid in large lubricating and hydraulic systems. In addition, debris in the reservoir tends to accumulate over weeks and months and may not accurately represent the current condition of the machine.

Live Zone Sampling from Circulating Systems


When a sample is taken from a line in a circulating system, it is referred to as a live zone sample. There are things that can be done during the sampling process to improve the quality and effectiveness of live zone oil sampling. These include sampling from the system's turbulent zones where the fluid is moving and the oil is well mixed; sampling downstream of the equipment after it has completed its primary functions, such as lubricating a bearing or a gear, or has passed through a hydraulic pump or actuator; sampling during typical working conditions, on the run and under normal applications; and, where required, employing secondary sampling locations to localize problems.

Just as there are factors that can improve the quality of a sample, there are also factors that can diminish a sample's quality and thus should be avoided. For example, it is important not to sample from dead pipe legs, hose ends and standing pipes where the fluid isn't moving or circulating. Samples should not be collected after filters or separators, or after an oil change, filter change or at some other time when the fluid wouldn't represent typical conditions. Samples should not be taken when the machine is cold and hasn't been operating, or has been idling. In addition, samples should not be taken from laminar flow zones where a lack of fluid turbulence occurs.

Sampling from Pressurized Lines


When samples need to be taken from pressurized feed lines leading to bearings, gears, compressors, pistons, etc., the sampling method is simpler. Figure 3 shows four different configurations.

Figure 3. Pressurized Lines

Portable High-Pressure Tap Sampling. The uppermost configuration in Figure 3 is a high-pressure zone where a ball valve or needle valve is installed and the outlet is fitted with a piece of stainless steel helical tubing. The purpose of the tubing is to reduce the pressure of the fluid to a safe level before it enters the sampling bottle. A similar effect can be achieved using a small, hand-held pressure reduction valve.

Minimess Tap Sampling. This alternative requires installation of a minimess valve, preferably on an elbow. The sampling bottle has a tube fitted with a probe protruding from its cap. The probe attaches to the minimess port, allowing the oil to flow into the bottle. There is a vent hole on the cap of the sampling bottle so that when the fluid enters the bottle, the air can expel or exhaust from the vent hole. This particular sampling method requires lower pressures (less than 500 psi) for safety.

Ball Valve Tap Sampling. This configuration requires the installation of a ball valve on an elbow. When sampling, the valve should be opened and adequately flushed. Extra flushing is required if the exit extension from the valve is uncapped. Once flushed, the sampling bottle's cap is removed and a sample is collected from the flow stream before closing the valve. Care should be taken when removing the bottle cap to prevent the entry of contamination. This technique is not suitable for high-pressure applications.

Portable Minimess Tap Sampling. This option requires installing a minimess valve onto the female half of a standard quick-connect coupling. This assembly is portable. The male half of a quick-connect is permanently fitted to the pressure line of the machine at the desired sampling location. To sample, the portable female half of the quick-connect is screwed or snapped (depending on adapter type) onto the male piece affixed to the machine. As the adapter is threaded onto the minimess valve, a small spring-loaded ball is depressed within the minimess valve, allowing oil to flow through the valve and into the sample bottle. In many cases, these male quick-connect couplings are preexisting on the equipment. A helical coil or pressure reduction valve, previously described, should be used on high-pressure lines.

Sampling from Low-pressure Circulating Lines


Occasionally a drain line, feed line or return line is not sufficiently pressurized to take a sample. In such cases, sampling requires assistance from a vacuum pump equipped with a special adapter allowing it to attach momentarily to a valve, such as a minimess valve. With the adapter threaded onto the minimess valve, fluid can be drawn by vacuum into the bottle (Figure 4).

Figure 4. Drain Line Vacuum Sampling

Sampling Wet Sumps


Frequently, there are applications where a drain line or a return line can't be accessed or no such line exists; these are typically called wet sump systems. Examples of wet sump systems are diesel engines, circulating gearboxes and circulating compressors. In these applications, because there is no return line, fluid must be sampled from the pressurized supply line leading to the gearing and the bearings (Figure 5). The sample should be collected before the filter, if one exists.

Figure 5. Pressure or Feed Line

The best place to sample engine crankcase oil is also just before the filter. The sampling valve should be installed between the pump and filter. This sample location is highly preferred over sampling from a drain port or using a vacuum pump and tube inserted down the dipstick port. Many newer model engines come with an appropriately located sample valve right on the filter manifold.

Figure 6. Off-line Sampling

Another example of a wet sump involving circulation is shown in Figure 6, where there is a side loop that is often referred to as a kidney loop filter. This off-line circulating system provides an ideal location to install a sampling valve between the pump and filter. A ball valve or a minimess valve can be used so that the fluid under pressure flows easily into the sample bottle without disturbing the operating system or filtration system.

Sampling Noncirculating Systems


There are numerous examples where no forced circulation is provided and a sample must be taken from a system's sump or casing. This often must be done with in-service equipment on the run. Ring- or collar-bath-lubricated bearings and splash-lubricated gearboxes are common examples of these systems. All of these situations increase the challenge of obtaining a representative sample.

The most basic method for sampling such sumps is to remove the drain plug from the bottom of the sump, allowing fluid to flow into the sample bottle. For many reasons, this is not an ideal sampling method or location. Most important is the fact that bottom sediment, debris and particles (including water) enter the bottle in concentrations that are not representative of what is experienced near or around where the oil lubricates the machine. The drain plug sampling method should be avoided if at all possible.

Drain port sampling can be greatly improved by using a short length of tubing, extending inward and up into the active moving zone of the sump. The ball valve and tube assembly shown in Figure 7 can, in many cases, be threaded into the drain port and can be easily removed to facilitate draining the oil. Ideally, the tip of the tube, where the oil sample is taken, should be halfway up the oil level, two inches in from the walls and at least two inches from the rotating elements within the sump.
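The tube-tip placement rule of thumb above can be expressed as a small check. This is only a sketch: the units are inches, and the ±10 percent tolerance on "halfway up the oil level" is an assumption introduced here, not a figure from the article.

```python
# Checks a proposed sample-tube tip position against the placement rules
# above: roughly half the oil depth, at least 2 in from the walls, and at
# least 2 in from rotating elements. All dimensions in inches; the 10%
# midpoint tolerance is an illustrative assumption.

def tube_tip_ok(tip_height, oil_level, wall_clearance, rotor_clearance):
    """True if the tip position satisfies the rule-of-thumb placement."""
    midpoint_ok = abs(tip_height - oil_level / 2) <= 0.1 * oil_level
    return midpoint_ok and wall_clearance >= 2 and rotor_clearance >= 2

print(tube_tip_ok(6, 12, 2.5, 3))   # tip at mid-level of a 12 in sump
print(tube_tip_ok(2, 12, 2.5, 3))   # tip too near the bottom sediment
```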

Figure 7. Drain Port Tap Sampling

A third option is called drain port vacuum sampling. With this method, a minimess valve is installed as previously described, but instead of fluid passing into a sample bottle by gravity, it is assisted by a vacuum sampler. This is particularly helpful where the oil is viscous and difficult to sample through a narrow tube. Still another method for sampling a gearbox or bearing housing is to use a portable oil circulating system such as a filter cart. In this case, the filter cart is attached to the sump (Figure 8).

Figure 8. Portable Off-line Sampling

Here the cart circulates the fluid off the bottom of the sump and back into the sump. To keep from cleaning the oil before sampling, the filters must be bypassed using a directional valve. The fluid should become homogeneous when it is circulated for about 5 to 15 minutes, depending on the size of the unit, the amount of fluid in the unit, and the flow rate of the filter cart. Once sufficient mixing has occurred, a sample can be taken from the sampling valve (installed between the pump and the filter).
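The circulation time above scales with sump volume and cart flow rate, which can be estimated as a simple turnover calculation. The three-turnover figure below is an illustrative assumption; the article's own guidance is simply 5 to 15 minutes depending on the unit.

```python
# Rough estimate of how long to run a filter cart (filters bypassed)
# before the sump fluid is well mixed for sampling. The turnover count
# of 3 is a hypothetical assumption, not a figure from the article.

def mixing_time_minutes(sump_volume_gal, cart_flow_gpm, turnovers=3):
    """Minutes to pass the sump volume through the cart `turnovers` times."""
    return turnovers * sump_volume_gal / cart_flow_gpm

# Example: a 40-gallon gearbox sump circulated by a 10 gpm filter cart.
print(mixing_time_minutes(40, 10))  # 12.0 minutes
```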

Drop-tube Vacuum Sampling


One of the most common methods for sampling a bath- or splash-lubricated wet sump is to use the drop-tube vacuum sample method. A tube is inserted through a fill port or dipstick port and lowered into the sump cavity, usually about midway into the oil level. This sampling method has a number of drawbacks and should be avoided if the sampling methods previously described can be used instead. Some of the primary risks and problems associated with drop-tube vacuum sampling are:

Tube Location. A tube that is directed into the fill or dipstick port is extremely difficult to control. The tube's final resting place is hard to predict, resulting in samples being taken from different locations each time. There is also a risk of the tube actually going all the way to the bottom of the sump, where debris and sediment are picked up.

Drop Tube Contamination. There is considerable concern that when the tube is being inserted into the sump, it will scoop up debris from the sides of the casing. Also, the tube itself may be contaminated due to poor cleanliness control when it was produced or while it was stored.

Large Flush Volume. The drop-tube method substantially increases the volume of fluid that must be flushed to obtain a representative sample. For some small sump systems, this practically results in an oil change. In addition, if the removed volume of fluid is not replaced, the machine might be restarted with inadequate lubricant volume.

Particle Fallout. For most systems, a shutdown is required to deploy the drop-tube method. This means that production must be disturbed for the sake of oil sampling, or sampling frequency must suffer because of production priorities. Neither situation is ideal. Furthermore, particles begin to settle and stratify according to size and density immediately upon shutdown, compromising the quality of oil analysis.

Machine Intrusion. The drop-tube method is intrusive. The machine must be entered to draw a sample. This intrusion introduces the risk of contamination, and there is always the concern that the machine might not be properly restored to run-ready condition before startup.

Whenever drop-tube sampling is used, it should be considered a sampling method of last resort. However, there are situations where no other practical method of sampling is available. In situations where drop-tube vacuum sampling must be used on circulating systems, the best sampling location is between the return line and the suction line (Figure 9). This is known as the short circuit.

Figure 9. Drop-tube Vacuum Sampling

Sampling Bottles and Hardware


An important factor in obtaining a representative sample is to make sure the sampling hardware is completely flushed prior to obtaining the sample. This is usually accomplished using a spare bottle to catch the purged fluid. It is important to flush five to 10 times the dead space volume before obtaining the sample. All hardware the oil comes into contact with is considered dead space and must be flushed, including:

1. System dead-legs
2. Sampling ports, valves and adapters
3. The probe on sampling devices
4. Adapters for vacuum sample extraction pumps
5. Plastic tubing used for vacuum pumps (this tubing should not be reused, to avoid cross-contamination between oils)
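The 5-10x flushing guideline above is a simple multiplication once the dead-space volumes are tallied. In the sketch below, the individual dead-space figures are hypothetical examples; only the 10x multiplier comes from the guideline.

```python
# Flush-volume estimate before drawing a sample: the guideline above is
# to purge 5 to 10 times the total dead-space volume of the sampling path.

def flush_volume_ml(dead_space_ml, multiplier=10):
    """Volume to purge through the sampling path, per the 5-10x rule."""
    return dead_space_ml * multiplier

# Hypothetical dead spaces (mL) for a port, a probe and vacuum-pump tubing.
dead_space = {"sampling_port": 2.0, "probe": 1.5, "tubing": 6.5}
print(flush_volume_ml(sum(dead_space.values())))  # 100.0 mL
```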

There is an assortment of sampling bottles commonly used in oil analysis. An appropriate bottle needs to be selected for the application and the tests that are planned. Several features, including size, material and cleanliness, must be considered when selecting a sample bottle. A number of different-sized sampling bottles are available. They vary from 50 mL (about two ounces of fluid) to a more common 100 to 120 mL bottle. The larger bottle is preferred when tests such as particle count and viscosity analysis are required. Where a considerable number of different tests are required, a 200 mL bottle (or two 100 mL bottles) may be needed.

It is important to coordinate with the laboratory to select a bottle size that will provide sufficient volume to conduct all the required tests and leave some extra for storage in case a rerun is necessary. Another consideration in selecting the bottle size is that the entire volume of the bottle should not be filled with fluid during the sampling process. Only a portion of the sampling bottle should be filled. The unfilled portion, called the ullage, is needed to allow proper fluid agitation by the laboratory to restore even distribution of suspended particles and water in the sample. The general guidelines for filling bottles are:

Low Viscosity (ISO VG 32 or less) - Fill to about three-fourths of the total volume.
Medium Viscosity (ISO VG 32 to ISO VG 100) - Fill to about two-thirds of the total volume.
High Viscosity (over ISO VG 100) - Fill to about one-half of the total volume.
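The fill guidelines above map directly to a lookup by viscosity grade. The sketch below simply encodes the three bands; the 120 mL bottle size in the example is just one of the common sizes mentioned earlier.

```python
# Bottle fill level from the viscosity-grade guidelines above, leaving
# ullage so the lab can agitate the sample to resuspend particles/water.

def fill_fraction(iso_vg):
    """Fraction of the bottle to fill for a given ISO VG grade."""
    if iso_vg <= 32:
        return 0.75   # low viscosity: about three-fourths
    if iso_vg <= 100:
        return 2 / 3  # medium viscosity: about two-thirds
    return 0.5        # high viscosity: about one-half

# Example: target fill volume for a 120 mL bottle of ISO VG 68 oil.
bottle_ml = 120
print(round(bottle_ml * fill_fraction(68)))  # 80 mL
```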

Bottles are available in several materials. Polyethylene plastic is one of the most common bottle materials. It is an opaque material similar to a plastic milk jug. This type of sampling bottle presents a drawback because the oil can't be visually examined after the sample is obtained. Important oil properties, such as sediment, darkness, brightness, clarity and color, can be immediately learned from a visual inspection. Another material is PET plastic. It is a completely clear, glass-like material and is available in standard-sized bottles. This plastic is compatible with most types of lubricating oils and hydraulic fluids, including synthetics. Of course, glass bottles are also available. These bottles tend to be more expensive, are heavier, and there is a risk of breakage during the sampling process. One advantage of glass bottles is that they can be cleaned and reused. The cleanliness of glass bottles often exceeds that of plastic bottles. One of the most important considerations in selecting a sampling bottle is to make sure it is sufficiently clean. The bottle's required cleanliness level should be determined in advance. (See the article titled "Bottle Cleanliness: Is a New Standard Needed?" in the March-April 2003 issue of Practicing Oil Analysis magazine for additional information on sample bottle cleanliness.)

Conclusion
All oil analysis tools, techniques and diagnostic processes are worthless if the oil sample fails to effectively represent the actual condition of the oil in service in the machine. Proper sampling procedures are the foundation of an effective oil analysis program. Without good sampling procedures, time and money are wasted, and incorrect conclusions based upon faulty data could be reached. To ensure that an oil analysis program is perceived as valuable and to boost confidence in the program, it is important to determine, understand and practice the processes that are necessary to obtain a representative oil sample.

Editor's Note: This article is an abridged version of Chapter 4 from Oil Analysis Basics, written by Drew Troyer and Jim Fitch and published by Noria Corporation. More information about the book can be obtained by visiting Noria Corporation's online bookstore at www.noria.com/secure/.

Sidebar 1

Important Tips for Effective Oil Sampling


To achieve bull's-eye oil analysis data, where oil sampling and analysis produce the most representative and trendable information, follow these basic sampling tactics:

1) Machines should be running in application during sampling. That means samples should be collected when machines are at normal operating temperatures, loads, pressures and speeds on a typical day. If that is achieved, the data will be typical as well, which is exactly what is desired.

2) Always sample upstream of filters and downstream of machine components such as bearings, gears, pistons, cams, etc. This will ensure the data is rich in information. It also ensures that no data (such as particles) is being removed by filters or separators.

3) Create specific written procedures for each system sampled. This ensures that each sample is extracted in a consistent manner. Written procedures also help new team members quickly learn the program.

4) Ensure that sampling valves and sampling devices are thoroughly flushed prior to taking the sample. Vacuum samplers and probe-on samplers should be flushed too, and if there are any questions about the cleanliness of the bottle itself, it should also be flushed.

5) Make sure that samples are taken at proper frequencies and that the frequency is sufficient to identify common and important problems. Record the hours on the oil where possible, especially with crankcase and drive train samples. This can be a meter reading or some other record identifying the amount of time the oil has been in the machine. If any makeup fluid has been added, or any change made to the oil such as the addition of additives or a partial drain, communicate this information to the lab.

6) Forward samples to the oil analysis lab immediately after sampling. The properties of the oil in the bottle and the oil in the machine begin to drift apart the moment the sample is drawn. Quickly analyzing the sample ensures the highest quality and timely decisions.
Sidebar 2

Corn Milling Plant Learns the Value of Proper Sampling


Under the guidance of Jim Smith of Allied Services Group, a corn milling plant in the southern United States started an oil analysis program in the fall of 2003. With a predominance of conveyors and other milling equipment, a significant number of the plant's critical assets are large splash-lubricated gearboxes. In early fall, all the plant's critical gearboxes were sampled. Because the equipment was not equipped for best-practice oil sampling - though a sampling point survey was planned - there was no choice but to use the drop-tube method to obtain the samples. Even though plant personnel understood this was not the best method for sampling, with no other option, they decided a baseline sample before making any changes was warranted. Fairly aggressive cleanliness targets of 18/16/13 for major gearboxes were set. Based on these targets, 28 samples from these gearboxes were returned as critical, due in every case to high particle counts.

Immediately after the first baseline samples were taken, a sample point survey was conducted. Shortly thereafter, the report's recommendation of installing pitot-tube-style sample valves in all of the plant's splash-lubricated gearboxes was implemented, in conjunction with a filtration program. At the prescribed time, these gearboxes were resampled using the new sample valves, and the samples were submitted to the lab for analysis. Of the 28 boxes initially deemed critical, 22 were returned as normal.

Editor's Note: The moral of this story is that if you want to get accurate data, particularly where particle counting is a required test, the use of appropriate sample valves is of paramount importance. Receiving and acting on an analysis report that indicates a critical problem but turns out to be nothing more than poor sampling is the easiest way to erode confidence in any oil analysis program.
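Cleanliness targets like 18/16/13 are ISO 4406 codes derived from particle counts per milliliter at the >=4, >=6 and >=14 micron channels. The sketch below uses the power-of-two doubling approximation of the scale rather than the rounded count-range table in the standard, and the example counts are hypothetical.

```python
import math

# Simplified ISO 4406 cleanliness code from particle counts per mL at the
# >=4, >=6 and >=14 micron channels. Each code step doubles the allowed
# count; this power-of-two approximation may differ slightly from the
# rounded table values published in the standard.

def iso4406_code(counts_per_ml):
    """Return the three-number code string, e.g. '18/16/13'."""
    codes = [math.ceil(math.log2(c / 0.01)) for c in counts_per_ml]
    return "/".join(str(c) for c in codes)

# Hypothetical counts per mL at >=4 / >=6 / >=14 microns.
print(iso4406_code((2000, 600, 70)))  # 18/16/13
```

A sample whose computed code exceeds the target in any channel (here, any number above 18/16/13) would be flagged, which is how the 28 gearbox samples came back as critical.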
