
Metrology is 'the science of measurement'. It covers both the theoretical and the practical issues
related to measurement. The aim of measurement is to determine the value of a quantity, for instance
length, temperature, time, electrical resistance or the amount of substance of a certain material. Several
basic elements need to be available to carry out an accurate measurement. First of all, measurement
standards are needed, by which the measurement equipment to be used can be calibrated. The
measurement has to proceed according to a specified protocol.

The assessment of (inter)nationally agreed measurement methods is the work of the Nederlandse
Normalisatie Instituut (NEN). Measurements required by law occupy a special position.

Measurement Standards

For example, the platinum-iridium kilogram or the caesium-133 atomic clock.

Measuring instruments and their characteristics

Such as measurement systems in production and control processes.

Measurements

Including the methods of measurement, estimation of uncertainty and deviation, and environmental
effects.

Legal metrology

The part of metrology that relates to statutory requirements and legislation.


Calibration

Calibration involves determining the metrological characteristics of an instrument. This is achieved by
means of a direct comparison against standards. A calibration certificate is issued and, in most cases, a
sticker is attached. Based on this information a user can decide whether the instrument is fit for the
application in question.

More information on each of the following items is given in the definitions below:
measurement standard / etalon
international (measurement) standard
primary standard
reference standard
working standard

traceability
calibration
reference material
certified reference material

(measurement) standard / etalon


Material measure, measuring instrument, reference material or measuring system intended to define,
realize, conserve or reproduce a unit or one or more values of a quantity to serve as a reference.

Examples
a) 1 kg mass standard
b) 100 Ω standard resistor
c) standard ammeter
d) cesium frequency standard

international (measurement) standard


Standard recognized by an international agreement to serve internationally as the basis for assigning
values to other standards of the quantity concerned.

primary standard
Standard that is designated or widely acknowledged as having the highest metrological qualities and
whose value is accepted without reference to other standards of the same quantity.

reference standard
Standard, generally having the highest metrological quality at a given location or in a given organization,
from which measurements made there are derived.

working standard
Standard that is used routinely to calibrate or check material measures, measuring instruments or
reference materials.
A working standard is usually calibrated against a reference standard.

traceability
Property of the result of a measurement or the value of a standard whereby it can be related to stated
references, usually national or international standards, through an unbroken chain of comparisons all
having stated uncertainties.

calibration
Set of operations that establish, under specified conditions, the relationship between values of
quantities indicated by a measuring instrument or measuring system, or values represented by a
material measure or a reference material, and the corresponding values realized by standards.

Notes
1. The result of a calibration permits either the assignment of values of measurands to the indications or
the determination of corrections with respect to indications
2. A calibration may also determine other metrological properties such as the effect of influence
quantities
3. The result of a calibration may be recorded in a document, sometimes called a calibration certificate
or a calibration report

reference material (RM)


Material or substance one or more of whose property values are sufficiently homogeneous and well
established to be used for the calibration of an apparatus, the assessment of a measurement method, or
for assigning values to materials.

Note
A reference material may be in the form of a pure or mixed gas, liquid or solid. Examples are water for
the calibration of viscometers, sapphire as a heat-capacity calibrant in calorimetry, and solutions used
for calibration in chemical analyses.

Certified reference material (CRM)


Reference material, accompanied by a certificate, one or more of whose property values are certified by
a procedure which establishes traceability to an accurate realization of the unit in which the property
values are expressed, and for which each certified value is accompanied by an uncertainty at a stated
level of confidence.

Calibration Terminology

The field of calibration has a huge vocabulary describing the methods and processes used to verify the
measurement accuracy of masters, gages and other measuring instruments. The following definitions
are for the most commonly used terms.
Calibration

A2LA stands for the American Association for Laboratory Accreditation, a non-profit accrediting
agency specializing in the accreditation of calibration and testing laboratories.

Accreditation is a process used by a qualified independent agency to verify the quality system and
technical capability of a calibration laboratory to a recognized standard such as ISO 17025.

Accuracy defines how close a measured value is to the true value of the dimension.

Calibration is the set of operations which establish, under specified conditions, the relationship between
values of quantities indicated by a measuring instrument or measuring system, or values represented by
a material measure or a reference material and the corresponding values realized by standards.

Calibration Certificate or Report is the document that presents calibration results and other information
relevant to a calibration.

Calibration Frequency refers to the time intervals at which instruments, gages and masters are calibrated.
These intervals are determined by the user based on the conditions of use, to ensure that performance
or size remains within acceptable limits.

Calibration Limits are the tolerances applied to gages and instruments beyond which they are not
considered suitable for use.

International (Measurement) Standard is a standard recognized by an international agreement to serve
internationally as the basis for fixing the value of all other standards of the quantity concerned.

Limits of Permissible Error (of a measuring instrument) are the extreme values of an error permitted by
specifications, regulations, etc. for a given measuring instrument.

Measurement Assurance is a technique that may include, but is not limited to:
1) use of good experimental design principles so the entire measurement process, its components, and
relevant influence factors can be well characterized, monitored and controlled;
2) complete experimental characterization of the measurement process uncertainty, including statistical
variations, contributions from all known or suspected influence factors, imported uncertainties, and the
propagation of uncertainties throughout the measurement process; and
3) continuous monitoring of the performance and state of statistical control of the measurement process
with proven statistical process control techniques, including the measurement of well-characterized
check standards along with the normal workload and the use of appropriate control charts.
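
As an illustration of the control-chart element of measurement assurance, the following is a minimal sketch that computes Shewhart 3-sigma limits from a hypothetical history of check-standard readings; the values and the in_control helper are invented for the example.

```python
import numpy as np

# Hypothetical history of readings of a well-characterized check standard.
history = np.array([10.01, 9.98, 10.02, 10.00, 9.99, 10.03, 9.97, 10.01])

center = history.mean()
sigma = history.std(ddof=1)                          # sample standard deviation
ucl, lcl = center + 3 * sigma, center - 3 * sigma    # Shewhart 3-sigma control limits

def in_control(new_reading: float) -> bool:
    """Return True if a new check-standard reading falls inside the control limits."""
    return lcl <= new_reading <= ucl

print(in_control(10.02), in_control(10.25))          # True False
```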

Measuring and Test Equipment includes all of the measuring instruments, measurement standards,
reference materials, and auxiliary apparatus that are necessary to perform a measurement. This term
includes measuring equipment used in the course of testing and inspection, as well as that used in
calibration.
Quality System is the organizational structure, responsibilities, procedures, processes and resources for
implementing quality management.

Resolution represents the smallest reading unit provided by an instrument.

Traceability is the path by which a measurement can be traced back to the source from which it is
derived, such as NIST in the United States. Direct traceability implies that the laboratory has its primary
masters calibrated directly by such an agency for reduced measurement uncertainty.

Uncertainty of Measurement is a parameter associated with the result of a measurement that
characterizes the dispersion of the values that could reasonably be attributed to the measurand.

Basic Terminology

Accuracy - how close a measurement reading is to the 'true' value of the parameter being measured

Precision - how repeatable or closely-grouped the measurement readings are

Resolution - the level of discrimination that the measuring equipment can show; the smallest unit
change that it can discern or detect

Sensitivity - the smallest change in the input (stimulus) that causes a discernible change in the output

Stability - the tendency of a measuring equipment not to 'drift' or degrade over time and usage

Gauge Repeatability and Reproducibility, or GR&R, is a measure of the capability of a gauge or gage to
obtain the same measurement reading every time the measurement process is undertaken for the same
characteristic or parameter. In other words, GR&R indicates the consistency and stability of measuring
equipment. The ability of a measuring device to provide consistent measurement data is important in
the control of any process.
Mathematically, GR&R is actually a measure of the variation of a gage's measurements, and not of its
stability. An engineer must therefore strive to minimize the GR&R numbers of his or her measuring
equipment, since a high GR&R number indicates instability and is thus undesirable.
As its name implies, GR&R (or simply 'R&R') has two major components, namely, repeatability and
reproducibility. Repeatability is the ability of the same gage to give consistent measurement readings no
matter how many times the same operator of the gage repeats the measurement process.
Reproducibility, on the other hand, is the ability of the same gage to give consistent measurement
readings regardless of who performs the measurements. The evaluation of a gage's reproducibility,
therefore, requires measurement readings to be acquired by different operators under the same
conditions.
Of course, in the real world, there are no existing gages or measuring devices that give exactly the same
measurement readings all the time for the same parameter. There are five (5) major elements of a
measurement system, all of which contribute to the variability of a measurement process: 1) the
standard; 2) the workpiece; 3) the instrument; 4) the people; and 5) the environment.
All of these factors affect the measurement reading acquired during each measurement cycle, although
to varying degrees. Measurement errors, therefore, can only be minimized if the errors or variations
contributed individually by each of these factors can also be minimized. Still, the gage is at the center of
any measurement process, so its proper design and usage must be ensured to optimize its repeatability
and reproducibility.
There are various ways by which the R&R of an instrument may be assessed, one of which is outlined
below. This method, which is based on the method recommended by the Automotive Industry Action
Group (AIAG), first computes the variations due to the measuring equipment and its operators. The
overall GR&R is then computed from these component variations.
Equipment Variation, or EV, represents the repeatability of the measurement process. It is calculated
from measurement data obtained by the same operator from several cycles of measurements, or trials,
using the same equipment. Appraiser Variation or AV, represents the reproducibility of the
measurement process. It is calculated from measurement data obtained by different operators or
appraisers using the same equipment under the same conditions. The R&R is just the combined effect
of EV and AV.

It must be noted that measurement variations are caused not just by EV and AV, but by Part Variation as
well, or PV. PV represents the effect of the variation of parts being measured on the measurement
process, and is calculated from measurement data obtained from several parts.
Thus, the Total Variation (TV), or the overall variation exhibited by the measurement system, consists of
the effects of both R&R and PV. TV is equal to the square root of the sum of (R&R)² and (PV)², i.e.,
TV = sqrt((R&R)² + PV²).
In a GR&R report, the final results are often expressed as %EV, %AV, %R&R, and %PV, which are simply
the ratios of EV, AV, R&R, and PV to TV expressed in %. Thus, %EV=(EV/TV)x100%; %AV=(AV/TV)x100%;
%R&R=(R&R/TV)x100%; and %PV=(PV/TV)x100%. The gage is good if its %R&R is less than 10%. A %R&R
between 10% and 30% may also be acceptable, depending on what it would take to improve the R&R. A
%R&R of more than 30%, however, should prompt the process owner to investigate how the R&R of the
gage can be further improved.
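
A minimal numerical sketch of the ratio arithmetic described above, assuming the component variations EV, AV and PV have already been estimated from a gage study (the numbers here are invented):

```python
import math

EV = 0.12   # equipment variation (repeatability), hypothetical value
AV = 0.05   # appraiser variation (reproducibility), hypothetical value
PV = 0.60   # part variation, hypothetical value

RR = math.sqrt(EV**2 + AV**2)   # combined repeatability and reproducibility
TV = math.sqrt(RR**2 + PV**2)   # total variation

for name, value in [("EV", EV), ("AV", AV), ("R&R", RR), ("PV", PV)]:
    print(f"%{name} = {100 * value / TV:.1f}%")
# A %R&R below 10% is generally considered good; above 30% calls for investigation.
```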
Accuracy and precision

In the fields of science, engineering, industry, and statistics, the accuracy[1] of a measurement system is
the degree of closeness of measurements of a quantity to that quantity's actual (true) value. The
precision[1] of a measurement system, also called reproducibility or repeatability, is the degree to which
repeated measurements under unchanged conditions show the same results.[2] Although the two
words precision and accuracy can be synonymous in colloquial use, they are deliberately contrasted in
the context of the scientific method.

A measurement system can be accurate but not precise, precise but not accurate, neither, or both. For
example, if an experiment contains a systematic error, then increasing the sample size generally
increases precision but does not improve accuracy. The result would be a consistent yet inaccurate
string of results from the flawed experiment. Eliminating the systematic error improves accuracy but
does not change precision.

Accuracy is the proximity of measurement results to the true value; precision is the repeatability or
reproducibility of the measurement.

A measurement system is considered valid if it is both accurate and precise. Related terms include bias
(non-random or directed effects caused by a factor or factors unrelated to the independent variable)
and error (random variability).

The terminology is also applied to indirect measurements, that is, values obtained by a computational
procedure from observed data.

In addition to accuracy and precision, measurements may also have a measurement resolution, which is
the smallest change in the underlying physical quantity that produces a response in the measurement.

In numerical analysis, accuracy is also the nearness of a calculation to the true value; while precision is
the resolution of the representation, typically defined by the number of decimal or binary digits.

WHAT IS THE DIFFERENCE BETWEEN ACCURACY AND PRECISION?

METEOROLOGIST JEFF HABY

Accuracy is defined as, "The ability of a measurement to match the actual value of the quantity being
measured". If in reality it is 34.0 F outside and a temperature sensor reads 34.0 F, then than sensor is
accurate.

Precision is defined as, "(1) The ability of a measurement to be consistently reproduced" and "(2) The
number of significant digits to which a value has been reliably measured". If on several tests the
temperature sensor matches the actual temperature while the actual temperature is held constant,
then the temperature sensor is precise. By the second definition, the number 3.1415 is more precise
than the number 3.14.

An example of a sensor with BAD accuracy and BAD precision: Suppose a lab refrigerator holds a
constant temperature of 38.0 F. A temperature sensor is tested 10 times in the refrigerator. The
temperatures from the test yield the temperatures of: 39.4, 38.1, 39.3, 37.5, 38.3, 39.1, 37.1, 37.8, 38.8,
39.0. This distribution shows no tendency toward a particular value (lack of precision) and does not
acceptably match the actual temperature (lack of accuracy).

An example of a sensor with GOOD accuracy and BAD precision: Suppose a lab refrigerator holds a
constant temperature of 38.0 F. A temperature sensor is tested 10 times in the refrigerator. The
temperatures from the test yield the temperatures of: 37.8, 38.3, 38.1, 38.0, 37.6, 38.2, 38.0, 38.0, 37.4,
38.3. This distribution shows no impressive tendency toward a particular value (lack of precision) but
each value does come close to the actual temperature (high accuracy).

An example of a sensor with BAD accuracy and GOOD precision: Suppose a lab refrigerator holds a
constant temperature of 38.0 F. A temperature sensor is tested 10 times in the refrigerator. The
temperatures from the test yield the temperatures of : 39.2, 39.3, 39.1, 39.0, 39.1, 39.3, 39.2, 39.1, 39.2,
39.2. This distribution does show a tendency toward a particular value (high precision) but every
measurement is well off from the actual temperature (low accuracy).

An example of a sensor with GOOD accuracy and GOOD precision: Suppose a lab refrigerator holds a
constant temperature of 38.0 F. A temperature sensor is tested 10 times in the refrigerator. The
temperatures from the test yield the temperatures of: 38.0, 38.0, 37.8, 38.1, 38.0, 37.9, 38.0, 38.2, 38.0,
37.9. This distribution does show a tendency toward a particular value (high precision) and is very near
the actual temperature each time (high accuracy).

The goal of any meteorological instrument is to have high accuracy (sensor matching reality as close as
possible) and to also have a high precision (being able to consistently replicate results and to measure
with as many significant digits as appropriately possible). Meteorological instruments, including radar,
need to be calibrated in order that they sustain high accuracy and high precision.
Quantification
See also: False precision

In industrial instrumentation, accuracy is the measurement tolerance, or transmission, of the instrument,
and defines the limits of the errors made when the instrument is used in normal operating conditions,
according to Antonio Creus's book on industrial instrumentation.

Ideally a measurement device is both accurate and precise, with measurements all close to and tightly
clustered around the known value. The accuracy and precision of a measurement process is usually
established by repeatedly measuring some traceable reference standard. Such standards are defined in
the International System of Units (abbreviated SI from French: Système international d'unités) and
maintained by national standards organizations such as the National Institute of Standards and
Technology in the United States.

This also applies when measurements are repeated and averaged. In that case, the term standard error
is properly applied: the precision of the average is equal to the known standard deviation of the process
divided by the square root of the number of measurements averaged. Further, the central limit theorem
shows that the probability distribution of the averaged measurements will be closer to a normal
distribution than that of individual measurements.
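
For example, a quick sketch of the standard error calculation (the process standard deviation and the number of measurements are assumed values):

```python
import math

sigma = 0.4   # known standard deviation of the measurement process (assumed)
n = 25        # number of repeated measurements that are averaged

standard_error = sigma / math.sqrt(n)   # precision of the average
print(standard_error)                   # 0.08
```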

With regard to accuracy we can distinguish:
the difference between the mean of the measurements and the reference value, the bias. Establishing
and correcting for bias is necessary for calibration.
the combined effect of bias and precision.

A common convention in science and engineering is to express accuracy and/or precision implicitly by
means of significant figures. Here, when not explicitly stated, the margin of error is understood to be
one-half the value of the last significant place. For instance, a recording of 843.6 m, or 843.0 m, or 800.0
m would imply a margin of ±0.05 m (the last significant place is the tenths place), while a recording of
8,436 m would imply a margin of error of ±0.5 m (the last significant digits are the units).

A reading of 8,000 m, with trailing zeroes and no decimal point, is ambiguous; the trailing zeroes may or
may not be intended as significant figures. To avoid this ambiguity, the number could be represented in
scientific notation: 8.0 × 10³ m indicates that the first zero is significant (hence a margin of ±50 m) while
8.000 × 10³ m indicates that all three zeroes are significant, giving a margin of ±0.5 m. Similarly, it is
possible to use a multiple of the basic measurement unit: 8.0 km is equivalent to 8.0 × 10³ m. In fact, it
indicates a margin of ±0.05 km (±50 m). However, reliance on this convention can lead to false precision
errors when accepting data from sources that do not obey it.
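
A small sketch of the implied-margin convention described above; the implied_margin helper is hypothetical and only handles the simple decimal cases discussed here:

```python
def implied_margin(reading: str) -> float:
    """Half the place value of the last significant digit of a decimal reading."""
    reading = reading.replace(",", "")
    if "." in reading:
        decimals = len(reading.split(".")[1])
        return 0.5 * 10 ** (-decimals)
    # No decimal point: trailing zeroes are ambiguous, so treat the units digit
    # as the last significant place (see the 8,000 m example above).
    return 0.5

print(implied_margin("843.6"))   # 0.05
print(implied_margin("8,436"))   # 0.5
```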

Precision is sometimes stratified into:


Repeatability: the variation arising when all efforts are made to keep conditions constant by using the
same instrument and operator, and repeating during a short time period; and
Reproducibility: the variation arising using the same measurement process among different
instruments and operators, and over longer time periods.
Terminology of ISO 5725

According to ISO 5725-1, accuracy consists of trueness (proximity of measurement results to the true
value) and precision (repeatability or reproducibility of the measurement).
A shift in the meaning of these terms appeared with the publication of the ISO 5725 series of standards,
which is also reflected in the 2008 issue of the "BIPM International Vocabulary of Metrology" (VIM),
items 2.13 and 2.14. [1]

According to ISO 5725-1,[3] the terms trueness and precision are used to describe the accuracy of a
measurement. Trueness refers to the closeness of the mean of the measurement results to the actual
(true) value and precision refers to the closeness of agreement within individual results. Therefore,
according to the ISO standard, the term "accuracy" refers to both trueness and precision.

ISO 5725-1 also avoids the use of the term bias, because it has different connotations outside the fields
of science and engineering, as in medicine and law.

(Figures: accuracy according to BIPM and ISO 5725, showing low accuracy from good trueness with poor
precision, and low accuracy from poor trueness with good precision.)

In binary classification

Accuracy is also used as a statistical measure of how well a binary classification test correctly identifies
or excludes a condition.
That is, the accuracy is the proportion of true results (both true positives and true negatives) in the
population. It is a parameter of the test.
On the other hand, precision or positive predictive value is defined as the proportion of the true
positives against all the positive results (both true positives and false positives).

An accuracy of 100% means that the measured values are exactly the same as the given values.

Also see Sensitivity and specificity.

Accuracy may be determined from Sensitivity and Specificity, provided Prevalence is known, using the
equation:

Accuracy = (Sensitivity × Prevalence) + (Specificity × (1 − Prevalence))

The accuracy paradox for predictive analytics states that predictive models with a given level of accuracy
may have greater predictive power than models with higher accuracy. It may be better to avoid the
accuracy metric in favor of other metrics such as precision and recall.[citation needed] In situations
where the minority class is more important, F-measure may be more appropriate, especially in
situations with very skewed class imbalance.

Another useful performance measure is the balanced accuracy, which avoids inflated performance
estimates on imbalanced datasets. It is defined as the arithmetic mean of sensitivity and specificity, or
the average accuracy obtained on either class:

Balanced accuracy = (Sensitivity + Specificity) / 2

If the classifier performs equally well on either class, this term reduces to the conventional accuracy
(i.e., the number of correct predictions divided by the total number of predictions). In contrast, if the
conventional accuracy is above chance only because the classifier takes advantage of an imbalanced test
set, then the balanced accuracy, as appropriate, will drop to chance.[4] A closely related chance-
corrected measure is:

Informedness = 2 × Balanced accuracy − 1 [5]

A direct approach to debiasing and renormalizing Accuracy is Cohen's kappa, whilst Informedness has
been shown to be a Kappa-family debiased renormalization of Recall.[6] Informedness and Kappa have
the advantage that chance level is defined to be 0, and they have the form of a probability.
Informedness has the stronger property that it is the probability that an informed decision is made
(rather than a guess), when positive. When negative this is still true for the absolute value of
Informedness, but the information has been used to force an incorrect response.[5]
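
The following sketch pulls the binary-classification measures above together; the confusion-matrix counts are invented for illustration:

```python
def classification_metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    """Accuracy, balanced accuracy and informedness from a confusion matrix."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "balanced_accuracy": (sensitivity + specificity) / 2,
        "informedness": sensitivity + specificity - 1,   # chance level is 0
    }

# Imbalanced example: 90 negatives, 10 positives.
print(classification_metrics(tp=5, tn=85, fp=5, fn=5))
```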
In psychometrics and psychophysics
In psychometrics and psychophysics, the term accuracy is interchangeably used with validity and
constant error. Precision is a synonym for reliability and variable error. The validity of a measurement
instrument or psychological test is established through experiment or correlation with behavior.
Reliability is established with a variety of statistical techniques, classically through an internal
consistency test like Cronbach's alpha to ensure sets of related questions have related responses, and
then comparison of those related questions between reference and target populations.[citation needed]
In logic simulation

In logic simulation, a common mistake in evaluation of accurate models is to compare a logic simulation
model to a transistor circuit simulation model. This is a comparison of differences in precision, not
accuracy. Precision is measured with respect to detail and accuracy is measured with respect to
reality.[7][8]
In information systems

The concepts of accuracy and precision have also been studied in the context of databases, information
systems and their sociotechnical context. The necessary extension of these two concepts on the basis of
theory of science suggests that they (as well as data quality and information quality) should be centered
on accuracy defined as the closeness to the true value seen as the degree of agreement of readings or of
calculated values of one same conceived entity, measured or calculated by different methods, in the
context of maximum possible disagreement.[9]
Further information: Precision and recall
See also
Accepted and experimental value
Engineering tolerance
Exactness
Experimental uncertainty analysis
F-score
Precision (statistics)
Sensitivity and specificity
Statistical significance
Significant figures
Probability
Measurement uncertainty
Examples of precision and accuracy (target diagrams): low accuracy with high precision; high accuracy
with low precision; high accuracy with high precision.

Accuracy
Accuracy is how close a measured value is to the actual (true) value.

Precision
Precision is how close the measured values are to each other.
Degree of Accuracy

Accuracy depends on the instrument you are measuring with.

But as a general rule: the degree of accuracy is half a unit each side of the unit of measure.

If your instrument measures in "1"s,
then any value between 6½ and 7½ is measured as "7".

If your instrument measures in "2"s,
then any value between 7 and 9 is measured as "8".
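
A one-line sketch of this rounding rule (the reading helper is invented for illustration):

```python
def reading(true_value: float, resolution: float) -> float:
    """Round a true value to the nearest multiple of the instrument's resolution."""
    return round(true_value / resolution) * resolution

print(reading(7.3, 1))   # 7  -> any value between 6.5 and 7.5 reads as "7"
print(reading(7.3, 2))   # 8  -> any value between 7 and 9 reads as "8"
```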

Accuracy and Precision:

Accuracy refers to the closeness of a measured value to a standard or known value. For example, if in
the lab you obtain a weight measurement of 3.2 kg for a given substance, but the actual or known weight
is 10 kg, then your measurement is not accurate. In this case, your measurement is not close to the
known value.

Precision refers to the closeness of two or more measurements to each other. Using the example
above, if you weigh a given substance five times, and get 3.2 kg each time, then your measurement is
very precise. Precision is independent of accuracy. You can be very precise but inaccurate, as
described above. You can also be accurate but imprecise.
For example, if on average, your measurements for a given substance are close to the known value,
but the measurements are far from each other, then you have accuracy without precision.

A good analogy for understanding accuracy and precision is to imagine a basketball player shooting
baskets. If the player shoots with accuracy, his aim will always take the ball close to or into the
basket. If the player shoots with precision, his aim will always take the ball to the same location which
may or may not be close to the basket. A good player will be both accurate and precise by shooting
the ball the same way each time and each time making it in the basket.

Accuracy and precision are used in context of measurement. Accuracy is the degree of conformity of a
measured or calculated quantity to its actual (true) value, while precision is the degree to which
further measurements or calculations show the same or similar results. In other words, the precision
of an experiment/object/value is a measure of the reliability of the experiment, or how reproducible
the experiment is. The accuracy of an experiment/object/value is a measure of how closely the
experimental results agree with a true or accepted value.

Both accuracy and precision are terms used in the fields of science, engineering and statistics.

Accuracy vs. Precision

Definition: Accuracy is the degree of closeness to the true value. Precision is the degree to which an
instrument or process will repeat the same value.

Measurements: Accuracy concerns a single factor or measurement. Precision requires multiple
measurements or factors.

About: Both terms are used in measuring a process or device.

Uses: Both are used in physics, chemistry, engineering, statistics and so on.

Accuracy and precision are important concepts, as they relate to any experimental measurement that
you would make.

Accuracy

Accuracy refers to the agreement between experimental data and a known value. You can think of it
in terms of a bullseye in which the target is hit close to the center, yet the marks in the target aren't
necessarily close to each other.

Precision
Precision refers to how well experimental values agree with each other. If you hit a bullseye precisely,
then you are able to hit the same spot on the target each time, even though that spot may be distant
from the center.

Calibration

Calibration is a comparison between measurements: one of known magnitude or correctness made
or set with one device, and another measurement made in as similar a way as possible with a second
device.

The device with the known or assigned correctness is called the standard. The second device is the
unit under test, test instrument, or any of several other names for the device being calibrated.

The formal definition of calibration by the International Bureau of Weights and Measures is the
following: "Operation that, under specified conditions, in a first step, establishes a relation between
the quantity values with measurement uncertainties provided by measurement standards and
corresponding indications with associated measurement uncertainties (of the calibrated instrument or
secondary standard) and, in a second step, uses this information to establish a relation for obtaining a
measurement result from an indication."[1]

History

The words "calibrate" and "calibration" entered the English language during the American Civil
War,[2] in descriptions of artillery. Many of the earliest measuring devices were intuitive and easy to
conceptually validate. The term "calibration" probably was first associated with the precise division of
linear distance and angles using a dividing engine and the measurement of gravitational mass using a
weighing scale. These two forms of measurement alone and their direct derivatives supported nearly
all commerce and technology development from the earliest civilizations until about AD 1800.

The Industrial Revolution introduced wide scale use of indirect measurement. The measurement of
pressure was an early example of how indirect measurement was added to the existing direct
measurement of the same phenomena.
(Figures: a direct reading manometer design; an indirect reading design seen from the front and from
the rear, showing the Bourdon tube.)

Before the Industrial Revolution, the most common pressure measurement device was a hydrostatic
manometer, which is not practical for measuring high pressures. Eugene Bourdon fulfilled the need
for high pressure measurement with his Bourdon tube pressure gage.

In the direct reading hydrostatic manometer design on the left, an unknown applied pressure Pa
pushes the liquid down the right side of the manometer U-tube, while a length scale next to the tube
measures the pressure, referenced to the other, open end of the manometer on the left side of the U-
tube (P0). The resulting height difference "H" is a direct measurement of the pressure or vacuum with
respect to atmospheric pressure. The absence of pressure or vacuum would make H=0. The self-
applied calibration would only require the length scale to be set to zero at that same point.

This direct measurement of pressure as a height difference depends on the density of the manometer
fluid, and a calibrated means of measuring the height difference.

In a Bourdon tube shown in the two views on the right, applied pressure entering from the bottom on
the silver barbed pipe tries to straighten a curved tube (or vacuum tries to curl the tube to a greater
extent), moving the free end of the tube that is mechanically connected to the pointer. This is indirect
measurement that depends on calibration to read pressure or vacuum correctly. No self-calibration is
possible, but generally the zero pressure state is correctable by the user.

Even in recent times, direct measurement is used to increase confidence in the validity of the
measurements.

In the early days of US automobile use, people wanted to see the gasoline they were about to buy in a
big glass pitcher, a direct measure of volume and quality via appearance. By 1930, rotary flowmeters
were accepted as indirect substitutes. A hemispheric viewing window allowed consumers to see the
blade of the flowmeter turn as the gasoline was pumped. By 1970, the windows were gone and the
measurement was totally indirect.

Indirect measurements always involve linkages or conversions of some kind. It is seldom possible to
intuitively monitor the measurement. These facts intensify the need for calibration.

Most measurement techniques used today are indirect.

Basic calibration process

The calibration process begins with the design of the measuring instrument that needs to be
calibrated. The design has to be able to "hold a calibration" through its calibration interval. In other
words, the design has to be capable of measurements that are "within engineering tolerance" when
used within the stated environmental conditions over some reasonable period of time. Having a
design with these characteristics increases the likelihood of the actual measuring instruments
performing as expected.

The exact mechanism for assigning tolerance values varies by country and industry type. The
measuring equipment manufacturer generally assigns the measurement tolerance, suggests a
calibration interval and specifies the environmental range of use and storage. The using organization
generally assigns the actual calibration interval, which is dependent on this specific measuring
equipment's likely usage level. A very common interval in the United States for 8 to 12 hours of use 5
days per week is six months. That same instrument in 24/7 usage would generally get a shorter
interval. The assignment of calibration intervals can be a formal process based on the results of
previous calibrations.

Calibration Target of the "Mars Hand Lens Imager (MAHLI)" (September 9, 2012) (3-D image).

The next step is defining the calibration process. The selection of a standard or standards is the most
visible part of the calibration process. Ideally, the standard has less than 1/4 of the measurement
uncertainty of the device being calibrated. When this goal is met, the accumulated measurement
uncertainty of all of the standards involved is considered to be insignificant when the final
measurement is also made with the 4:1 ratio. This ratio was probably first formalized in Handbook 52
that accompanied MIL-STD-45662A, an early US Department of Defense metrology program
specification. It was 10:1 from its inception in the 1950s until the 1970s, when advancing technology
made 10:1 impossible for most electronic measurements.

Maintaining a 4:1 accuracy ratio with modern equipment is difficult. The test equipment being
calibrated can be just as accurate as the working standard. If the accuracy ratio is less than 4:1, then
the calibration tolerance can be reduced to compensate. When 1:1 is reached, only an exact match
between the standard and the device being calibrated is a completely correct calibration. Another
common method for dealing with this capability mismatch is to reduce the accuracy of the device
being calibrated.

For example, a gage with 3% manufacturer-stated accuracy can be changed to 4% so that a 1%
accuracy standard can be used at 4:1. If the gage is used in an application requiring 16% accuracy,
having the gage accuracy reduced to 4% will not affect the accuracy of the final measurements. This is
called a limited calibration. But if the final measurement requires 10% accuracy, then the 3% gage
never can be better than 3.3:1. Then perhaps adjusting the calibration tolerance for the gage would be
a better solution. If the calibration is performed at 100 units, the 1% standard would actually be
anywhere between 99 and 101 units. The acceptable values of calibrations where the test equipment
is at the 4:1 ratio would be 96 to 104 units, inclusive. Changing the acceptable range to 97 to 103 units
would remove the potential contribution of all of the standards and preserve a 3.3:1 ratio. Continuing,
a further change to the acceptable range to 98 to 102 restores more than a 4:1 final ratio.
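
A small sketch of the arithmetic in this example: compute the nominal acceptance limits for a calibration point and, optionally, guard-band them by the standard's own tolerance so the standard's contribution is removed. The function name and argument layout are invented for illustration.

```python
def acceptance_limits(nominal: float, device_tol_pct: float,
                      standard_tol_pct: float, guard_band: bool = False):
    """Acceptance range for a single calibration point, with optional guard banding."""
    device_tol = nominal * device_tol_pct / 100
    standard_tol = nominal * standard_tol_pct / 100
    if guard_band:
        device_tol -= standard_tol   # shrink limits by the standard's own tolerance
    return nominal - device_tol, nominal + device_tol

print(acceptance_limits(100, 4, 1))                    # (96.0, 104.0)
print(acceptance_limits(100, 4, 1, guard_band=True))   # (97.0, 103.0)
```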

This is a simplified example. The mathematics of the example can be challenged. It is important that
whatever thinking guided this process in an actual calibration be recorded and accessible. Informality
contributes to tolerance stacks and other difficult-to-diagnose post-calibration problems.

Also in the example above, ideally the calibration value of 100 units would be the best point in the
gage's range to perform a single-point calibration. It may be the manufacturer's recommendation or it
may be the way similar devices are already being calibrated. Multiple point calibrations are also used.
Depending on the device, a zero unit state, the absence of the phenomenon being measured, may
also be a calibration point. Or zero may be resettable by the user; there are several variations possible.
Again, the points to use during calibration should be recorded.

There may be specific connection techniques between the standard and the device being calibrated
that may influence the calibration. For example, in electronic calibrations involving analog
phenomena, the impedance of the cable connections can directly influence the result.

All of the information above is collected in a calibration procedure, which is a specific test method.
These procedures capture all of the steps needed to perform a successful calibration. The
manufacturer may provide one or the organization may prepare one that also captures all of the
organization's other requirements. There are clearinghouses for calibration procedures such as the
Government-Industry Data Exchange Program (GIDEP) in the United States.

This exact process is repeated for each of the standards used until transfer standards, certified
reference materials and/or natural physical constants, the measurement standards with the least
uncertainty in the laboratory, are reached. This establishes the traceability of the calibration.
See Metrology for other factors that are considered during calibration process development.

After all of this, individual instruments of the specific type discussed above can finally be calibrated.
The process generally begins with a basic damage check. Some organizations such as nuclear power
plants collect "as-found" calibration data before any routine maintenance is performed. After routine
maintenance and deficiencies detected during calibration are addressed, an "as-left" calibration is
performed.

More commonly, a calibration technician is entrusted with the entire process and signs the calibration
certificate, which documents the completion of a successful calibration.

Calibration process success factors

The basic process outlined above is a difficult and expensive challenge. The cost for ordinary
equipment support is generally about 10% of the original purchase price on a yearly basis, as a
commonly accepted rule-of-thumb. Exotic devices such as scanning electron microscopes, gas
chromatograph systems and laser interferometer devices can be even more costly to maintain.

The extent of the calibration program exposes the core beliefs of the organization involved. The
integrity of organization-wide calibration is easily compromised. Once this happens, the links between
scientific theory, engineering practice and mass production that measurement provides can be
missing from the start on new work or eventually lost on old work.

The 'single measurement' device used in the basic calibration process description above does exist.
But, depending on the organization, the majority of the devices that need calibration can have several
ranges and many functionalities in a single instrument. A good example is a common modern
oscilloscope. There easily could be 200,000 combinations of settings to calibrate completely, and there
are limitations on how much of an all-inclusive calibration can be automated.

Every organization using oscilloscopes has a wide variety of calibration approaches open to them. If a
quality assurance program is in force, customers and program compliance efforts can also directly
influence the calibration approach. Most oscilloscopes are capital assets that increase the value of the
organization, in addition to the value of the measurements they make. The individual oscilloscopes
are subject to depreciation for tax purposes over 3, 5, 10 years or some other period in countries with
complex tax codes. The tax treatment of maintenance activity on those assets can bias calibration
decisions.

New oscilloscopes are supported by their manufacturers for at least five years, in general. The
manufacturers can provide calibration services directly or through agents entrusted with the details of
the calibration and adjustment processes.

Very few organizations have only one oscilloscope. Generally, they are either absent or present in
large groups. Older devices can be reserved for less demanding uses and get a limited calibration or
no calibration at all. In production applications, oscilloscopes can be put in racks used only for one
specific purpose. The calibration of that specific scope only has to address that purpose.
This whole process is repeated for each of the basic instrument types present in the organization,
such as the digital multimeter pictured below.

A digital multimeter (top), a rack-mounted oscilloscope (center) and control panel

The picture above also shows the extent of the integration between Quality Assurance and
calibration. The small horizontal unbroken paper seals connecting each instrument to the rack prove
that the instrument has not been removed since it was last calibrated. These seals are also used to
prevent undetected access to the adjustments of the instrument. There are also labels showing the
date of the last calibration and, as dictated by the calibration interval, when the next one is needed.
Some organizations also assign unique identification to each instrument to standardize the record
keeping and keep track of accessories that are integral to a specific calibration condition.

When the instruments being calibrated are integrated with computers, the integrated computer
programs and any calibration corrections are also under control.
Quality

To improve the quality of the calibration and have the results accepted by outside organizations it is
desirable for the calibration and subsequent measurements to be "traceable" to the internationally
defined measurement units. Establishing traceability is accomplished by a formal comparison to a
standard which is directly or indirectly related to national standards (such as NIST in the USA),
international standards, or certified reference materials. This may be done by national standards
laboratories operated by the government or by private firms offering metrology services.

Quality management systems call for an effective metrology system which includes formal, periodic,
and documented calibration of all measuring instruments. The ISO 9000[3] and ISO 17025[4] standards
require that these traceability activities be carried out to a high level and set out how they can be quantified.
Instrument calibration

Calibration may be called for:


a new instrument
after an instrument has been repaired or modified
when a specified time period has elapsed
when a specified usage (operating hours) has elapsed
before and/or after a critical measurement
after an event, for example
after an instrument has had a shock or vibration, or has been exposed to an adverse condition which
potentially may have put it out of calibration or damaged it
sudden changes in weather
whenever observations appear questionable or instrument indications do not match the output of
surrogate instruments
as specified by a requirement, e.g., customer specification, instrument manufacturer
recommendation.

In general use, calibration is often regarded as including the process of adjusting the output or
indication on a measurement instrument to agree with the value of the applied standard, within a
specified accuracy. For example, a thermometer could be calibrated so the error of indication or the
correction is determined, and adjusted (e.g. via calibration constants) so that it shows the true
temperature in Celsius at specific points on the scale. This is the perception of the instrument's end-
user. However, very few instruments can be adjusted to exactly match the standards they are
compared to. For the vast majority of calibrations, the calibration process is actually the comparison
of an unknown to a known and recording the results.
International

In many countries a National Metrology Institute (NMI) will exist which will maintain primary
standards of measurement (the main SI units plus a number of derived units) which will be used to
provide traceability to customers' instruments by calibration. The NMI supports the metrological
infrastructure in that country (and often others) by establishing an unbroken chain, from the top level
of standards to an instrument used for measurement. Examples of National Metrology Institutes are
NPL in the UK, NIST in the United States, PTB in Germany and many others. Since the Mutual
Recognition Agreement was signed it is now straightforward to take traceability from any
participating NMI and it is no longer necessary for a company to obtain traceability for measurements
from the NMI of the country in which it is situated.

To communicate the quality of a calibration the calibration value is often accompanied by a traceable
uncertainty statement to a stated confidence level. This is evaluated through careful uncertainty
analysis. Sometimes a DFS (Departure From Spec) is required to operate machinery in a degraded
state. Whenever this does happen, it must be in writing and authorized by a manager with the
technical assistance of a calibration technician.
See also
Calibration curve
Calibrated geometry
Calibration (statistics)
Color calibration: used to calibrate a computer monitor or display
Deadweight tester
Measurement Microphone Calibration
Measurement uncertainty
Musical tuning: tuning, in music, means calibrating musical instruments to play the right pitch
Precision measurement equipment laboratory
Scale test car: a device used to calibrate weighing scales that weigh railroad cars
Systems of measurement
Calibration curve

A calibration curve plot showing limit of detection (LOD), limit of quantification (LOQ), dynamic
range, and limit of linearity (LOL).

In analytical chemistry, a calibration curve is a general method for determining the concentration of a
substance in an unknown sample by comparing the unknown to a set of standard samples of known
concentration.[1] A calibration curve is one approach to the problem of instrument calibration; other
approaches may mix the standard into the unknown, giving an internal standard.

The calibration curve is a plot of how the instrumental response, the so-called analytical signal,
changes with the concentration of the analyte (the substance to be measured). The operator prepares
a series of standards across a range of concentrations near the expected concentration of analyte in
the unknown. The concentrations of the standards must lie within the working range of the technique
(instrumentation) they are using.[2] Analyzing each of these standards using the chosen technique will
produce a series of measurements. For most analyses a plot of instrument response vs. analyte
concentration will show a linear relationship. The operator can measure the response of the unknown
and, using the calibration curve, can interpolate to find the concentration of analyte.

In more general use, a calibration curve is a curve or table for a measuring instrument which measures
some parameter indirectly, giving values for the desired quantity as a function of values of sensor
output. For example, a calibration curve can be made for a particular pressure transducer to
determine applied pressure from transducer output (a voltage).[3] Such a curve is typically used when
an instrument uses a sensor whose calibration varies from one sample to another, or changes with
time or use; if sensor output is consistent the instrument would be marked directly in terms of the
measured unit.

The data - the concentrations of the analyte and the instrument response for each standard - can be
fit to a straight line, using linear regression analysis. This yields a model described by the equation y =
mx + y0, where y is the instrument response, m represents the sensitivity, and y0 is a constant that
describes the background. The analyte concentration (x) of unknown samples may be calculated from
this equation.
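
A minimal sketch of fitting such a calibration line and reading an unknown off it, using ordinary least squares (the standards and the unknown's signal are invented values):

```python
import numpy as np

# Hypothetical standards: analyte concentrations (x) and instrument responses (y).
x = np.array([0.0, 1.0, 2.0, 4.0, 8.0])
y = np.array([0.02, 0.51, 0.99, 2.03, 4.01])

m, y0 = np.polyfit(x, y, 1)   # least-squares fit of y = m*x + y0

def concentration(signal: float) -> float:
    """Invert the fitted line to estimate the analyte concentration of an unknown."""
    return (signal - y0) / m

print(concentration(1.52))    # roughly 3.0 in the units of the standards
```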
Many different variables can be used as the analytical signal. For instance, chromium (III) might be
measured using a chemiluminescence method, in an instrument that contains a photomultiplier tube
(PMT) as the detector. The detector converts the light produced by the sample into a voltage, which
increases with intensity of light. The amount of light measured is the analytical signal.

Most analytical techniques use a calibration curve. There are a number of advantages to this
approach. First, the calibration curve provides a reliable way to calculate the uncertainty of the
concentration calculated from the calibration curve (using the statistics of the least squares line fit to
the data).[4]

Second, the calibration curve provides data on an empirical relationship. The mechanism for the
instrument's response to the analyte may be predicted or understood according to some theoretical
model, but most such models have limited value for real samples. (Instrumental response is usually
highly dependent on the condition of the analyte, solvents used and impurities it may contain; it could
also be affected by external factors such as pressure and temperature.)

Many theoretical relationships, such as fluorescence, require the determination of an instrumental
constant anyway, by analysis of one or more reference standards; a calibration curve is a convenient
extension of this approach. The calibration curve for a particular analyte in a particular (type of)
sample provides the empirical relationship needed for those particular measurements.

The chief disadvantages are (1) that the standards require a supply of the analyte material, preferably
of high purity and in known concentration, and (2) that the standards and the unknown are in the
same matrix. Some analytes - e.g., particular proteins - are extremely difficult to obtain pure in
sufficient quantity. Other analytes are often in complex matrices, e.g., heavy metals in pond water. In
this case, the matrix may interfere with or attenuate the signal of the analyte. Therefore a comparison
between the standards (which contain no interfering compounds) and the unknown is not possible.
The method of standard addition is a way to handle such a situation.

Error in calibration curve results

As expected, the concentration of the unknown will have some error which can be calculated from the
formula below.[5][6] This formula assumes that a linear relationship is observed for all the standards.
It is important to note that the error in the concentration will be minimal if the signal from the
unknown lies in the middle of the signals of all the standards (the term y_unknown − ȳ goes to zero if
y_unknown = ȳ):

s_x = (s_y / |m|) · sqrt( 1/k + 1/n + (y_unknown − ȳ)² / (m² · Σ(x_i − x̄)²) )

where
s_y is the standard deviation of the residuals
m is the slope of the line
b is the y-intercept of the line
n is the number of standards
k is the number of replicate measurements of the unknown
y_unknown is the measurement of the unknown
ȳ is the average measurement of the standards
x_i are the concentrations of the standards
x̄ is the average concentration of the standards
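
A sketch of this formula in code form, reusing the least-squares fit of the standards (argument names follow the symbols above; the function itself is illustrative):

```python
import numpy as np

def calibration_uncertainty(x, y, y_unknown, k=1):
    """Standard deviation of a concentration read from a linear calibration curve.

    x, y      -- concentrations and signals of the standards
    y_unknown -- (mean) signal measured for the unknown
    k         -- number of replicate measurements of the unknown
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    m, b = np.polyfit(x, y, 1)
    s_y = np.sqrt(np.sum((y - (m * x + b)) ** 2) / (n - 2))   # std dev of residuals
    return (s_y / abs(m)) * np.sqrt(
        1 / k + 1 / n + (y_unknown - y.mean()) ** 2 / (m ** 2 * np.sum((x - x.mean()) ** 2))
    )
```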

Applications

Analysis of concentration
Verifying the proper functioning of an analytical instrument or a sensor device such as an ion selective
electrode
Determining the basic effects of a control treatment (such as a dose-survival curve in clonogenic
assay)

Calibrated geometry

In the mathematical field of differential geometry, a calibrated manifold is a Riemannian manifold
(M, g) of dimension n equipped with a differential p-form φ (for some 0 ≤ p ≤ n) which is a calibration
in the sense that:
φ is closed: dφ = 0, where d is the exterior derivative;
for any x ∈ M and any oriented p-dimensional subspace ξ of TxM, φ|ξ = λ·vol_ξ with λ ≤ 1. Here vol_ξ is
the volume form of ξ with respect to g.

Set Gx(φ) = { ξ as above : φ|ξ = vol_ξ }. (In order for the theory to be nontrivial, we need Gx(φ) to be
nonempty.) Let G(φ) be the union of Gx(φ) for x in M.

The theory of calibrations is due to R. Harvey and B. Lawson and others. Much earlier (in 1966)
Edmond Bonan introduced G2-manifolds and Spin(7)-manifolds, constructed all the parallel forms and
showed that those manifolds were Ricci-flat. Quaternion-Kähler manifolds were simultaneously
studied in 1965 by Edmond Bonan and Vivian Yoh Kraines, and they constructed the parallel 4-form.
Calibrated submanifolds

A p-dimensional submanifold Σ of M is said to be a calibrated submanifold with respect to φ (or
simply φ-calibrated) if TΣ lies in G(φ).

A famous one-line argument shows that calibrated p-submanifolds minimize volume within their
homology class. Indeed, suppose that Σ is calibrated, and Σ′ is a p-submanifold in the same homology
class. Then

vol(Σ) = ∫_Σ φ = ∫_Σ′ φ ≤ vol(Σ′)

where the first equality holds because Σ is calibrated, the second equality is Stokes' theorem (as φ is
closed), and the inequality holds because φ is a calibration.

Calibration (statistics)

There are two main uses of the term calibration in statistics that denote special types of statistical
inference problems. Thus "calibration" can mean
A reverse process to regression, where instead of a future dependent variable being predicted from
known explanatory variables, a known observation of the dependent variables is used to predict a
corresponding explanatory variable.[1]
Procedures in statistical classification to determine class membership probabilities which assess the
uncertainty of a given new observation belonging to each of the already established classes.

In addition, "calibration" is used in statistics with the usual general meaning of calibration. For
example, model calibration can be also used to refer to Bayesian inference about the value of a
model's parameters, given some data set, or more generally to any type of fitting of a statistical
model.

In regression[edit]
The calibration problem in regression is the use of known data on the observed relationship between
a dependent variable and an independent variable to make estimates of other values of the
independent variable from new observations of the dependent variable.[2][3][4] This can be known as
"inverse regression":[5] see also sliced inverse regression.

One example is that of dating objects, using observable evidence such as tree rings for
dendrochronology or carbon-14 for radiometric dating. The observation is caused by the age of the
object being dated, rather than the reverse, and the aim is to use the method for estimating dates
based on new observations. The problem is whether the model used for relating known ages with
observations should aim to minimise the error in the observation, or minimise the error in the date.
The two approaches will produce different results, and the difference will increase if the model is then
used for extrapolation at some distance from the known results.
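As a minimal illustration of the two choices (in Python, with made-up data), the sketch below fits the
relationship both ways: regressing the observation on the known age and then inverting ("classical"
calibration), and regressing age directly on the observation ("inverse" regression). The two estimators
agree near the centre of the data but diverge under extrapolation.

import numpy as np

# Made-up paired data: known ages (x) and an observable proxy (y), e.g. a ring count.
ages = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
obs  = np.array([12.1, 19.8, 31.2, 39.5, 51.0])

# Classical calibration: model the observation as a function of age, then invert.
m, b = np.polyfit(ages, obs, 1)          # obs ≈ m*age + b
def age_classical(y_new):
    return (y_new - b) / m

# Inverse regression: model age directly as a function of the observation.
m_inv, b_inv = np.polyfit(obs, ages, 1)  # age ≈ m_inv*obs + b_inv
def age_inverse(y_new):
    return m_inv * y_new + b_inv

# Compare the two estimates near the centre of the data and far outside it.
for y_new in (30.0, 80.0):
    print(y_new, age_classical(y_new), age_inverse(y_new))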
In classification[edit]

Calibration in classification (see Classification (machine learning) and Statistical classification) is used
to transform classifier scores into class membership probabilities. An overview of calibration methods
for two-class and multi-class classification tasks is given by Gebel (2009).[6]

The following univariate calibration methods exist for transforming classifier scores into class
membership probabilities in the two-class case:
Assignment value approach, see Garczarek (2002)[7]
Bayes approach, see Bennett (2002)[8]
Isotonic regression, see Zadrozny and Elkan (2002)[9]
Logistic regression, see Platt (1999)[10]
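As a concrete illustration of the logistic (Platt-style) scaling listed above, the following Python sketch
fits a one-dimensional logistic mapping from raw classifier scores to class membership probabilities on
a held-out calibration set; the scores and labels are made up, and a production implementation would
normally rely on an existing library routine.

import numpy as np

def fit_platt(scores, labels, lr=0.1, n_iter=5000):
    """Fit p(y=1 | s) = 1 / (1 + exp(-(a*s + c))) by gradient descent on the
    log-loss over a held-out calibration set."""
    a, c = 1.0, 0.0
    s = np.asarray(scores, float)
    y = np.asarray(labels, float)
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(a * s + c)))
        grad_a = np.mean((p - y) * s)   # derivative of the mean log-loss w.r.t. a
        grad_c = np.mean(p - y)         # derivative of the mean log-loss w.r.t. c
        a -= lr * grad_a
        c -= lr * grad_c
    return a, c

# Made-up raw scores and true binary labels from a calibration set
scores = [-2.0, -1.0, -0.5, 0.2, 0.8, 1.5, 2.5]
labels = [0, 0, 0, 1, 0, 1, 1]
a, c = fit_platt(scores, labels)
print(1.0 / (1.0 + np.exp(-(a * 0.5 + c))))   # calibrated probability for a score of 0.5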

The following multivariate calibration methods exist for transforming classifier scores into class
membership probabilities when there are more than two classes:
Reduction to binary tasks and subsequent pairwise coupling, see Hastie and Tibshirani (1998)[11]
Dirichlet calibration, see Gebel (2009)

Color calibration
From Wikipedia, the free encyclopedia

The aim of color calibration is to measure and/or adjust the color response of a device (input or
output) to a known state. In International Color Consortium (ICC) terms, this is the basis for an
additional color characterization of the device and later profiling.[1] In non-ICC workflows, calibration
sometimes refers to establishing a known relationship to a standard color space[2] in one go. The
device that is to be calibrated is sometimes known as a calibration source; the color space that serves
as a standard is sometimes known as a calibration target.[citation needed] Color calibration is a
requirement for all devices taking an active part in a color-managed workflow.
Color calibration is used by many industries, such as television production, gaming, photography,
engineering, chemistry, medical and more.
Contents
1 Information flow and output distortion
2 Color perception
3 Calibration techniques and procedures
3.1 Camera
3.2 Scanner
3.3 Display
3.4 Printer
4 See also
5 References
6 External links

Information flow and output distortion[edit]

Input data can come from device sources like digital cameras, image scanners or any other measuring
devices. Those inputs can be either monochrome (in which case only the response curve needs to be
calibrated, though in a few select cases one must also specify the color or spectral power distribution
that that single channel corresponds to) or specified in multidimensional color - most commonly in the
three channel RGB model. Input data is in most cases calibrated against a profile connection space
(PCS).[3]

One of the most important factors to consider when dealing with color calibration is having a valid
source. If the color-measuring source does not match the display's capabilities, the calibration will be
ineffective and give false readings.

The main distorting factors on the input stage stem from the amplitude nonlinearity of the channel
response(s), and in the case of a multidimensional datastream the non-ideal wavelength responses of
the individual color separation filters (most commonly a color filter array (CFA)) in combination with
the spectral power distribution of the scene illumination.

After this, the data is often circulated in the system, translated into a working-space RGB for viewing
and editing.

In the output stage when exporting to a viewing device such as a CRT or LCD screen or a digital
projector, the computer sends a signal to the computer's graphic card in the form RGB
[Red,Green,Blue]. The dataset [255,0,0] signals only a device instruction, not a specific color. This
instruction [R,G,B]=[255,0,0] then causes the connected display to show Red at the maximum
achievable brightness [255], while the Green and Blue components of the display remain dark [0]. The
resultant color being displayed, however, depends on two main factors:
the phosphors or another system actually producing a light that falls inside the red spectrum;
the overall brightness of the color resulting in the desired color perception: an extremely bright light
source will always be seen as white, irrespective of spectral composition.

Hence every output device will have its unique color signature, displaying a certain color according to
manufacturing tolerances and material deterioration through use and age. If the output device is a
printer, additional distorting factors are the qualities of a particular batch of paper and ink.
The conductive qualities and standards-compliance of connecting cables, circuitry and equipment can
also alter the electrical signal at any stage in the signal flow. (A partially inserted VGA connector can
result in a monochrome display, for example, as some pins are not connected.)
Color perception[edit]

Color perception is subject to ambient light levels, and the ambient white point; for example, a red
object looks black in blue light. It is therefore not possible to achieve calibration that will make a
device look correct and consistent in all capture or viewing conditions. The computer display and
calibration target will have to be considered in controlled, predefined lighting conditions.
Calibration techniques and procedures[edit]

Calibration Target of the "Mars Hand Lens Imager (MAHLI)" on the Mars Curiosity rover (September 9,
2012) (3-D image).

The most common form of calibration aims at adjusting cameras, scanners, monitors and printers for
photographic reproduction. The aim is that a printed copy of a photograph appear identical in
saturation and dynamic range to the original or a source file on a computer display. This means that
three independent calibrations need to be performed:
The camera or scanner needs a device-specific calibration to represent the original's estimated colors
in an unambiguous way.
The computer display needs a device-specific calibration to reproduce the colors of the image color
space.
The printer needs a device-specific calibration to reproduce the colors of the image color space.

These goals can either be realized via direct value translation from source to target, or by using a
common known reference color space as middle ground. In the most commonly used color profile
system, ICC, this is known as the PCS or "Profile Connection Space".
Camera[edit]

The camera calibration needs a known calibration target to be photographed and the resulting output
from the camera to be converted to color values. A correction profile can then be built using the
difference between the camera result values and the known reference values. When two or more
cameras need to be calibrated relatively to each other, to reproduce the same color values, the
technique of color mapping can be used.
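A minimal sketch of the correction-profile idea, assuming hypothetical camera and reference RGB
patch values: a 3x3 matrix is fitted by least squares so that the camera's responses map as closely as
possible onto the known target values. Real ICC profiling also handles nonlinearity and a profile
connection space, so this is only the linear core of the idea.

import numpy as np

# Hypothetical RGB values the camera produced for a chart's patches (one row per patch)
camera = np.array([[0.80, 0.10, 0.12],
                   [0.15, 0.75, 0.20],
                   [0.10, 0.12, 0.70],
                   [0.50, 0.50, 0.48]])
# Known reference RGB values of the same patches
reference = np.array([[0.85, 0.05, 0.05],
                      [0.05, 0.80, 0.10],
                      [0.05, 0.05, 0.80],
                      [0.50, 0.50, 0.50]])

# Least-squares fit of a 3x3 matrix M such that camera @ M ≈ reference
M, *_ = np.linalg.lstsq(camera, reference, rcond=None)

corrected = camera @ M          # apply the correction to the camera values
print(np.round(corrected, 3))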

Scanner[edit]
An IT8.7 Target by LaserSoft Imaging

Creating a scanner profile requires a target source, such as an IT8 target: an original with many small
color fields, which was measured by the developer with a photometer. The scanner reads this original
and compares the scanned color values with the target's reference values. Taking the differences
between these values into account, an ICC profile is created, which relates the device-specific color
space (RGB color space) to a device-independent color space (L*a*b color space). Thus, the scanner is
able to output, with color fidelity, what it reads.
Display[edit]

For calibrating the monitor a colorimeter is attached flat to the display's surface, shielded from all
ambient light. The calibration software sends a series of color signals to the display and compares the
values that were actually sent against the readings from the calibration device. This establishes the
current offsets in color display. Depending on the calibration software and type of monitor used, the
software either creates a correction matrix (i.e. an ICC profile) for color values before being sent to
the display, or gives instructions for altering the display's brightness/contrast and RGB values through
the OSD. This tunes the display to reproduce fairly accurately the in-gamut part of a desired color
space. The calibration target for this kind of calibration is that of print stock paper illuminated by D65
light at 120 cd/m2.
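As a rough sketch of the comparison step, assuming hypothetical luminance readings from a
colorimeter for a ramp of gray input levels, the display's tone response can be summarized by fitting a
gamma exponent; a real calibration package would go on to build per-channel lookup tables or an ICC
profile from such measurements.

import numpy as np

# Hypothetical measurements: normalized input levels sent to the display and the
# normalized luminance the colorimeter reported back for each level.
levels   = np.array([0.1, 0.2, 0.4, 0.6, 0.8, 1.0])
measured = np.array([0.006, 0.027, 0.123, 0.313, 0.613, 1.000])

# Fit measured ≈ levels**gamma by linear regression in log-log space.
gamma = np.polyfit(np.log(levels), np.log(measured), 1)[0]
print(round(gamma, 2))   # close to 2.2, a typical display gamma, for this made-up data

# A correction lookup table would then pre-compensate: sending value**(1/gamma)
# makes the displayed luminance track the desired response.
lut = np.linspace(0, 1, 256) ** (1.0 / gamma)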
Printer[edit]

The ICC profile for a printer is created by using a photometer to compare a test print with the
original reference file. The test chart contains known CMYK colors; the offsets between these and the
actual L*a*b colors measured by the photometer yield an ICC profile. Another way to ICC-profile a
printer is to use a calibrated scanner as the measuring device for the printed CMYK test chart instead
of a photometer. A calibration profile is necessary for each printer/paper/ink combination.

Measurement uncertainty

In metrology, measurement uncertainty is a non-negative parameter characterizing the dispersion of


the values attributed to a measured quantity. The uncertainty has a probabilistic basis and reflects
incomplete knowledge of the quantity. All measurements are subject to uncertainty and a measured
value is only complete if it is accompanied by a statement of the associated uncertainty. Relative
uncertainty is the measurement uncertainty divided by the measured value.
Contents
1 Background
2 Random and systematic errors
3 Measurement model
4 Propagation of distributions
5 Type A and Type B evaluation of uncertainty
6 Sensitivity coefficients
7 Uncertainty evaluation
7.1 Models with any number of output quantities
8 Alternative perspective
9 See also
10 References
11 Further reading

Background[edit]

The purpose of measurement is to provide information about a quantity of interest, a measurand.


For example, the measurand might be the size of a cylindrical feature per ASME Y14.5-2009, the
volume of a vessel, the potential difference between the terminals of a battery, or the mass
concentration of lead in a flask of water.

No measurement is exact. When a quantity is measured, the outcome depends on the measuring
system, the measurement procedure, the skill of the operator, the environment, and other effects.[1]
Even if the quantity were to be measured several times, in the same way and in the same
circumstances, a different measured value would in general be obtained each time, assuming the
measuring system has sufficient resolution to distinguish between the values.

The dispersion of the measured values would relate to how well the measurement is made. Their
average would provide an estimate of the true value of the quantity that generally would be more
reliable than an individual measured value. The dispersion and the number of measured values would
provide information relating to the average value as an estimate of the true value. However, this
information would not generally be adequate.

The measuring system may provide measured values that are not dispersed about the true value, but
about some value offset from it. Take a domestic bathroom scale. Suppose it is not set to show zero
when there is nobody on the scale, but to show some value offset from zero. Then, no matter how
many times the person's mass were re-measured, the effect of this offset would be inherently present
in the average of the values.

Measurement uncertainty has important economic consequences for calibration and measurement
activities. In calibration reports, the magnitude of the uncertainty is often taken as an indication of
the quality of the laboratory, and smaller uncertainty values generally are of higher value and of
higher cost. The American Society of Mechanical Engineers (ASME) has produced a suite of standards
addressing various aspects of measurement uncertainty. ASME B89.7.3.1, Guidelines for Decision
Rules in Determining Conformance to Specifications addresses the role of measurement uncertainty
when accepting or rejecting products based on a measurement result and a product specification.
ASME B89.7.3.2, Guidelines for the Evaluation of Dimensional Measurement Uncertainty, provides a
simplified approach (relative to the GUM) to the evaluation of dimensional measurement uncertainty.
ASME B89.7.3.3, Guidelines for Assessing the Reliability of Dimensional Measurement Uncertainty
Statements, examines how to resolve disagreements over the magnitude of the measurement
uncertainty statement. ASME B89.7.4, Measurement Uncertainty and Conformance Testing: Risk
Analysis, provides guidance on the risks involved in any product acceptance/rejection decision.
The "Guide to the Expression of Uncertainty in Measurement", commonly known as the GUM, is the
definitive document on this subject. The GUM has been adopted by all major National Measurement
Institutes (NMIs), by international laboratory accreditation standards such as ISO 17025 which is
required for ILAC accreditation, and employed in most modern national and international
documentary standards on measurement methods and technology. See Joint Committee for Guides in
Metrology.
Random and systematic errors[edit]
Main article: Measurement error

There are two types of measurement error: systematic error and random error.

A systematic error (an estimate of which is known as a measurement bias) is associated with the fact
that a measured value contains an offset. In general, a systematic error, regarded as a quantity, is a
component of error that remains constant or depends in a specific manner on some other quantity.

A random error is associated with the fact that when a measurement is repeated it will generally
provide a measured value that is different from the previous value. It is random in that the next
measured value cannot be predicted exactly from previous such values. (If a prediction were possible,
allowance for the effect could be made.)

In general, there can be a number of contributions to each type of error.

The Performance Test Standard PTC 19.1-2005 Test Uncertainty, published by ASME, discusses
systematic and random errors in considerable detail. In fact, it conceptualizes its basic uncertainty
categories in these terms.
Measurement model[edit]

The above discussion concerns the direct measurement of a quantity, which incidentally occurs rarely.
For example, the bathroom scale may convert a measured extension of a spring into an estimate of
the measurand, the mass of the person on the scale. The particular relationship between extension
and mass is determined by the calibration of the scale. A measurement model converts a quantity
value into the corresponding value of the measurand.

There are many types of measurement in practice and therefore many models. A simple
measurement model (for example for a scale, where the mass is proportional to the extension of the
spring) might be sufficient for everyday domestic use. Alternatively, a more sophisticated model of a
weighing, involving additional effects such as air buoyancy, is capable of delivering better results for
industrial or scientific purposes. In general there are often several different quantities, for example
temperature, humidity and displacement, that contribute to the definition of the measurand, and that
need to be measured.

Correction terms should be included in the measurement model when the conditions of measurement
are not exactly as stipulated. These terms correspond to systematic errors. Given an estimate of a
correction term, the relevant quantity should be corrected by this estimate. There will be an
uncertainty associated with the estimate, even if the estimate is zero, as is often the case. Instances of
systematic errors arise in height measurement, when the alignment of the measuring instrument is
not perfectly vertical, and the ambient temperature is different from that prescribed. Neither the
alignment of the instrument nor the ambient temperature is specified exactly, but information
concerning these effects is available, for example that the lack of alignment is at most 0.001° and that
the ambient temperature at the time of measurement differs from that stipulated by at most 2 °C.

As well as raw data representing measured values, there is another form of data that is frequently
needed in a measurement model. Some such data relate to quantities representing physical
constants, each of which is known imperfectly. Examples are material constants such as modulus of
elasticity and specific heat. There are often other relevant data given in reference books, calibration
certificates, etc., regarded as estimates of further quantities.

The items required by a measurement model to define a measurand are known as input quantities in
a measurement model. The model is often referred to as a functional relationship. The output
quantity in a measurement model is the measurand.

Formally, the output quantity, denoted by Y, about which information is required, is often related to
input quantities, denoted by X_1, ..., X_N, about which information is available, by a measurement
model in the form of

    Y = f(X_1, ..., X_N),

where f is known as the measurement function. A general expression for a measurement model is

    h(Y, X_1, ..., X_N) = 0.

It is taken that a procedure exists for calculating Y given X_1, ..., X_N, and that Y is uniquely defined
by this equation.
Propagation of distributions[edit]
See also: Propagation of uncertainty

The true values of the input quantities X_1, ..., X_N are unknown. In the GUM approach, X_1, ..., X_N
are characterized by probability distributions and treated mathematically as random variables. These
distributions describe the respective probabilities of their true values lying in different intervals, and
are assigned based on available knowledge concerning X_1, ..., X_N. Sometimes, some or all of
X_1, ..., X_N are interrelated and the relevant distributions, which are known as joint, apply to these
quantities taken together.

Consider estimates x_1, ..., x_N, respectively, of the input quantities X_1, ..., X_N, obtained from
certificates and reports, manufacturers' specifications, the analysis of measurement data, and so on.
The probability distributions characterizing X_1, ..., X_N are chosen such that the estimates
x_1, ..., x_N, respectively, are the expectations[2] of X_1, ..., X_N. Moreover, for the i-th input
quantity, consider a so-called standard uncertainty, given the symbol u(x_i), defined as the standard
deviation[2] of the input quantity X_i. This standard uncertainty is said to be associated with the
(corresponding) estimate x_i.

The use of available knowledge to establish a probability distribution to characterize each quantity of
interest applies to the X_i and also to the measurand Y. In the latter case, the characterizing
probability distribution for Y is determined by the measurement model together with the probability
distributions for the X_i. The determination of the probability distribution for Y from this information
is known as the propagation of distributions.[2]

As an illustration, consider a measurement model such as Y = X_1 + X_2, in which X_1 and X_2 are
each characterized by a (different) rectangular, or uniform, probability distribution: Y then has a
symmetric trapezoidal probability distribution.
Once the input quantities X_1, ..., X_N have been characterized by appropriate probability
distributions, and the measurement model has been developed, the probability distribution for the
measurand Y is fully specified in terms of this information. In particular, the expectation of Y is used
as the estimate y of Y, and the standard deviation of Y as the standard uncertainty u(y) associated
with this estimate.

Often an interval containing Y with a specified probability is required. Such an interval, a coverage
interval, can be deduced from the probability distribution for Y. The specified probability is known as
the coverage probability. For a given coverage probability, there is more than one coverage interval.
The probabilistically symmetric coverage interval is an interval for which the probabilities (summing
to one minus the coverage probability) of a value to the left and the right of the interval are equal.
The shortest coverage interval is an interval for which the length is least over all coverage intervals
having the same coverage probability.

Prior knowledge about the true value of the output quantity Y can also be considered. For the
domestic bathroom scale, the fact that the person's mass is positive, and that it is the mass of a
person, rather than that of a motor car, that is being measured, both constitute prior knowledge
about the possible values of the measurand in this example. Such additional information can be used
to provide a probability distribution for Y that can give a smaller standard deviation for Y and hence a
smaller standard uncertainty associated with the estimate of Y.[3][4][5]
Type A and Type B evaluation of uncertainty[edit]

Knowledge about an input quantity X_i is inferred from repeated measured values (Type A evaluation
of uncertainty), or from scientific judgement or other information concerning the possible values of
the quantity (Type B evaluation of uncertainty).

In Type A evaluations of measurement uncertainty, the assumption is often made that the
distribution best describing an input quantity X, given repeated measured values of it (obtained
independently), is a Gaussian distribution. X then has expectation equal to the average measured
value and standard deviation equal to the standard deviation of the average. When the uncertainty is
evaluated from a small number of measured values (regarded as instances of a quantity characterized
by a Gaussian distribution), the corresponding distribution can be taken as a t-distribution.[6] Other
considerations apply when the measured values are not obtained independently.

For a Type B evaluation of uncertainty, often the only available information is that X lies in a specified
interval [a, b]. In such a case, knowledge of the quantity can be characterized by a rectangular
probability distribution[6] with limits a and b. If different information were available, a probability
distribution consistent with that information would be used.[7]
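A minimal sketch of the two evaluations in Python, assuming hypothetical repeated readings for the
Type A case and a stated half-width a for the Type B case; the divisor sqrt(3) is the standard deviation
of a rectangular distribution of half-width a.

import numpy as np

# Type A: repeated, independent readings of the same quantity (made-up values).
readings = np.array([10.03, 10.01, 10.04, 9.99, 10.02])
x_best = readings.mean()
u_typeA = readings.std(ddof=1) / np.sqrt(readings.size)   # standard deviation of the mean

# Type B: the only information is that the quantity lies in [x0 - a, x0 + a],
# e.g. a half-width a taken from a manufacturer's specification.
a = 0.05
u_typeB = a / np.sqrt(3)   # standard deviation of a rectangular distribution of half-width a

print(x_best, u_typeA, u_typeB)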
Sensitivity coefficients[edit]
Main article: Sensitivity analysis

Sensitivity coefficients c_1, ..., c_N describe how the estimate y of Y would be influenced by small
changes in the estimates x_1, ..., x_N of the input quantities X_1, ..., X_N. For the measurement
model Y = f(X_1, ..., X_N), the sensitivity coefficient c_i equals the partial derivative of first order of f
with respect to X_i evaluated at x_1, ..., x_N. For a linear measurement model

    Y = c_1 X_1 + ... + c_N X_N,

with X_1, ..., X_N independent, a change in x_i equal to u(x_i) would give a change c_i u(x_i) in y. This
statement would generally be approximate for measurement models Y = f(X_1, ..., X_N). The relative
magnitudes of the terms |c_i| u(x_i) are useful in assessing the respective contributions from the
input quantities to the standard uncertainty u(y) associated with y. The standard uncertainty u(y)
associated with the estimate y of the output quantity Y is not given by the sum of the |c_i| u(x_i), but
these terms combined in quadrature,[8] namely by an expression that is generally approximate for
measurement models Y = f(X_1, ..., X_N):

    u^2(y) = c_1^2 u^2(x_1) + ... + c_N^2 u^2(x_N),

which is known as the law of propagation of uncertainty.

When the input quantities contain dependencies, the above formula is augmented by terms
containing covariances,[8] which may increase or decrease u(y).
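The quadrature rule, together with the covariance terms for dependent inputs, can be written out
directly; the sketch below assumes hypothetical sensitivity coefficients, standard uncertainties and
(optionally) a covariance matrix.

import numpy as np

def combined_standard_uncertainty(c, u, cov=None):
    """Law of propagation of uncertainty.
    c   : sensitivity coefficients c_i (partial derivatives of f at the estimates)
    u   : standard uncertainties u(x_i) of the input estimates
    cov : optional full covariance matrix of the inputs; if omitted, the
          inputs are treated as independent."""
    c = np.asarray(c, float)
    u = np.asarray(u, float)
    if cov is None:
        cov = np.diag(u**2)           # independent inputs: only variances on the diagonal
    return float(np.sqrt(c @ cov @ c))

# Hypothetical example: y = f(x1, x2) with c = (2.0, -1.5), u(x1) = 0.01, u(x2) = 0.02
print(combined_standard_uncertainty([2.0, -1.5], [0.01, 0.02]))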
Uncertainty evaluation[edit]
See also: Uncertainty analysis and Quality of analytical results

The main stages of uncertainty evaluation constitute formulation and calculation, the latter consisting
of propagation and summarizing. The formulation stage constitutes
defining the output quantity Y (the measurand),
identifying the input quantities on which Y depends,
developing a measurement model relating Y to the input quantities, and
on the basis of available knowledge, assigning probability distributions (Gaussian, rectangular, etc.)
to the input quantities (or a joint probability distribution to those input quantities that are not
independent).

The calculation stage consists of propagating the probability distributions for the input quantities
through the measurement model to obtain the probability distribution for the output quantity Y, and
summarizing by using this distribution to obtain
the expectation of Y, taken as an estimate y of Y,
the standard deviation of Y, taken as the standard uncertainty u(y) associated with y, and
a coverage interval containing Y with a specified coverage probability.

The propagation stage of uncertainty evaluation is known as the propagation of distributions, various
approaches for which are available, including
the GUM uncertainty framework, constituting the application of the law of propagation of
uncertainty, and the characterization of the output quantity Y by a Gaussian or a t-distribution,
analytic methods, in which mathematical analysis is used to derive an algebraic form for the
probability distribution for Y, and
a Monte Carlo method,[2] in which an approximation to the distribution function for Y is established
numerically by making random draws from the probability distributions for the input quantities, and
evaluating the model at the resulting values.

For any particular uncertainty evaluation problem, approach 1), 2) or 3) (or some other approach) is
used, 1) being generally approximate, 2) exact, and 3) providing a solution with a numerical accuracy
that can be controlled.
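A minimal sketch of approach 3), the Monte Carlo method, for a hypothetical additive model
Y = X_1 + X_2 with rectangular inputs; a full JCGM 101 implementation would, among other things,
choose the number of trials adaptively.

import numpy as np

rng = np.random.default_rng(1)
M = 200_000                                  # number of Monte Carlo trials

# Rectangular (uniform) distributions assigned to the two input quantities.
x1 = rng.uniform(-1.0, 1.0, M)
x2 = rng.uniform(-0.5, 0.5, M)

y = x1 + x2                                  # evaluate the measurement model at each draw

estimate = y.mean()                          # expectation of Y, taken as the estimate
u_y = y.std(ddof=1)                          # standard uncertainty associated with the estimate
lo, hi = np.percentile(y, [2.5, 97.5])       # probabilistically symmetric 95 % coverage interval

print(estimate, u_y, (lo, hi))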
Models with any number of output quantities[edit]

When the measurement model is multivariate, that is, it has any number of output quantities, the
above concepts can be extended.[9] The output quantities are now described by a joint probability
distribution, the coverage interval becomes a coverage region, the law of propagation of uncertainty
has a natural generalization, and a calculation procedure that implements a multivariate Monte Carlo
method is available.
Alternative perspective[edit]
Most of this article represents the most common view of measurement uncertainty, which assumes
that random variables are proper mathematical models for uncertain quantities and simple
probability distributions are sufficient for representing all forms of measurement uncertainties. In
some situations, however, a mathematical interval rather than a probability distribution might be a
better model of uncertainty. This may include situations involving periodic measurements, binned
data values, censoring, detection limits, or plus-minus ranges of measurements where no particular
probability distribution seems justified or where one cannot assume that the errors among individual
measurements are completely independent.

A more robust representation of measurement uncertainty in such cases can be fashioned from
intervals.[10][11] An interval [a,b] is different from a rectangular or uniform probability distribution
over the same range in that the latter suggests that the true value lies inside the right half of the
range [(a + b)/2, b] with probability one half, and within any subinterval of [a,b] with probability equal
to the width of the subinterval divided by b - a. The interval makes no such claims, except simply that
the measurement lies somewhere within the interval. Distributions of such measurement intervals
can be summarized as probability boxes and Dempster-Shafer structures over the real numbers,
which incorporate both aleatoric and epistemic uncertainties.
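A small illustration of the contrast, using two hypothetical measured intervals: interval propagation
only bounds the result, whereas treating the same ranges as uniform distributions assigns
probabilities to sub-ranges.

import numpy as np

# Two measurements known only as intervals [a, b] (made-up bounds).
x = (9.8, 10.2)
y = (4.9, 5.3)

# Interval propagation of the sum: only the bounds are claimed, nothing more.
s_lo, s_hi = x[0] + y[0], x[1] + y[1]
print("sum lies in", (s_lo, s_hi))

# Treating the same ranges as uniform distributions instead assigns probability
# to subintervals, e.g. the chance that the sum exceeds 15.3:
rng = np.random.default_rng(0)
draws = rng.uniform(*x, 100_000) + rng.uniform(*y, 100_000)
print("P(sum > 15.3) under the uniform assumption:", np.mean(draws > 15.3))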
See also[edit]
Accuracy and precision
Confidence interval
Experimental uncertainty analysis
History of measurement
List of uncertainty propagation software
Propagation of uncertainty
Stochastic measurement procedure
Test method
Uncertainty
Uncertainty quantification
References[edit]
^ Bell, S. Measurement Good Practice Guide No. 11. A Beginner's Guide to Uncertainty of
Measurement. Tech. rep., National Physical Laboratory, 1999.
^ a b c d JCGM 101:2008. Evaluation of measurement data - Supplement 1 to the "Guide to the
expression of uncertainty in measurement" - Propagation of distributions using a Monte Carlo
method. Joint Committee for Guides in Metrology.
^ Bernardo, J., and Smith, A. Bayesian Theory. John Wiley & Sons, New York, USA, 2000. 3.20
^ Elster, C. Calculation of uncertainty in the presence of prior knowledge. Metrologia 44 (2007),
111-116. 3.20
^ EURACHEM/CITAC. Quantifying uncertainty in analytical measurement. Tech. Rep. Guide CG4,
EURACHEM/CITAC [EURACHEM/CITAC Guide], 2000. Second edition.
^ a b JCGM 104:2009. Evaluation of measurement data - An introduction to the "Guide to the
expression of uncertainty in measurement" and related documents. Joint Committee for Guides in
Metrology.
^ Weise, K., and Wöger, W. A Bayesian theory of measurement uncertainty. Meas. Sci. Technol. 3
(1992), 1-11, 4.8.
^ a b JCGM 100:2008. Evaluation of measurement data - Guide to the expression of uncertainty in
measurement. Joint Committee for Guides in Metrology.
^ Joint Committee for Guides in Metrology (2011). JCGM 102: Evaluation of Measurement Data -
Supplement 2 to the "Guide to the Expression of Uncertainty in Measurement" - Extension to Any
Number of Output Quantities (Technical report). JCGM. Retrieved 13 February 2013.
^ Manski, C.F. (2003); Partial Identification of Probability Distributions, Springer Series in Statistics,
Springer, New York
^ Ferson, S., V. Kreinovich, J. Hajagos, W. Oberkampf, and L. Ginzburg (2007); Experimental
Uncertainty Estimation and Statistics for Data Having Interval Uncertainty, Sandia National
Laboratories SAND 2007-0939
Further reading[edit]
JCGM 200:2008. International Vocabulary of Metrology - Basic and general concepts and associated
terms, 3rd Edition. Joint Committee for Guides in Metrology.
ISO 3534-1:2006. Statistics - Vocabulary and symbols - Part 1: General statistical terms and terms
used in probability. ISO.
JCGM 106:2012. Evaluation of measurement data - The role of measurement uncertainty in
conformity assessment. Joint Committee for Guides in Metrology.
Cox, M. G., and Harris, P. M. SSfM Best Practice Guide No. 6, Uncertainty evaluation. Technical report
DEM-ES-011, National Physical Laboratory, 2006.
Cox, M. G., and Harris, P. M. Software specifications for uncertainty evaluation. Technical report
DEM-ES-010, National Physical Laboratory, 2006.
Grabe, M., Measurement Uncertainties in Science and Technology, Springer 2005.
Grabe, M., Generalized Gaussian Error Calculus, Springer 2010.
Dietrich, C. F. Uncertainty, Calibration and Probability. Adam Hilger, Bristol, UK, 1991.
NIST. Uncertainty of measurement results.
Bich, W., Cox, M. G., and Harris, P. M. Evolution of the "Guide to the Expression of Uncertainty in
Measurement". Metrologia, 43(4):S161-S166, 2006.
EA. Expression of the uncertainty of measurement in calibration. Technical Report EA-4/02, European
Co-operation for Accreditation, 1999.
Elster, C., and Toman, B. Bayesian uncertainty analysis under prior ignorance of the measurand versus
analysis using Supplement 1 to the Guide: a comparison. Metrologia, 46:261-266, 2009.
Ferson, S., Kreinovich, V., Hajagos, J., Oberkampf, W., and Ginzburg, L. 2007. "Experimental
Uncertainty Estimation and Statistics for Data Having Interval Uncertainty". SAND2007-0939.
Lira, I. Evaluating the Uncertainty of Measurement. Fundamentals and Practical Guidance. Institute of
Physics, Bristol, UK, 2002.
Majcen N., Taylor P. (Editors), Practical examples on traceability, measurement uncertainty and
validation in chemistry, Vol 1, 2010; ISBN 978-92-79-12021-3.
UKAS. The expression of uncertainty in EMC testing. Technical Report LAB34, United Kingdom
Accreditation Service, 2002.
UKAS M3003 The Expression of Uncertainty and Confidence in Measurement (Edition 3, November
2012) UKAS
NPLUnc
Estimate of temperature and its uncertainty in small systems, 2011.
ASME PTC 19.1, Test Uncertainty, New York: The American Society of Mechanical Engineers; 2005
Introduction to evaluating uncertainty of measurement
Da Silva, R.B., Bulska, E., Godlewska-Zylkiewicz, B., Hedrich, M., Majcen, N., Magnusson, B., Marincic,
S., Papadakis, I., Patriarca, M., Vassileva, E., Taylor, P., Analytical measurement: measurement
uncertainty and statistics; ISBN 978-92-79-23070-7, 2012.
Arnaut, L. R. Measurement uncertainty in reverberation chambers - I. Sample statistics. Technical
report TQE 2, 2nd ed., National Physical Laboratory, 2008.
Estimation of measurement uncertainty in chemical analysis (analytical chemistry) On-line course
created by I. Leito, L. Jalukse and I. Helm, University of Tartu, 2013.

System of measurement
From Wikipedia, the free encyclopedia

A system of measurement is a set of units which can be used to specify anything which can be
measured. Historically, systems of measurement were important, regulated and defined because of
trade and internal commerce. In modern systems of measurement, some quantities are designated as
fundamental units, meaning all other needed units can be derived from them, whereas in the early
and most historic eras, the units were given by fiat (see statutory law) by the ruling entities and were
not necessarily well inter-related or self-consistent.
Contents
1 History
1.1 Current practice
2 Metric system
3 Imperial and US customary units
4 Natural units
5 Non-standard units
5.1 Area
5.2 Energy
6 Units of currency
7 Historical systems of measurement
7.1 Afroasia
7.2 Asia
7.3 Europe
8 See also
8.1 Conversion tables
9 Notes and references
10 Bibliography
11 External links

History[edit]
Main article: History of measurement

Although we might suggest that the Egyptians had discovered the art of measurement, it is only with
the Greeks that the science of measurement begins to appear. The Greeks' knowledge of geometry,
and their early experimentation with weights and measures, soon began to place their measurement
system on a more scientific basis. By comparison, Roman science, which came later, was not as
advanced...[1]

The French Revolution gave rise to acceptance of the metric system, and this has spread around the
world, replacing most customary units of measure. In most systems, length (distance), weight, and
time are fundamental quantities; in science, mass is now accepted as the more basic parameter and is
substituted for weight. Some systems have changed to recognize the improved relationship, notably
the 1824 legal changes to the imperial system.

Later science developments showed that either electric charge or electric current may be added to
complete a minimum set of fundamental quantities by which all other metrological units may be
defined. (However, electrical units are not necessary for a minimum set. Gaussian units, for example,
have only length, mass, and time as fundamental quantities.) Other quantities, such as power, speed,
etc. are derived from the fundamental set; for example, speed is distance per unit time. Historically a
wide range of units was used for the same quantity: in several cultural settings, length was measured
in inches, feet, yards, fathoms, rods, chains, furlongs, miles, nautical miles, stadia and leagues, with
conversion factors which were not simple powers of ten or even simple fractions within a given
customary system. Nor were they necessarily the same units (or equal units) between different
members of similar cultural backgrounds. It must be understood by the modern reader that,
historically, measurement systems were perfectly adequate within their own cultural milieu, and that
the understanding that a better, more universal system (based on more rational and fundamental
units) was desirable only gradually spread with the maturation and appreciation of the rigor
characteristic of Newtonian physics. Moreover, changing a measurement system has real fiscal and
cultural costs as well as the advantages that accrue from replacing one measuring system with a
better one.

Once the analysis tools within that field were appreciated and came into widespread use in the
emerging sciences, especially in the applied sciences like civil and mechanical engineering, pressure
built up for conversion to a common basis of measurement. As people increasingly appreciated these
needs and the difficulties of converting between numerous national customary systems became more
widely recognised there was an obvious justification for an international effort to standardise
measurements. The French Revolutionary spirit took the first significant and radical step down that
road.

In antiquity, systems of measurement were defined locally: the different units were defined
independently, according to the length of a king's thumb or the size of his foot, the length of a stride,
the length of an arm, or by custom, such as the weight of water in a keg of specific size, perhaps itself
defined in hands and knuckles. The unifying characteristic is that there was some definition based on
some standard, however egocentric or amusing it may now seem when viewed with eyes used to
modern precision. Eventually cubits and strides gave way, under need and demand from merchants,
and evolved into customary units.

In the metric system and other recent systems, a single basic unit is used for each fundamental
quantity. Often secondary units (multiples and submultiples) are used which convert to the basic units
by multiplying by powers of ten, i.e., by simply moving the decimal point. Thus the basic metric unit of
length is the metre; a distance of 1.234 m is 1234.0 millimetres, or 0.001234 kilometres.
Current practice[edit]
Main article: Metrication

Metrication is complete or nearly complete in almost all countries of the world. US customary units
are heavily used in the United States and, to some degree, in Liberia. Traditional Burmese units of
measurement are used in Burma. U.S. units are used in limited contexts in Canada due to a high
degree of trade; additionally there is considerable use of Imperial weights and measures, despite de
jure Canadian conversion to metric.

A number of other jurisdictions have laws mandating or permitting other systems of measurement in
some or all contexts, such as the United Kingdom where for example its road signage legislation only
allows distance signs displaying imperial units (miles or yards)[2] or Hong Kong.[3]

In the United States, metric units are used almost universally in science, widely in the military, and
partially in industry, but customary units predominate in household use. At retail stores, the liter is a
commonly used unit for volume, especially on bottles of beverages, and milligrams are used to
denominate the amounts of medications, rather than grains. Also, other standardized measuring
systems other than metric are still in universal international use, such as nautical miles and knots in
international aviation and shipping.
Metric system[edit]
Main articles: Metric system and International System of Units

A baby bottle that measures in three measurement systems: imperial (UK), US customary, and
metric.

Metric systems of units have evolved since the adoption of the first well-defined system in France in
1795. During this evolution the use of these systems has spread throughout the world, first to non-
English-speaking countries, and then to English speaking countries.

Multiples and submultiples of metric units are related by powers of ten and their names are formed
with prefixes. This relationship is compatible with the decimal system of numbers and it contributes
greatly to the convenience of metric units.

In the early metric system there were two fundamental or base units, the metre for length and the
gram for mass. The other units of length and mass, and all units of area, volume, and compound units
such as density were derived from these two fundamental units.

Mesures usuelles (French for customary measurements) were a system of measurement introduced to
act as a compromise between the metric system and traditional measurements. It was used in France
from 1812 to 1839.

A number of variations on the metric system have been in use. These include gravitational systems,
the centimetre-gram-second system (cgs) useful in science, the metre-tonne-second system (mts)
once used in the USSR, and the metre-kilogram-second system (mks).

The current international standard metric system is the International System of Units (Système
international d'unités, or SI). It is an mks system based on the metre, kilogram and second as well as
the kelvin, ampere, candela, and mole.

The SI includes two classes of units which are defined and agreed internationally. The first of these
classes are the seven SI base units for length, mass, time, temperature, electric current, luminous
intensity and amount of substance. The second of these are the SI derived units. These derived units
are defined in terms of the seven base units. All other quantities (e.g. work, force, power) are
expressed in terms of SI derived units.
Imperial and US customary units[edit]
Main articles: Imperial and US customary measurement systems, Imperial units, and US customary
units

Both imperial units and US customary units derive from earlier English units. Imperial units were
mostly used in the British Commonwealth and the former British Empire but in most Commonwealth
countries they have been largely supplanted by the metric system. They are still used for some
applications in the United Kingdom but have been mostly replaced by the metric system in
commercial, scientific, and industrial applications. US customary units, however, are still the main
system of measurement in the United States. While some steps towards metrication have been made
(mainly in the late 1960s and early 1970s), the customary units have a strong hold due to the vast
industrial infrastructure and commercial development.
While imperial and US customary systems are closely related, there are a number of differences
between them. Units of length and area (the inch, foot, yard, mile etc.) are identical except for
surveying purposes. The Avoirdupois units of mass and weight differ for units larger than a pound
(lb.). The imperial system uses a stone of 14 lb., a long hundredweight of 112 lb. and a long ton of
2240 lb. The stone is not used in the US, and the hundredweight and ton are short, being 100 lb. and
2000 lb. respectively.

Where these systems most notably differ is in their units of volume. A US fluid ounce (fl oz), c. 29.6
millilitres (ml), is slightly larger than the imperial fluid ounce (28.4 ml). However, as there are 16 US fl
oz to a US pint and 20 imp fl oz to an imperial pint, the imperial pint is about 20% larger. The same is
true of quarts, gallons, etc.; six US gallons are a little less than five imperial gallons.
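The volume comparison can be checked with a line of arithmetic using the approximate fluid-ounce
sizes quoted above (the Python snippet below is only that check).

us_pint  = 16 * 29.6    # 16 US fl oz of about 29.6 ml each  -> roughly 473 ml
imp_pint = 20 * 28.4    # 20 imp fl oz of about 28.4 ml each -> roughly 568 ml
print(imp_pint / us_pint)   # about 1.2, i.e. the imperial pint is roughly 20% larger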

The Avoirdupois system served as the general system of mass and weight. In addition to this there are
the Troy and the Apothecaries' systems. Troy weight was customarily used for precious metals, black
powder and gemstones. The troy ounce is the only unit of the system in current use; it is used for
precious metals. Although the troy ounce is larger than its Avoirdupois equivalent, the pound is
smaller. The obsolete troy pound was divided into twelve ounces, as opposed to the sixteen ounces
per pound of the Avoirdupois system. The Apothecaries' system, traditionally used in pharmacology
and now replaced by the metric system, shares the same pound and ounce as the troy system but
with different further subdivisions.
Natural units[edit]

Natural units are physical units of measurement defined in terms of universal physical constants in
such a manner that some chosen physical constants take on the numerical value of one when
expressed in terms of a particular set of natural units. Natural units are natural because the origin of
their definition comes only from properties of nature and not from any human construct. Various
systems of natural units are possible. Below are listed some examples.
Geometric unit systems are useful in relativistic physics. In these systems the base physical units are
chosen so that the speed of light and the gravitational constant are set equal to unity.
Planck units are a form of geometric units obtained by also setting Boltzmann's constant, the Coulomb
force constant and the reduced Planck constant to unity. They might be considered unique in that
they are based only on properties of free space rather than any prototype, object or particle.
Stoney units are similar to Planck units but set the elementary charge to unity and allow Planck's
constant to float.
"Schrdinger" units are also similar to Planck units and set the elementary charge to unity too but
allow the speed of light to float.
Atomic units (au) are a convenient system of units of measurement used in atomic physics,
particularly for describing the properties of electrons. The atomic units have been chosen such that
the fundamental electron properties are all equal to one atomic unit. They are similar to
"Schrdinger" units but set the electron mass to unity and allow the gravitational constant to float.
The unit energy in this system is the total energy of the electron in the Bohr atom and called the
Hartree energy. The unit length is the Bohr radius.
Electronic units are similar to Stoney units but set the electron mass to unity and allow the
gravitational constant to float. They are also similar to Atomic units but set the speed of light to unity
and allow Planck's constant to float.
Quantum electrodynamical units are similar to the electronic system of units except that the proton
mass is normalised rather than the electron mass.
Non-standard units[edit]

Non-standard measurement units, sometimes found in books, newspapers etc., include:


Area[edit]
The American football field, which has a playing area 100 yards (91.4 m) long by 160 feet (48.8 m)
wide. This is often used by the American public media for the sizes of large buildings or parks: easily
walkable but non-trivial distances. Note that it is used both as a unit of length (100 yd or 91.4 m, the
length of the playing field excluding goal areas) and as a unit of area (57,600 sq ft or 5,350 m2), about
1.32 acres (0.53 ha).
British media also frequently uses the football pitch for equivalent purposes, although soccer pitches
are not of a fixed size, but instead can vary within defined limits (100 to 130 yd or 91.4 to 118.9 m
long, and 50 to 100 yd or 45.7 to 91.4 m wide, giving an area of 5,000 to 13,000 sq yd or 4,181 to
10,870 m2). However, the UEFA Champions League field must be exactly 105 by 68 m (114.83 by
74.37 yd), giving an area of 7,140 m2 (0.714 ha) or 8,539 sq yd (1.764 acres). Example: "HSS vessels are
aluminium catamarans about the size of a football pitch..." - Belfast Telegraph, 23 June 2007
Energy[edit]
A ton of TNT equivalent, and its multiples the kiloton, the megaton, and the gigaton. Often used in
stating the power of very energetic events such as explosions, volcanic events, earthquakes and
asteroid impacts. A gram of TNT as a unit of energy has been defined as 1000 thermochemical
calories (1,000 cal or 4,184 J).
The atom bomb dropped on Hiroshima. Its force is often used in the public media and popular books
as a unit of energy. (Its yield was roughly 13 kilotons, or 60 TJ.)
One stick of dynamite
Units of currency[edit]

A unit of measurement that applies to money is called a unit of account. This is normally a currency
issued by a country or a fraction thereof; for instance, the US dollar and US cent (1/100 of a dollar), or
the euro and euro cent.

ISO 4217 is the international standard, established by the International Organization for
Standardization (ISO), describing the three-letter codes (also known as currency codes) that define the
names of currencies.
Historical systems of measurement[edit]
Main article: History of measurement

Throughout history, many official systems of measurement have been used. While no longer in official
use, some of these customary systems are occasionally used in day to day life, for instance in cooking.
Afroasia[edit]
Arabic[4]
Egyptian
Hebrew (Biblical and Talmudic)
Maltese
Mesopotamian
Asia[edit]
See also: history of measurement systems in India
Chinese
Hindu
Japanese
Persian
Taiwanese
Tamil
Thai
Vietnamese
Nepalese
Europe[edit]
Danish
Dutch
English
Finnish
French (now)
French (to 1795)
German
Greek
Norwegian
Polish
Portuguese
Roman
Romanian
Russian
Scottish
Spanish
Swedish
Tatar
Welsh

See also[edit]
Megalithic yard
Pseudoscientific metrology
Units of measurement
Weights and measures
Conversion tables[edit]
Approximate conversion of units
Conversion of units
Notes and references[edit]
^ Quoted from the Canada Science and Technology Museum website
^ "Statutory Instrument 2002 No. 3113 The Traffic Signs Regulations and General Directions 2002".
Her Majesty's Stationery Office (HMSO). 2002. Retrieved 18 March 2010.
^ HK Weights and Measures Ordinance
^ M. Ismail Marcinkowski, Measures and Weights in the Islamic World. An English Translation of
Professor Walther Hinz's Handbook Islamische Maße und Gewichte, with a foreword by Professor
Bosworth, F.B.A. Kuala Lumpur, ISTAC, 2002, ISBN 983-9379-27-5. This work is an annotated
translation of a work in German by the late German orientalist Walther Hinz, published in the
Handbuch der Orientalistik, erste Abteilung, Ergänzungsband I, Heft 1, Leiden, The Netherlands: E. J.
Brill, 1970.
Bibliography[edit]
Tavernor, Robert (2007), Smoot's Ear: The Measure of Humanity, ISBN 0-300-12492-9
External links[edit]
CLDR - Unicode localization of currency, date, time, numbers
A Dictionary of Units of Measurement
Old units of measure
Measures from Antiquity and the Bible
Reasonover's Land Measures - A Reference to Spanish and French land measures (and their English
equivalents with conversion tables) used in North America
The Unified Code for Units of Measure
