
UNCERTAINTY ANALYSIS

COURSE TITLE: ME - 455 MEASUREMENT AND INSTRUMENTATION

OBJECTIVES OF LECTURE

MEASUREMENT UNCERTAINTY
UNCERTAINTY ANALYSIS
MEASUREMENT ERRORS
DESIGN-STAGE UNCERTAINTY ANALYSIS
IDENTIFYING ERROR SOURCES

MEASUREMENT UNCERTAINTY
In metrology, measurement uncertainty is a non-negative parameter
characterizing the dispersion of the values attributed to a measured
quantity.

All measurements are subject to uncertainty, and a measurement result is
complete only when it is accompanied by a statement of the associated
uncertainty. By international agreement, this uncertainty has a probabilistic
basis and reflects incomplete knowledge of the quantity value.

UNCERTAINTY ANALYSIS
Whenever we plan a test, we need to know something about the quality of the results.
Uncertainty analysis provides a methodical approach to estimating the quality of the results
from an anticipated test or from a completed test.
Errors are a property of the measurement. Measurement is the process of assigning a
value to a physical variable based on a sampling from the population of that variable.
Error causes a difference between the value assigned by measurement and the true
value of the population of the variable.
Measurement errors are introduced from various elements, for example, the individual
instrument calibrations, the finite statistics of the data set, and the measurement approach
used. But because we do not know the true value and know only the measured values, we do
not know the exact values of the errors. Instead, we draw on what we do know about the
measurement to estimate a range of probable error. This estimate is an assigned value
called the uncertainty.

UNCERTAINTY ANALYSIS
The uncertainty describes an interval about the measured value within which we
suspect that the true value must fall with a stated probability.
Uncertainty analysis is the process of identifying, quantifying, and combining the
errors.
Uncertainty is a property of the result. The outcome of a measurement is a
result, and the uncertainty quantifies the quality of that result. Uncertainty
analysis provides a powerful design tool for evaluating different measurement
systems and methods, designing a test plan, and reporting uncertainty.
Errors are effects, and uncertainties are numbers. While errors are the effects
that cause a measured value to differ from the true value, the uncertainty is an
assigned numerical value that quantifies the probable range of these errors.
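As an illustration of this interval interpretation, a measured result is commonly reported as the best estimate plus or minus its uncertainty at a stated probability. The notation and numbers below are a generic example, not values taken from this lecture:

```latex
x' = \bar{x} \pm u_x \quad (P\%)
\qquad \text{e.g.} \qquad
T' = 25.2\ ^{\circ}\mathrm{C} \pm 0.3\ ^{\circ}\mathrm{C} \quad (95\%)
```

Read this way, the example statement claims that the true value of T is expected to lie within 0.3 °C of 25.2 °C with 95% probability.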

UNCERTAINTY ANALYSIS
There are two accepted professional documents on uncertainty analysis.
The American National Standards Institute / American Society of
Mechanical Engineers (ANSI / ASME) Power Test Codes (PTC) 19.1 Test
Uncertainty (2) is the United States engineering test standard, and our
approach favors that method.
The International Organization for Standardization's Guide to the
Expression of Uncertainty in Measurement (ISO GUM) (1) is an
international metrology standard.
The two differ in some terminology and how errors are cataloged. For
example, PTC 19.1 refers to random and systematic errors, terms that
classify errors by how they manifest themselves in the measurement. ISO
GUM refers to type A and type B errors, terms that classify errors by how
their uncertainties are estimated.

MEASUREMENT ERRORS

Errors are grouped into two categories: Systematic Error and Random Error.

A systematic error remains constant in repeated measurements under fixed operating conditions. A
systematic error may cause either a high or a low offset in the estimate of the true value of the measured
variable. Because its effect is constant, it can be difficult to estimate the value of a systematic error or in
many cases even recognize its presence. Accordingly, an estimate of the range of systematic error is
represented by an interval, defined as ±b. The value b is the estimate of the systematic standard
uncertainty.
Systematic errors in experimental observations usually come from the measuring instruments. They may
occur because:
there is something wrong with the instrument or its data handling system, or
because the instrument is wrongly used by the experimenter.
Two types of systematic error can occur with instruments having a linear response:
Offset or zero setting error in which the instrument does not read zero when the quantity to be measured
is zero.
Multiplier or scale factor error in which the instrument consistently reads changes in the quantity to be
measured greater or less than the actual changes.
When repeated measurements are made under fixed operating conditions, random errors manifest
themselves as scatter of the measured data. Random error is introduced through the repeatability and
resolution of the measurement system components, calibration, and measurement procedure and
technique; by the measured variable's own temporal and spatial variations; and by the variations in the
process operating and environmental conditions from which measurements are taken.
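To make the distinction concrete, the short sketch below simulates repeated readings of a fixed true value with an assumed constant offset (zero error), an assumed scale-factor error, and random scatter. The offset, scale factor, and noise level are hypothetical assumptions chosen only to show that the systematic part stays fixed from reading to reading while the random part produces scatter.

```python
import random
import statistics

# Hedged sketch: simulated repeated readings of a fixed true value.
# Offset, scale factor, and noise level are hypothetical assumptions.
TRUE_VALUE = 10.0      # true value of the measured variable
OFFSET = 0.15          # systematic zero (offset) error, constant every reading
SCALE = 1.02           # systematic scale-factor error (reads 2% high)
NOISE_SD = 0.05        # standard deviation of the random error

def read_instrument(true_value):
    """One simulated reading: systematic errors are fixed, random error varies."""
    return SCALE * true_value + OFFSET + random.gauss(0.0, NOISE_SD)

readings = [read_instrument(TRUE_VALUE) for _ in range(20)]
mean = statistics.mean(readings)
spread = statistics.stdev(readings)

# The mean is offset from the true value (systematic effect); the standard
# deviation reflects the scatter caused by random error.
print(f"mean = {mean:.3f}, scatter (std dev) = {spread:.3f}, true = {TRUE_VALUE}")
```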

MEASUREMENT ERRORS
Consider the repeated measurement of a variable under conditions that are
expected to produce the same value of the measured variable. The relationship
between the true value of the population and the measured data set, containing
both systematic and random errors, can be illustrated as in the figure below.

Figure: Distribution of errors on repeated measurements

DESIGN-STAGE UNCERTAINTY ANALYSIS


Design-stage uncertainty analysis refers to an analysis performed in the
formulation stage prior to a test.
It provides only an estimate of the minimum uncertainty based on the
instruments and method chosen.
If this uncertainty value is too large, then alternate approaches will need to
be

found.

So,

it

is

useful

for

selecting

instruments

and

selecting

measurement techniques.
At the test design stage, the measurement system and associated
procedures may be but a concept.

DESIGN-STAGE UNCERTAINTY ANALYSIS


Uncertainty analysis at this time is used to assist in selecting equipment
and test procedures based on their relative performance.
In the design stage, distinguishing between systematic and random errors
might be too difficult to be of concern. So for this initial discussion,
consider only sources of error and their assigned uncertainty in general.
A measurement system usually consists of sensors and instruments, each
with its respective contribution to system uncertainty.
We first discuss individual contributions to uncertainty.

DESIGN-STAGE UNCERTAINTY ANALYSIS


Even when all errors are zero, a measured value must be affected by our ability to
resolve the information provided by the instrument.
This zero-order uncertainty of the instrument, u0, assumes that the variation
expected in the measured values will be only that amount due to instrument
resolution and that all other aspects of the measurement are perfectly controlled.
Essentially, u0 is an estimate of the expected random uncertainty caused by the data
scatter due to instrument resolution.
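As a brief illustration, the sketch below assigns the zero-order uncertainty as one-half of the instrument resolution and combines it in quadrature with an instrument (catalog) uncertainty to give a design-stage estimate. The resolution, the catalog figure, the reading, and the root-sum-square combination shown here are assumptions for illustration, not values or formulas quoted from this lecture.

```python
import math

# Hedged sketch: zero-order (resolution) uncertainty and a design-stage
# estimate for a hypothetical digital voltmeter. All values are assumed.
resolution = 0.01          # V, smallest increment the display can resolve
u0 = resolution / 2        # zero-order uncertainty: half the resolution

# Assumed instrument (catalog) uncertainty, e.g. quoted by the manufacturer
# as a percentage of reading -- here applied to a 5.00 V reading.
reading = 5.00             # V
uc = 0.003 * reading       # assumed 0.3% of reading

# Design-stage estimate: combine the contributions in quadrature (RSS),
# a common practice when the error sources are treated as independent.
ud = math.sqrt(u0**2 + uc**2)

print(f"u0 = ±{u0:.4f} V, uc = ±{uc:.4f} V, ud ≈ ±{ud:.4f} V")
```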

IDENTIFYING ERROR SOURCES


Design-stage uncertainty provides essential information to assess instrument
selection and, to a limited degree, the measurement approach.
But it does not address all of the possible errors that influence a measured result.
Here we provide a helpful checklist of common errors. It is not necessary to
classify error sources as we do here, but it is a good bookkeeping practice.
Consider the measurement process as consisting of three distinct stages:
calibration, data acquisition, and data reduction.
Errors that enter during each of these steps can be grouped under their
respective error source heading: (1) calibration errors, (2) data-acquisition
errors, and (3) data-reduction errors. A bookkeeping sketch follows.
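One common bookkeeping approach, consistent with the source grouping above, is to estimate an uncertainty for each elemental error, combine the elements within each group in quadrature, and then combine the group totals the same way. The group names follow the lecture; the numerical values and the root-sum-square combination are hypothetical assumptions used only to show the bookkeeping.

```python
import math

# Hedged sketch: bookkeeping of elemental uncertainties by error source group.
# All numerical values are hypothetical placeholders, not from the lecture.
elemental = {
    "calibration":      [0.02, 0.01],         # e.g. standard, calibration process
    "data_acquisition": [0.03, 0.015, 0.01],  # e.g. sensor, installation, variations
    "data_reduction":   [0.005],              # e.g. curve-fit error
}

def rss(values):
    """Root-sum-square combination of independent uncertainty estimates."""
    return math.sqrt(sum(v**2 for v in values))

group_totals = {group: rss(vals) for group, vals in elemental.items()}
overall = rss(group_totals.values())

for group, u in group_totals.items():
    print(f"{group:16s}: ±{u:.4f}")
print(f"{'overall':16s}: ±{overall:.4f}")
```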

IDENTIFYING ERROR SOURCES


CALIBRATION ERRORS

Calibration in itself does not eliminate system errors but it can help to quantify the
uncertainty in the particular pieces of equipment used. Calibration errors include those
elemental errors that enter the measuring system during its calibration. Calibration errors
tend to enter through three sources:
(1) the standard or reference value used in the calibration,
(2) the instrument or system under calibration, and
(3) the calibration process.
For example, the laboratory standard used for calibration contains some inherent uncertainty,
and this is passed along with the input value on which the calibration is based. Measuring
system errors, such as linearity, repeatability, hysteresis, and so forth, contribute uncertainty.

IDENTIFYING ERROR SOURCES


DATA-ACQUISITION ERRORS

An error that arises during the act of measurement is listed as a
data-acquisition error. These errors include sensor and instrument errors
unaccounted for by calibration; uncontrolled variables, such as changes or
unknowns in measurement system operating conditions; and sensor installation
effects on the measured variable. In addition, the quality of the statistics
of sampled variables is affected by sample size and assumed distributions, and
any temporal and spatial variations, all contributing to uncertainty. We list
some common elemental errors from this source in Table 5.2.

IDENTIFYING ERROR SOURCES


DATA-REDUCTION ERRORS

Curve fits and correlations with their associated unknowns (Section 4.6)
introduce data-reduction errors into test results. Also, truncation errors,
interpolation errors, and errors from assumed models or functional
relationships affect the quality of a result. We list elemental errors typical
of this error source group in Table 5.3.
