
Characteristics of measurement systems

• To choose the instrument most suited to a particular measurement application, we have to know the system characteristics.
• The performance characteristics may be broadly divided into two groups, namely ‘static’ and ‘dynamic’
characteristics.
• Static characteristics

• the performance criteria for the measurement of quantities that remain constant, or vary only quite slowly.

• Dynamic characteristics

• the relationship between the system input and output when the measured quantity (measurand) is varying rapidly.
Static Performance of Instrument

• Range/Full Scale
• Span
• Linearity
• Sensitivity
• Sensitivity to disturbance
• Dead space/Threshold
• Resolution
• Accuracy
• Precision/repeatability/reproducibility
• Tolerance
Range/Full-scale
• The range or span defines the minimum and maximum values of a quantity that the instrument is designed to measure.
• The full-scale (f.s.) of an instrument is defined as the arithmetic difference between the end points of the range, expressed in units of the output variable.
• The input range defines the minimum and maximum value of the variable to measure.
• The output range defines the minimum and maximum value of the signal given by the transducer.

Example: for a range of -20 °C to 60 °C, the full-scale reading is f.s. = 60 − (−20) = 80 °C.
Span
• The input span is the maximum change of the input and the output span is the maximum change of the output.
• Input span: Imax − Imin
• Output span: Omax − Omin

Linearity
• It is normally desirable that the output reading of an instrument is linearly proportional to the quantity being measured.
• An instrument is considered linear if the relationship between output and input can be fitted by a straight line.
• In the figure shown, the x marks show a plot of the typical output readings of an instrument when a sequence of input quantities is applied to it.
• By fitting a straight line through these marks, it is clear that the device can be considered linear. The ideal straight-line relationship is:

O = Omin + ((Omax − Omin) / (Imax − Imin)) × (I − Imin)

• Non-linearity is defined as the maximum deviation of the output from the straight line.

Non-linearity can be quoted as: N(I) = O(I) − (K·I + a)

Maximum % non-linearity = (Nmax(I) / (Omax − Omin)) × 100
Linearity
• Example
The deflection vs. load of a spring scale was recorded as in the table below. Find the non-linearity of the spring scale.

Load (N)  Deflection (mm)
0         0
1         10
2         22
3         28
4         40

Least-squares fit of the line O = aI + b:

      xi    yi    xi*xi   xi*yi
      0     0     0       0
      1     10    1       10
      2     22    4       44
      3     28    9       84
      4     40    16      160
sum   10    100   30      298
mean  2     20

a = (Σxi·yi − n·xm·ym) / (Σxi² − n·xm²) = (298 − 5×2×20) / (30 − 5×2²) = 9.8 mm/N
b = ym − a·xm = 20 − 9.8×2 = 0.4 mm

xi    yi    a·xi + b   deviation
0     0     0.4        -0.4
1     10    10.2       -0.2
2     22    20.0       2.0
3     28    29.8       -1.8
4     40    39.6       0.4

Nonlinearity = (max. deviation)/f.s. × 100% = (2/40) × 100 = 5%
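The least-squares fit and the non-linearity figure above can be reproduced numerically. A minimal sketch in Python; the data are taken from the table, and the variable names are my own:

```python
# Least-squares straight-line fit and non-linearity of the spring scale.
loads = [0, 1, 2, 3, 4]            # input I (N)
deflections = [0, 10, 22, 28, 40]  # output O (mm)

n = len(loads)
xm = sum(loads) / n                # mean of inputs
ym = sum(deflections) / n          # mean of outputs
sxy = sum(x * y for x, y in zip(loads, deflections))
sxx = sum(x * x for x in loads)

a = (sxy - n * xm * ym) / (sxx - n * xm * xm)  # slope, mm/N
b = ym - a * xm                                # intercept, mm

# Deviation of each reading from the fitted line, and % non-linearity
deviations = [y - (a * x + b) for x, y in zip(loads, deflections)]
full_scale = max(deflections) - min(deflections)
nonlinearity = max(abs(d) for d in deviations) / full_scale * 100

print(a, b, nonlinearity)  # 9.8 mm/N, 0.4 mm, 5.0 %
```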
What is a linear system?
• If the input is doubled, the output is doubled. If the input is multiplied by a scaling factor, the output is multiplied by the same factor. This is called the scaling property.

A    → System → F(A)
2A   → System → 2·F(A)
αA   → System → α·F(A)

A linear system satisfies the scaling property: F(αA) = αF(A)
What is a linear system?
• The output for the sum of two or more inputs is the sum of the outputs for each individual input. This is called the superposition property.

A     → System → F(A)
B     → System → F(B)
A+B   → System → F(A) + F(B)

A linear system satisfies the superposition property: F(A+B) = F(A) + F(B)
Linear systems
In general, a linear system satisfies both the scaling and the superposition properties:

A        → System → F(A)
B        → System → F(B)
αA+βB    → System → α·F(A) + β·F(B)

F(αA + βB) = αF(A) + βF(B)
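The two properties can be checked numerically for a candidate system. A minimal sketch; the gain-only system F and the affine system G are my own illustrative choices (G fails both tests because of its constant offset):

```python
# Numerically checking the scaling and superposition properties.
def F(x):
    return 3.0 * x          # pure gain: a linear system

def G(x):
    return 3.0 * x + 1.0    # affine, NOT linear in the systems sense

def is_linear(f, a=2.0, b=-5.0, alpha=4.0, beta=0.5, tol=1e-9):
    # scaling: f(alpha*a) must equal alpha*f(a)
    scaling = abs(f(alpha * a) - alpha * f(a)) < tol
    # superposition: f(a+b) must equal f(a) + f(b)
    superposition = abs(f(a + b) - (f(a) + f(b))) < tol
    return scaling and superposition

print(is_linear(F))  # True
print(is_linear(G))  # False
```

Note that G(x) = 3x + 1 looks "straight-line linear" but fails both properties; an instrument with a zero offset is affine, and the offset must be subtracted before the system behaves linearly.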
Sensitivity
• The sensitivity of measurement is a measure of the change in instrument output that occurs when the quantity being measured changes by a given amount.
• Thus, sensitivity is the ratio:

Sensitivity = (change in instrument output) / (change in measured quantity)
Example
The following resistance values of a platinum resistance thermometer were measured at a range of temperatures. Determine the measurement sensitivity of the instrument in Ω/°C.

Temperature (°C)  Resistance (Ω)
200               307
230               314
260               321
290               328

Solution:
• By plotting these values on a graph, it can be seen that the relationship between temperature and resistance is a straight line.
• For a 30 °C change in temperature, the change in resistance is 7 Ω. Hence the measurement sensitivity is 7/30 = 0.233 Ω/°C.
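The same result can be computed from the table. A minimal sketch; because the characteristic is exactly linear here, any pair of points gives the slope:

```python
# Sensitivity of the platinum resistance thermometer from the table above.
temps = [200, 230, 260, 290]        # °C
resistances = [307, 314, 321, 328]  # Ω

# Slope of the (linear) characteristic = sensitivity in Ω/°C
sensitivity = (resistances[-1] - resistances[0]) / (temps[-1] - temps[0])
print(sensitivity)  # 7/30 ≈ 0.233 Ω/°C
```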
Sensitivity to disturbance
• All calibrations and specifications of an instrument are only valid under controlled
conditions of temperature, pressure etc.

• As variations occur in the ambient temperature etc., certain instrument


characteristics change. The sensitivity to disturbance is a measure of the
magnitude of this change.

• Such environmental changes affect instruments in two main ways, known as zero
drift and sensitivity drift.

Sensitivity to disturbance
Zero drift or Bias
• Zero drift is sometimes known by the term bias. This describes the effect where the zero reading of an instrument is modified by a change in ambient conditions.
• This causes a constant error that exists over the full range of measurement of the instrument. Zero drift is normally removable by calibration.

The mechanical bathroom scale is a common example of an instrument that is prone to bias. It is quite usual to find that there is a reading of perhaps 1 kg with no one standing on the scale.
Sensitivity to disturbance
Zero drift or Bias
• Zero drift related to temperature changes is measured in units/°C. This is often called the temperature zero drift coefficient.
• If the characteristic of an instrument is sensitive to several environmental parameters, then it will have several zero drift coefficients, one for each environmental parameter.

A typical change in the output characteristic of a pressure gauge subject to zero drift
Sensitivity to disturbance
Sensitivity Drift
• Also known as scale factor drift.
• Defines the amount by which an instrument's sensitivity of measurement varies as ambient conditions change.
• Quantified by sensitivity drift coefficients that define how much drift there is for a unit change in each environmental parameter that the instrument characteristics are sensitive to.

Change in the output characteristic due to sensitivity drift
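Both drift coefficients can be quantified by comparing the instrument's calibration line at a reference condition with the line after an ambient change. A minimal sketch; the 20 °C/50 °C conditions and the zero/slope values are hypothetical, not taken from the slides:

```python
# Zero drift and sensitivity drift coefficients from two calibration lines.
# Characteristic at each ambient temperature: output = slope * input + zero
ref = {"zero": 0.0, "slope": 1.00}  # calibration at 20 °C (hypothetical)
hot = {"zero": 0.5, "slope": 1.02}  # calibration at 50 °C (hypothetical)

dT = 50 - 20  # change in ambient temperature, °C

# Zero drift coefficient: shift of the zero reading per °C of ambient change
zero_drift_coeff = (hot["zero"] - ref["zero"]) / dT
# Sensitivity drift coefficient: change of the slope per °C of ambient change
sensitivity_drift_coeff = (hot["slope"] - ref["slope"]) / dT

print(zero_drift_coeff, sensitivity_drift_coeff)
```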
Classification of Drift
Threshold/Dead Space
• Dead space is the range of input values over which there is no change in output value. Any instrument that exhibits hysteresis also displays dead space.
• If the input to an instrument is gradually increased from zero, the input will have to reach a certain minimum level before the change in the instrument output reading is large enough to be detectable. This minimum level of input is known as the threshold of the instrument.
• Some manufacturers specify threshold for instruments as absolute values, whereas others quote threshold as a percentage of full-scale readings.
Hysteresis and Backlash
• Careful observation of the output/input relationship of a block will sometimes reveal different results as the signals vary in direction of the movement.
• Mechanical systems will often show a small difference in length as the direction of the applied force is reversed.
• The same effect arises as a magnetic field is reversed in a magnetic material.
• This characteristic is called hysteresis.
• Where this is caused by a mechanism that gives a sharp change, such as the looseness of a mechanical joint, it is easy to detect and is known as backlash.
Resolution
• Resolution is the smallest change in a physical property that an instrument can sense/detect reliably.
• Resolution should not be confused with accuracy, which is the closeness of the reading of the measured quantity to the true value/standard.
• When an instrument is showing a particular output reading, there is a lower limit on the magnitude of the change in the input measured quantity that produces an observable change in the instrument output.

When the needle of a car speedometer is between the scale markings, we cannot estimate speed more accurately than to the nearest 2.5 km/h. This represents the resolution of the instrument.
Resolution
• When an instrument is showing a particular output reading, there is a lower limit on the magnitude of the change in the input measured quantity that produces an observable change in the instrument output. That is, resolution is the smallest change in the measured quantity that can be detected.
• Resolution is sometimes specified as an absolute value and sometimes as a percentage of f.s. deflection.
• One of the major factors influencing the resolution of an instrument is how finely its output scale is divided into subdivisions (think of the position of the decimal point in a DMM display, for example).
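Converting an absolute resolution figure to a percentage of full scale is a one-line calculation. A sketch using the speedometer's 2.5 km/h figure from above; the 180 km/h full-scale value is my own assumption:

```python
# Expressing a resolution figure as a percentage of full-scale deflection.
resolution = 2.5    # km/h, smallest readable change between scale markings
full_scale = 180.0  # km/h, assumed full-scale reading (hypothetical)

resolution_pct = resolution / full_scale * 100
print(resolution_pct)  # ≈ 1.39 % of f.s.
```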
Accuracy
• The accuracy of an instrument is a measure of how close the output reading of the instrument is to the correct value. That is, how small the error in the reading is.
• Measurement uncertainty is the extent to which a reading might be wrong, and is often quoted as a percentage of the full-scale (f.s.) reading of an instrument.
Precision (repeatability)
• Precision is a term that describes how close repeated measurements of the same value of a measured variable are to one another.
• Repeated measurements of the same value can vary due to random errors. Thus, precision describes the instrument's degree of freedom from random errors.
• If a large number of readings are taken of the same quantity by a high precision instrument, then the spread of readings will be very small.
Precision (repeatability)
• Precision is often, though incorrectly,
confused with accuracy. High precision does
not imply anything about measurement
accuracy.
• A high precision instrument may have a low
accuracy.
• Low accuracy measurements from a high
precision instrument are normally caused by
a bias in the measurements, which is
removable by recalibration.
Tolerance
• Tolerance is a term that is closely related to accuracy and defines the
maximum error that is to be expected in some value.
• The accuracy of some instruments is sometimes quoted as a tolerance
figure.
• For instance, crankshafts are machined with a diameter tolerance
quoted as so many microns, and electric circuit components such as
resistors have tolerances of perhaps 5%. One resistor chosen at random
from a batch having a nominal value 1000Ω and tolerance 5% might
have an actual value anywhere between 950Ω and 1050Ω.
Accuracy/Tolerance
• The accuracy of an instrument is a measure of how close the output reading of the instrument is to the correct value.
• In practice, manufacturers quote the inaccuracy figure rather than the accuracy figure for an instrument. Inaccuracy is the extent to which a reading might be wrong, and is often quoted as a percentage of the full-scale (f.s.) reading of an instrument.

Example: a thermometer reading of 25 °C whose actual temperature could be anywhere between 23 °C and 27 °C.

Inaccuracy = (Max Error / Full scale reading) × 100%
Accuracy/Tolerance
• Example 3.1:
A pressure gauge with a range of 6-10 bar and an inaccuracy of ±3.0% is used to measure the pressure in a tank. If the reading of the gauge is 7.0 bar, what are the maximum and minimum values of the actual pressure in the tank?
Solution:
Max Error = (f.s.) × inaccuracy = (10 − 6) × ±3.0% = ±0.12 bar
Maximum value of the actual pressure = 7.0 + 0.12 = 7.12 bar
Minimum value of the actual pressure = 7.0 − 0.12 = 6.88 bar
• It is an important system design rule that instruments are chosen such that their range is appropriate to the spread of values being measured, in order that the best possible accuracy is maintained in instrument readings.
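Example 3.1 can be reworked in code; the range, inaccuracy and reading are those of the example:

```python
# Maximum possible error from a quoted inaccuracy figure (Example 3.1).
range_min, range_max = 6.0, 10.0  # bar
inaccuracy = 0.03                 # ±3.0 % of full scale
reading = 7.0                     # bar

full_scale = range_max - range_min       # 4 bar
max_error = full_scale * inaccuracy      # ±0.12 bar
p_max = reading + max_error              # 7.12 bar
p_min = reading - max_error              # 6.88 bar
print(p_min, p_max)
```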
Accuracy/Tolerance
• Tolerance is a term that is closely related to accuracy and defines the maximum error that is to be expected in some value.
• It is commonly used to describe the maximum deviation of a manufactured component from some specified value. For instance, crankshafts are machined with a diameter tolerance quoted as so many micrometers.
• One resistor chosen at random from a batch having a nominal value 1000 Ω and tolerance 5% might have an actual value anywhere between 950 Ω and 1050 Ω.

Example: for a shaft specified as 50 ± 0.1 mm:
Maximum allowable diameter: 50.1 mm
Minimum allowable diameter: 49.9 mm
Tolerance: 0.1 mm
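A tolerance figure translates directly into an acceptance band. A minimal sketch using the resistor example's nominal value and tolerance; the sample readings are hypothetical:

```python
# Checking whether measured components fall within a quoted tolerance.
nominal = 1000.0  # Ω, nominal resistor value
tolerance = 0.05  # ±5 %

lo = nominal * (1 - tolerance)  # 950 Ω, lower acceptance limit
hi = nominal * (1 + tolerance)  # 1050 Ω, upper acceptance limit

samples = [962.0, 1048.5, 1051.2, 999.9]  # hypothetical measured values, Ω
in_spec = [lo <= r <= hi for r in samples]
print(in_spec)  # [True, True, False, True]
```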
Precision/repeatability/reproducibility
• Precision is a term that describes an instrument's degree of freedom from random errors. If a large number of readings are taken of the same quantity by a high precision instrument, then the spread of readings will be very small.
• A high precision instrument may have a low accuracy. Low accuracy measurements from a high precision instrument are normally caused by a bias in the measurements, which is removable by recalibration.
• The degree of repeatability or reproducibility in measurements from an instrument is an alternative way of expressing its precision.
Precision/repeatability/reproducibility
• Repeatability describes the closeness of output readings when the same input is applied repetitively over a short period of time, with the same measurement conditions, same instrument and observer, same location and same conditions of use maintained throughout.
• Reproducibility describes the closeness of output readings for the same input when there are changes in the method of measurement, observer, measuring instrument, location, conditions of use and time of measurement.
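Precision (the spread of repeated readings) and bias (the offset of their mean from the true value) can be separated numerically. A sketch with hypothetical readings, illustrating how an instrument can be precise yet inaccurate:

```python
# Separating precision (spread) from bias (systematic offset).
from statistics import mean, stdev

true_value = 100.0
readings = [103.1, 103.0, 103.2, 102.9, 103.0]  # hypothetical: tight but offset

precision = stdev(readings)         # small spread => high precision
bias = mean(readings) - true_value  # systematic error, removable by calibration
print(precision, bias)
```

The small standard deviation shows high precision, while the ~3-unit bias shows low accuracy; recalibration would remove the bias without changing the precision.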