
Functions and Characteristics of Instruments
Three Basic Functions
1. Indicating
2. Recording
3. Controlling

A particular instrument may serve any one or all three of these
functions simultaneously.
Six Electrical Quantities
1. Electric Charge (Q)
2. Electric Current (I)
3. Electromotive Force or Potential Difference (V)
4. Resistance (R)
5. Inductance (L)
6. Capacitance (C)
Measurement Standards
• All instruments are calibrated at the time of manufacture against a
measurement standard.
Four Categories of Standards
1. International Standards
2. Primary Standards
3. Secondary Standards
4. Working Standards
International Standards
• Are defined by international agreement.
• Are maintained at the International Bureau of Weights and Measures
in Paris and are periodically evaluated and checked by absolute
measurements in terms of the fundamental units of physics.
• They represent certain units of measurement to the closest possible
accuracy attainable by the Science and Technology of Measurement.
Primary Standards
• Are maintained at national standards laboratories in different
countries.
• The National Bureau of Standards (NBS, now NIST) in Washington, D.C.,
is responsible for maintaining the primary standards in North America.
• Are not available for use outside the national laboratories.
• Their principal function is the calibration and verification of secondary
standards.
Secondary Standards
• Are the basic reference standards used by measurement and calibration
laboratories in the industry to which they belong.
• Each industrial laboratory is completely responsible for its own secondary
standards.
• Each laboratory periodically sends its secondary standards to the national
standards laboratory for calibration.
• After calibration the secondary standards are returned to the industrial
laboratory with a certification of measuring accuracy in terms of a primary
standard.
Working Standard
• Are the principal tools of a measurements laboratory.
• They are used to check and calibrate the instruments used in the
laboratory or to make comparison measurements in industrial
application.
Error in Measurement
• Measurement is the process of comparing an unknown quantity with
an accepted standard quantity.
• It involves connecting a measuring instrument into the system under
consideration and observing the resulting response on the
instrument.
• Thus the measurement obtained is a quantitative measure of the so-
called true value.
• Since it is very difficult to define the true value adequately, the term
expected value is used.
Error in Measurement
• Any measurement is affected by many variables; therefore, the results
rarely reflect the expected value.
• For example, connecting a measuring instrument into the circuit
under consideration always disturbs (changes) the circuit, causing the
measurement to differ from the expected value.
• Some factors that affect measurements are related to the measuring
instruments themselves.
• Other factors are related to the person using the instrument.
Error in Measurement
• The degree to which a measurement conforms to the expected value
is expressed in terms of the error of the measurement.
• Error may be expressed either as absolute or as a percent of error.
• The accuracy and precision of measurements depend not only on the
quality of the measuring instrument, but also on the person using the
instrument.
• However, regardless of the quality of the instrument or the care
exercised by the user, some error is always present in measurements
of physical quantities.
Absolute error (e)
• may be defined as the difference between the expected value of the
variable and the measured value of the variable.

e = Yn – Xn
where
Yn = expected value
Xn = measured value
Percentage Error (%e)

%e = e/Yn × 100 = |Yn – Xn| / Yn × 100


Relative Accuracy (A)
• It is frequently more desired to express measurements in terms of
relative accuracy rather than error.

A = 1 – |Yn – Xn| / Yn
Example #1
The expected value of the voltage across a resistor is 50 V; however,
measurement yields a value of 49 V. Calculate (a) the absolute error,
(b) the percent of error, (c) the relative accuracy, and (d) the percent
of accuracy.
Solution
(a) e = Yn – Xn
= 50 – 49
e=1V
(b) %e = (50 – 49) / 50 × 100
= 2%
(c) A = 1 – |50 – 49| / 50 = 1 – 0.02
= 0.98
(d) %A = A × 100 = 0.98 × 100 = 98%
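The arithmetic above can be checked with a short script. A minimal sketch in Python; the helper names (`absolute_error`, `percent_error`, `relative_accuracy`) are illustrative, not from the original:

```python
def absolute_error(expected, measured):
    """Absolute error e = Yn - Xn."""
    return expected - measured

def percent_error(expected, measured):
    """Percent error %e = |Yn - Xn| / Yn * 100."""
    return abs(expected - measured) * 100 / expected

def relative_accuracy(expected, measured):
    """Relative accuracy A = 1 - |Yn - Xn| / Yn."""
    return 1 - abs(expected - measured) / expected

Yn, Xn = 50, 49                    # expected and measured voltage, in volts
e = absolute_error(Yn, Xn)         # 1 V
pe = percent_error(Yn, Xn)         # 2 %
A = relative_accuracy(Yn, Xn)      # 0.98
pA = A * 100                       # 98 %
print(e, pe, round(A, 2), round(pA, 1))
```

The same three helpers reproduce each part of the worked solution, which makes it easy to rerun the check with other expected/measured pairs.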
Precision (P)
• Accuracy implies precision, but precision does not necessarily imply
accuracy.
• The precision of a measurement is a quantitative, or numerical,
indication of the closeness with which a repeated set of
measurements of the same variable agrees with the average of the
set of measurements.

P = 1 – |Xn – X̄n| / X̄n
where
Xn = the value of the nth measurement
X̄n = the average of the set of n measurements
• An indication of the precision of a measurement is obtained from the
number of significant figures to which the result is expressed.
• Significant figures convey information regarding the magnitude and
preciseness of a quantity, additional significant digits represent a
more precise measurement.
• When making measurements or calculations, we retain only
significant figures.
• Significant figures are the figures, including zeros and estimated
figures that have been obtained from measuring instruments known
to be trustworthy.
Example #2
• The following set of ten measurements was recorded in the
laboratory. Calculate the precision of the fourth measurement.
Measurement Number    Measurement Value Xn (Volts)
        1                        93
        2                       102
        3                       101
        4                        97
        5                       100
        6                       103
        7                        98
        8                       106
        9                       107
       10                        99
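No solution is shown on the slide; the calculation follows directly from P = 1 – |Xn – X̄n| / X̄n. A short Python sketch (the `precision` helper name is ours):

```python
# Set of ten laboratory measurements (volts), from the table above
readings = [93, 102, 101, 97, 100, 103, 98, 106, 107, 99]

avg = sum(readings) / len(readings)   # X-bar = 1006 / 10 = 100.6

def precision(xn, xbar):
    """Precision of one measurement: P = 1 - |Xn - Xbar| / Xbar."""
    return 1 - abs(xn - xbar) / xbar

p4 = precision(readings[3], avg)      # fourth measurement, Xn = 97
print(round(p4, 3))                   # 0.964
```

With X̄ = 100.6 and X4 = 97, the deviation is 3.6 V, so P = 1 – 3.6/100.6 ≈ 0.964.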
Error
• Error may be defined as the deviation of a reading or set of readings
from the expected value of the measured variable.
• Errors are generally categorized under the following:
- Gross Errors
- Systematic Errors
- Random Errors
Gross Errors
• Are generally the fault of the person using the instruments and are
due to such things as incorrect reading of instruments, incorrect
recording of experimental data, or incorrect use of instruments.
Systematic Errors
• Are errors due to problems with instruments, environment effects, or
observational errors.
• These errors recur if several measurements are made of the same
quantity under the same conditions.
Systematic Errors
1. Instrument errors – may be due to friction in the bearings of the
meter movement, incorrect spring tension, improper calibration, or
faulty instruments; such errors can be reduced by proper
maintenance use and handling of instruments.
2. Environmental errors – subjecting instruments to harsh
environments such as high temperature, pressure, or humidity, or
strong electrostatic or electromagnetic fields, may have detrimental
effects, thereby causing error.
Systematic Errors
3. Observational errors – errors introduced by the observer. The two
most common errors are probably the parallax error introduced in
reading a meter scale and the error of estimation when obtaining a
reading from a meter scale.
4. Random Errors – are those that remain after the gross and
systematic errors have been substantially reduced or at least accounted
for; are generally the accumulation of a large number of small effects
and may be of real concern only in measurements requiring a high
degree of accuracy. Such errors can only be analyzed statistically.
Terms and Definitions
• Instrument – A device or mechanism used to determine the present
value of a quantity under observation.
• Measurement – the art or process of determining the amount,
quantity, degree, or capacity by comparison (direct or indirect) with
accepted standards of the system of units employed.
• Expected value – the design value, that is “the most probable value”
that calculations indicate one should expect to measure.
• Accuracy – The degree of exactness of a measurement compared to
the expected value, or the most probable value, of the variable being
measured.
• Resolution – the smallest change in a measured variable to which an
instrument will respond.
• Precision – a measure of the consistency or repeatability of
measurements.
• Precision – as it applies to the instrument, is the consistency of the
instrument output for a given value of input.
Example #3
The following table of values represents a meter output in terms of the
angular displacement of the needle, expressed in degrees, for a series
of identical input currents. Determine the worst-case precision of the
readings.

I input (A) 10 10 10 10 10 10 10 10
Output
Displacement 20.10 20.00 20.20 19.80 19.70 20.00 20.30 20.10
(Degrees)
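Under the definition of precision above, the worst case corresponds to the reading farthest from the average of the set. A short Python sketch of the computation (variable names are ours):

```python
# Needle displacements (degrees) for eight identical 10 A inputs
readings = [20.10, 20.00, 20.20, 19.80, 19.70, 20.00, 20.30, 20.10]

avg = sum(readings) / len(readings)                # 160.20 / 8 = 20.025

# Worst case: the reading with the largest deviation from the average
worst = max(readings, key=lambda x: abs(x - avg))  # 19.70
p_worst = 1 - abs(worst - avg) / avg               # 1 - 0.325 / 20.025
print(worst, round(p_worst, 3))                    # 19.7 0.984
```

The reading 19.70° deviates most from the average (0.325°), giving a worst-case precision of about 0.984.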
