Week 1
Course Introduction and Structure
Measurement Terminology
Uncertainty Analysis
Philosophical Points
What is the difference between a tinkerer and an engineer? After all, both types of people can successfully make things work.
My answer:
An engineer is one who uses science and mathematics to predict what their creations will do, who measures their performance, and who analyzes results to determine their adequacy and to improve their performance.
In this course, we are working to teach you to be engineers; therefore, we will focus on:
Analytical prediction
Measurement
Analysis
Being an engineer also involves
Being systematic in our thinking processes
Logging results and observations
Other Philosophical Points
Why is this being taught in a Mechanical Engineering Department?
Fields of engineering are merging
Mechanical Engineers required to use active
electronics and instrumentation in their designs
Why are there going to be lots of aerospace examples?
It's my experience base, and the reason I'm at Rice is to cultivate interest in Aerospace Engineering.
Data Acquisition on the fly.
We are going to talk about field as well as laboratory
measurements
Data acquisition in moving vehicles is different from on
the lab bench
In addition to knowing how to make things work, you
need to understand why things work the way they do
Tremendous computer tools are available to us. Use of tools without understanding leads to a Garbage In, Garbage Out phenomenon.
Lab Course Outline and
Requirements
Class List
Review Syllabus
Labs
Project
Lab Notebooks must get TA signature
Title of Experiment and Date Performed
Objective
Apparatus List
Sketch of Apparatus
Procedure
Data and Observations
Source of the data
Units
Comments
Lab Reports
Structure and Presentation
Organization
Completeness
Significant Digits
Uncertainty Analysis
Technical Report Checklist
Title
Abstract
Comes before table of contents
< 150 words
One paragraph
States objectives and scope of investigation
Summarizes key results
States principal conclusions
Table of Contents
Complete, well formatted, includes page numbers
Lists Appendices, Figures, and Tables
Technical Report Checklist
(cont.)
Introduction
Presents problem that motivates current study
Includes purpose of experiment
Literature Review (optional)
States method of investigation
Gives a road map of the report that follows
Theoretical Analysis
Provides models and formulas governing study
Number all equations
Define all terms in equations
Provide basic relationships only; long derivations belong in an Appendix
Technical Report Checklist
(cont.)
Experimental Procedure
Description of Apparatus/Experimental Equipment
Use illustrations and describe figures in words
Include uncertainty/accuracy of all instrumentation
Results
Given in logical order (order of significance)
Graphs and/or Tables used to demonstrate results
and explained in text
Include uncertainty in results
Technical Report Checklist
(cont.)
Discussion
Compare results to theoretical expectations
Explain sources of experimental error and influence
Note important problems encountered in study
Conclusion
Summarizes results in light of problem governing
study
Assess the study in terms of original objectives and
purpose in the Introduction section of the report
Provide recommendations for future study (if
applicable)
Technical Report Checklist
(cont.)
References
Place numbered listing of references used at the end
of the paper
Many formats to choose from when referencing in
document
Number or Author, Date most common methods
Appendices
Used for non-essential but important information
Given a letter and descriptive title
Multiplication or Division
The product or quotient shall contain no more significant digits than are contained in the number with the fewest significant digits used in the multiplication or division.
6.234 × 8.20 / 4.9585 = 10.309327 => 10.3
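The significant-digit rule above can be sketched in code. This is a minimal illustration; `round_sig` is a helper name introduced here, not something from the course materials:

```python
from math import floor, log10

def round_sig(x, sig):
    """Round x to the given number of significant digits."""
    if x == 0:
        return 0.0
    return round(x, sig - 1 - floor(log10(abs(x))))

# The product/quotient keeps the fewest significant digits of its inputs.
# Here 8.20 has 3 significant digits, so the result is reported to 3:
result = 6.234 * 8.20 / 4.9585
print(round_sig(result, 3))  # 10.3
```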
Some Basic Definitions
Data: Information obtained by experimental means.
A variable is the basic quantity being observed.
A discrete variable has discrete values, like heads or tails on a coin or the roll of a die. Qualitative measurement.
A continuous variable has a continuous range of values. Pressure, temperature, velocity, length, etc. are continuous variables. We are normally measuring continuous variables.
Resolution is the smallest increment of change that can be determined from the transducer/instrument readout.
Sensitivity is the change in the transducer/instrument output per unit change in the measured quantity.
Accuracy and Precision
Accuracy is the closeness of a measurement (or set of observations) to the true value. The higher the accuracy, the lower the error.
Accuracy is the extent to which a reading might be wrong, and is often quoted as a percentage of the full-scale reading of an instrument.
Precision:
Precision describes an instrument's degree of freedom from random errors. It is the closeness of multiple observations, or the repeatability of a measurement. It refers to how close a set of measurements are to each other.
Accuracy versus Precision
[Figure: four target diagrams: Not Accurate or Precise; Precise and NOT Accurate; Accurate and NOT Precise; Precise and Accurate]
Bias, Precision, and Total Error
[Figure: distribution of readings between X_true and X_measured showing total error as the combination of bias error (offset of the mean from X_true) and precision error (scatter about the mean)]
Instrumentation Definitions
Readability
Closeness with which a scale may be read.
Least Count
The smallest increment of the readout (e.g., 1 count on a voltage readout).
Sensitivity
The slope m of the calibration line y = mx + b (e.g., y in counts, x in lbs).
Hysteresis
[Figure: instrument output differs between increasing and decreasing input.]
Accuracy
[Figure: instrument reading vs. known input; accuracy is the deviation from the 45-degree line.]
Precision
[Figure: standard deviation of repeated readings about the mean reading for a fixed input value.]
Error
The deviation of a reading from a known input. Can be reduced by calibration.
[Figure: reading (counts) vs. EU value; error is the deviation from the calibration line.]
Uncertainty
The portion of the error that cannot be, or is not, corrected for by calibration.
Uncertainty = 3 sd, where the standard deviation (sd) is computed from a number of repeated readings (e.g., voltage readings for a fixed pressure).
Static / Dynamic
A dynamic input changes with time; a static input does not.
[Figure: input vs. time for dynamic and static signals.]
Frequency Response
The ratio of output amplitude Ao to input amplitude AI as a function of frequency.
Linear Frequency Response
The ratio of output to input amplitudes (Ao/AI = 1) remains the same over the input frequency range.
Natural Frequency
The frequency ωn where the output-to-input amplitude ratio increases greatly if the system is underdamped.
[Figure: Ao/AI vs. frequency; flat at 1 for a linear response, peaking near ωn for an underdamped system.]
Phase Shift
[Figure: output lags the input in time.]
Linear Amplitude Response
[Figure: output amplitude Ao proportional to input amplitude AI.]
Rise Time or Delay
Time for the output amplitude to rise to the input level after a change in input level.
[Figure: input step and output response; rise time is the interval between them.]
Slew Rate
The maximum rate of change of the output in response to a change in input.
[Figure: input step and output response; the slew rate is the slope of the output.]
Time Constant
The time for the output to reach 63.2% of a step change in the input.
Multipoint calibration: several inputs are used.
Works when output is NOT proportional to input.
Significantly improves accuracy of calibration.
Correlation Coefficient (coefficient of determination)
A measure of the linear relationship between two quantitative variables: the R² term, where 0 (no correlation) ≤ R² ≤ 1 (perfect fit).
**We will return to this concept later!!
Single Point Calibration (reference is 100 V)

Reading    Voltmeter A    Voltmeter B
1          104.5          90.0
2          101.5          91.5
3           96.0          89.5
4          105.5          90.5
5           97.0          88.5
6          100.0          89.5
7           95.0          90.5
8          103.5          89.5
9          101.5          91.5
10         101.5          89.5
Average    100.6          90.1

[Figure: voltmeter reading (V) vs. reading number for both meters.]
Calibration Curve for Pressure Transducer
[Figure: pressure (psia, 0 to 500) vs. output voltage (Vdc, 0 to 8), with linear fit y = 51.801x + 6.7638, R² = 0.9933]
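A calibration line like the one above can be fit by least squares, with R² reporting the goodness of fit. This is a minimal sketch; the (voltage, pressure) pairs below are hypothetical illustration data, not the actual points behind the transducer fit:

```python
def linear_fit(xs, ys):
    """Least-squares slope m and intercept b for y = m*x + b, plus R^2."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sxx = sum((x - mean_x) ** 2 for x in xs)
    m = sxy / sxx
    b = mean_y - m * mean_x
    ss_res = sum((y - (m * x + b)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - mean_y) ** 2 for y in ys)
    r2 = 1 - ss_res / ss_tot
    return m, b, r2

# Hypothetical calibration points (output voltage in Vdc, pressure in psia):
volts = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0]
psia = [58.0, 111.0, 162.0, 214.0, 266.0, 317.0, 369.0]
m, b, r2 = linear_fit(volts, psia)
print(m, b, r2)  # m ≈ 51.75, b ≈ 6.86, r2 ≈ 0.99999
```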
Calibration
The purpose of this section is to outline the procedures for calibrating an instrument while guaranteeing the 'goodness' of the calibration results.
Calibration is a measurement process that assigns values to the property of an instrument, or to the response of an instrument, relative to reference standards or to a designated measurement process.
Purpose of calibration
The purpose of calibration is to eliminate or reduce bias in the user's measurement system relative to the reference base. The calibration procedure compares an "unknown" test item or instrument with reference standards according to a specific algorithm.
Issues in calibration
Calibration ensures that the measurement accuracy of all instruments used in a measurement system is known over the whole range.
Environmental conditions must be the same as those under which the instruments were calibrated. Under different environmental conditions, an appropriate correction has to be made.
Instrument calibration has to be repeated at prescribed intervals. The magnitude of the drift in characteristics depends on the amount of use, and even on ageing effects during storage.
It is difficult, or even impossible, to determine the required frequency of instrument recalibration.
Basic Instrument Calibration: The User Perspective
Users of electronic measurement instruments have several key objectives in mind when obtaining calibration/adjustment services, such as:
Standards laboratory
Process instruments
Documentation in the workplace
An essential element in the maintenance of the measurement system and the operation of calibration procedures is the provision of full documentation.
This must give a full description of the measurement requirements throughout the workplace, the instruments used, and the calibration system and procedures operated.
Important points to be noted
Documentation must include a statement of what measurement limits have been defined.
The instruments specified for each measurement must be listed.
The subject of calibration must be defined.
Documentation must specify what standard instruments are to be used for the purpose.
Define a formal procedure of calibration.
A standard format for recording calibration results must be defined in the documentation.
The documentation must specify procedures to be followed if an instrument is found to be outside the calibration limits.
Labs 1A and 1B
Metric Micrometer
Metric-based micrometer reading example: 9 on the sleeve plus 0.37 on the thimble gives 9.37 mm.
Inch-based micrometer reading example: 0.463 in.
Calipers
Using Statistics to Estimate Error in Measurements
Uncertainty can be determined experimentally or calculated.
Take multiple readings with a fixed, known input value, and compute the standard deviation (sd) over the number of samples. Then:
Uncertainty = 3 sd
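The 3-sd rule above can be applied directly to repeated readings. A minimal sketch, using the Voltmeter A readings from the single-point calibration table earlier in these notes:

```python
from statistics import mean, stdev

# Voltmeter A readings from the single-point calibration table (reference 100 V):
readings = [104.5, 101.5, 96.0, 105.5, 97.0, 100.0, 95.0, 103.5, 101.5, 101.5]

avg = mean(readings)    # best (average) value
sd = stdev(readings)    # sample standard deviation
uncertainty = 3 * sd    # slide's rule: Uncertainty = 3 sd

print(f"{avg:.1f} +/- {uncertainty:.1f} V")
```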
QS Lift Example
The lift coefficient is computed from measured lift $L$, dynamic pressure $Q$, and wing area $S$:

$$C_L = \frac{L}{QS}$$

Applying the RSS propagation formula:

$$U_{C_L}^2 = \left(\frac{\partial C_L}{\partial L}\right)^2 U_L^2 + \left(\frac{\partial C_L}{\partial Q}\right)^2 U_Q^2 + \left(\frac{\partial C_L}{\partial S}\right)^2 U_S^2$$

with the partial derivatives

$$\frac{\partial C_L}{\partial L} = \frac{1}{QS}, \qquad \frac{\partial C_L}{\partial Q} = -\frac{L}{Q^2 S}, \qquad \frac{\partial C_L}{\partial S} = -\frac{L}{Q S^2}$$

Dividing through by $C_L = L/(QS)$ gives the relative form:

$$\left(\frac{U_{C_L}}{C_L}\right)^2 = \left(\frac{U_L}{L}\right)^2 + \left(\frac{U_Q}{Q}\right)^2 + \left(\frac{U_S}{S}\right)^2$$
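The lift-coefficient relative-uncertainty form, $(U_{C_L}/C_L)^2 = (U_L/L)^2 + (U_Q/Q)^2 + (U_S/S)^2$, can be evaluated numerically. The measured values and uncertainties below are hypothetical illustration numbers, not data from the course:

```python
from math import sqrt

L, U_L = 500.0, 5.0    # lift and its uncertainty (hypothetical)
Q, U_Q = 25.0, 0.5     # dynamic pressure and its uncertainty (hypothetical)
S, U_S = 10.0, 0.05    # wing area and its uncertainty (hypothetical)

C_L = L / (Q * S)

# Relative RSS form: (U_CL/CL)^2 = (U_L/L)^2 + (U_Q/Q)^2 + (U_S/S)^2
rel = sqrt((U_L / L) ** 2 + (U_Q / Q) ** 2 + (U_S / S) ** 2)
U_CL = rel * C_L

print(f"C_L = {C_L:.3f} +/- {U_CL:.3f}")
```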
Uncertainty Estimation
When we measure some physical quantity
with an instrument and obtain a numerical
value, we want to know how close this
value is to the true value. The difference
between the true value and the measured
value is the error. Unfortunately, the true
value is in general unknown and
unknowable. Since this is the case, the
exact error is never known. We can only
estimate error.
Types of Errors
Difference between measured result and true value.
Illegitimate errors
Blunders result from mistakes in procedure. You must be careful.
Computational or calculation errors after the experiment.
Bias or Systematic errors
An error that persists and cannot be considered to exist entirely by chance. This type of error tends to stay constant from trial to trial (e.g., zero offset).
Systematic errors can be corrected through calibration.
Faulty equipment: instrument always reads 3% high or low.
Consistent or recurring human errors: observer bias.
This type of error cannot be studied theoretically but can be determined by comparison to theory or by alternate measurements.
Types of Errors (cont.)
Random or Precision errors:
The deviation of the measurement from the true value resulting from the finite precision of the measurement method being used.
Instrument friction or hysteresis
Errors from calibration drift
Variation of procedure or interpretation by experimenters
Test condition variations or environmental effects
ADD (Addition) Method
Provides 99% CI coverage.
Used in aerospace applications; more conservative.

$$U_{x,\mathrm{ADD}} = B_x + P_x$$
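The ADD method sums the bias and precision limits directly, whereas the RSS approach used elsewhere in these notes combines them in quadrature, giving a smaller (less conservative) value. A minimal sketch with hypothetical bias and precision limits:

```python
from math import sqrt

B_x = 0.3  # bias limit (hypothetical units)
P_x = 0.4  # precision limit (hypothetical units)

U_add = B_x + P_x               # ADD method: more conservative, ~99% coverage
U_rss = sqrt(B_x**2 + P_x**2)   # RSS method: combines in quadrature

print(U_add, U_rss)  # 0.7 0.5
```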
How to Estimate Bias Error
Manufacturer's Specifications
Assume the manufacturer is giving the max. error.
Accuracy: %FS, %reading, offset, or some combination (e.g., 0.1% reading + 0.15 counts).
These are generally given at a 95% confidence interval.
Independent Calibration
Device is calibrated to known accuracy.
Regression techniques and accuracy of standards.
Use smallest readable division.
Typically 1/2 or 1/4 of the smallest division (judgment call).
Combine individual bias errors in quadrature:

$$B_{\mathrm{total}} = \left(\sum B_i^2\right)^{1/2}$$
General Uncertainty Analysis
The estimate of possible error is called uncertainty.
Includes both bias and precision errors.
Need to identify all errors for the instrument(s).
All measurements should be given in three parts:
Best value / average value
Confidence limits or uncertainty interval
Specified probability / confidence interval (typically 95% C.I.)
**Always use a 95% confidence interval throughout this course.
Propagation of Error
Used to determine the uncertainty of a quantity that requires measurement of several independent variables.
Volume of a cylinder = f(D, L)
Volume of a block = f(L, W, H)
Density of a gas = f(P, T)

$$U_R = \left[\left(\frac{\partial R}{\partial x_1}\right)^2 U_{x_1}^2 + \left(\frac{\partial R}{\partial x_2}\right)^2 U_{x_2}^2 + \cdots + \left(\frac{\partial R}{\partial x_N}\right)^2 U_{x_N}^2\right]^{1/2}$$
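The RSS propagation formula can be applied to the cylinder-volume example, V = (π/4)D²L, using analytic partial derivatives. The dimensions and uncertainties below are hypothetical illustration values:

```python
from math import pi, sqrt

D, U_D = 2.00, 0.01   # diameter and its uncertainty (hypothetical)
L, U_L = 10.0, 0.05   # length and its uncertainty (hypothetical)

V = (pi / 4) * D**2 * L

# RSS formula with analytic partial derivatives of V = (pi/4) D^2 L:
dV_dD = (pi / 2) * D * L
dV_dL = (pi / 4) * D**2
U_V = sqrt((dV_dD * U_D) ** 2 + (dV_dL * U_L) ** 2)

# Equivalent relative form after dividing by V: (U_V/V)^2 = (2 U_D/D)^2 + (U_L/L)^2
rel = sqrt((2 * U_D / D) ** 2 + (U_L / L) ** 2)

print(f"V = {V:.2f} +/- {U_V:.2f}")
```

The relative form illustrates Rule 2 below: dividing the uncertainty expression by the result simplifies it considerably.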
Rules
Rule 1: Always solve the data reduction equation for the experimental result before doing the uncertainty analysis.
Rule 2: Always try to divide the uncertainty analysis expression by the experimental result to see if it can be simplified.
R = Constant
Apply the RSS formula to the density relationship

$$\rho = \frac{P}{RT}$$

$$U_\rho = \left[\left(\frac{1}{RT}\right)^2 U_P^2 + \left(-\frac{P}{RT^2}\right)^2 U_T^2\right]^{1/2}$$

Apply a little algebra (divide through by $\rho = P/(RT)$):

$$\left(\frac{U_\rho}{\rho}\right)^2 = \left(\frac{U_P}{P}\right)^2 + \left(\frac{U_T}{T}\right)^2$$
Uncertainty Analysis in EES
Uncertainty Calculation in
EES
Experimental Data Analysis
References
ASHRAE. 1996. Engineering Analysis of Experimental Data, ASHRAE Guideline 2-1996.
Range or Span:
The range or span of an instrument
defines the minimum and maximum
values of a quantity that the
instrument is designed to measure.
Bias:
Bias describes a constant error which
exists over the full range of measurement
of an instrument.
Linearity:
Linearity describes the extent to which the output reading of an instrument is linearly proportional to the quantity being measured.
Sensitivity:
Sensitivity is a measure of the change in
instrument output which occurs when the
quantity being measured changes by a
given amount.
Sensitivity to disturbance:
Sensitivity to disturbance is a measure of the magnitude of the change in the static characteristics of an instrument due to environmental changes.
1) Zero drift
2) Sensitivity drift (Scale factor drift)
Hysteresis:
The non-coincidence
between loading
and unloading
curves is known as
hysteresis.
Dead space:
Dead space is
defined as the range
of different input
values over which
there is no change
in output value.
Threshold:
The minimum level of input before
the change in the instrument output
reading is of large enough magnitude
to be detectable is known as the
threshold of the instrument.
Resolution:
The minimum reading increment that can be taken from an instrument.
The Dynamic Response of
Measuring Instruments
The dynamic response of a
measuring instrument is the
change in the output y caused by a
change in the input x. Both x and y
are functions of time t .
Classes of Linear Instruments
Zero Order Instruments
First Order Instruments
Second Order Instruments
Zero Order Instruments
A zero order linear instrument has an
output which is proportional to the
input at all times in accordance with
the equation
y(t) = Kx(t), where K is a constant
called the static gain of the
instrument.
The static gain is a measure of the
sensitivity of the instrument.
An example of a zero order linear
instrument is a wire strain gauge in
which the change in the electrical
resistance of the wire is proportional
to the strain in the wire.
First Order Instruments
A first order linear instrument has an output given by a non-homogeneous first order linear differential equation:
τ·dy(t)/dt + y(t) = K·x(t), where τ is a constant called the time constant of the instrument.
In these instruments there is a time delay in their response to changes of input. The time constant τ is a measure of the time delay.
Thermometers for measuring temperature are first-order instruments.
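For a step input, the first-order equation above has the well-known solution y(t) = K(1 − e^(−t/τ)), which ties back to the 63.2% time-constant definition given earlier. A minimal sketch with hypothetical τ and K values:

```python
from math import exp

tau = 2.0   # time constant in seconds (hypothetical)
K = 1.0     # static gain (hypothetical)

def response(t):
    """Output at time t for a unit step input applied at t = 0."""
    return K * (1.0 - exp(-t / tau))

# At t = tau, the output has reached 63.2% of its final value:
print(f"y(tau) = {response(tau):.3f}")  # ~0.632
```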
Second Order Instruments
A second order linear instrument
has an output which is given by a non-
homogeneous second order linear
differential equation