
ME3031 Lecture Notes

Week 1

Course Introduction and Structure
Measurement Terminology
Uncertainty Analysis
Philosophical Points
What is the difference between a tinkerer and an
engineer? After all, both types of people can
successfully make things work.
My answer:
An engineer is one who uses science and mathematics to
predict what their creations will do, who measures their
performance, and who analyzes results to determine
their adequacy and to improve their performance
In this course, we are working to teach you to be
engineers, therefore we will focus on:
Analytical prediction
Measurement
Analysis
Being an engineer also involves
Being systematic in our thinking processes
Logging results and observations
Other Philosophical Points
Why is this being taught in a Mechanical Engineering
Department?
Fields of engineering are merging
Mechanical engineers are required to use active
electronics and instrumentation in their designs
Why are there going to be lots of aerospace examples?
It's my experience base, and the reason I'm at Rice is to
cultivate interest in Aerospace Engineering
Data Acquisition on the fly.
We are going to talk about field as well as laboratory
measurements
Data acquisition in moving vehicles is different from on
the lab bench
In addition to knowing how to make things work, you
need to understand why things work the way they do
Tremendous computer tools are available to us. Use of
tools without understanding leads to a Garbage In-
Garbage Out phenomenon.
Lab Course Outline and
Requirements

Class List
Review Syllabus
Labs
Project
Lab Notebooks must get TA signature

Title of Experiment and Date Performed
Objective
Apparatus List
Sketch of Apparatus
Procedure
Data and Observations
Source of the data
Units
Comments
Lab Reports
Structure and Presentation
Organization
Completeness
Significant Digits
Uncertainty Analysis
Technical Report Checklist
Title

Abstract
Comes before table of contents
< 150 words
One paragraph
States objectives and scope of investigation
Summarizes key results
States principal conclusions

Table of Contents
Complete, well formatted, includes page numbers
Lists Appendices, Figures, and Tables
Technical Report Checklist
(cont.)
Introduction
Presents problem that motivates current study
Includes purpose of experiment
Literature Review (optional)
States method of investigation
Gives a road map of the report that follows

Theoretical Analysis
Provides models and formulas governing study
Number all equations
Define all terms in equations
Provide basic relationships only; long derivations
belong in an Appendix
Technical Report Checklist
(cont.)
Experimental Procedure
Description of Apparatus/Experimental Equipment
Use illustrations and describe figures in words
Include uncertainty/accuracy of all instrumentation

Description of Methods/Experimental Procedure
Use chronological organization
Avoid cookbook-type format; text should flow
Avoid narrative of successes and failures

Results
Given in logical order (order of significance)
Graphs and/or Tables used to demonstrate results
and explained in text
Include uncertainty in results
Technical Report Checklist
(cont.)
Discussion
Compare results to theoretical expectations
Explain sources of experimental error and influence
Note important problems encountered in study

Conclusion
Summarizes results in light of the problem
governing the study
Assesses the study in terms of the original
objectives and purpose stated in the Introduction
Provides recommendations for future study (if
applicable)
Technical Report Checklist
(cont.)
References
Place numbered listing of references used at the end
of the paper
Many formats to choose from when referencing in
the document
Number or Author-Date are the most common methods

Appendices
Used for non-essential but important information
Given a letter and descriptive title

Take a look at some sample reports


Phases of an Experimental Program
Preliminary design phase
Select experimental approach
Design experiment parametrically
Design the hardware
Often an iterative process
Construction phase
Debugging/testing phase

Execution phase/Collect data


Data analysis phase (often not complete)
Reporting phase (often gets
neglected/postponed)
Experimental Approach
Questions?
What question are you trying to answer?
(What is the problem?)
How accurately do you need to know the
answer? (How is the answer to be used?)
What physical principles are involved?
(What physical laws govern the situation?)
What experiments or set of experiments
might provide an answer?
What variables must be controlled? How
well?
More questions
What quantities must be
measured? How accurately?
What instrumentation is to be
used? Where do I obtain
information on instrumentation?
How is the data to be acquired,
conditioned, and stored?
How many data points must be
taken? Can requirements be met
within budget and time constraints?
More questions
What techniques of data analysis
should be used?
What is the most effective and
revealing way to present the data?

What unanticipated questions are
raised by the data?
In what manner should the data
and results be reported?
Basic Concepts and Definitions
Significant Figures and Rounding
Non-zero digits (i.e., 1-9) are always significant.
Zero is significant when it is between two non-
zero digits.
Only the final zero or trailing zeros in the
decimal portion of a number are significant.
Examples
3.0800 - five significant figures
0.00418 - three significant figures
7.09 x 10^5 - three significant figures
Significant Digits
Addition or Subtraction
The answer retains no more decimal places than the number
with the fewest decimal places (the least precise number).
6.234 + 8.2 + 4.95 = 19.384 => 19.4

Multiplication or Division
The product or quotient shall contain no more significant digits
than are contained in the number with the fewest significant
digits used in the multiplication or division.
6.234 x 8.20 / 4.9585 = 10.309327 => 10.3
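The rounding rules above can be sketched in a few lines; `round_sig` is a hypothetical helper, not part of any course library:

```python
import math

def round_sig(x, n):
    """Round x to n significant figures."""
    if x == 0:
        return 0.0
    return round(x, n - 1 - int(math.floor(math.log10(abs(x)))))

# Multiplication/division: keep as many significant figures as the
# least-precise factor (8.20 has three).
result = 6.234 * 8.20 / 4.9585
print(round_sig(result, 3))  # 10.3, as in the slide example
```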
Some Basic Definitions
Data: Information obtained by experimental means.
A variable is the basic quantity being observed.
A discrete variable has discrete values, like heads or tails on a coin
or the toss of a die. Qualitative measurement.
A continuous variable has a continuous range of values. Pressure,
temperature, velocity, length, etc. are continuous variables. We are
normally measuring continuous variables.

Resolution is the smallest increment of change that can be
determined from the transducer/instrument readout.
Sensitivity is the change in the transducer/instrument
output per unit change in the measured quantity.
Accuracy and Precision
Accuracy is the closeness of a measurement
(or set of observations) to the true value.
The higher the accuracy, the lower the error.
Accuracy is the extent to which a reading might be
wrong, and is often quoted as a percentage of the
full-scale reading of an instrument.

Precision describes an instrument's degree of
freedom from random errors. It is the closeness of
multiple observations, or the repeatability, of a
measurement. It refers to how close a set of
measurements are to each other.
Accuracy versus Precision
(Figure: four target diagrams - Not Accurate or Precise;
Precise and NOT Accurate; Accurate and NOT Precise;
Precise and Accurate)
Bias, Precision, and Total Error
(Figure: distribution of measurements between X_true and
X_measured, showing total error as the combination of bias
error and precision error)
Instrumentation Definitions
Readability
The closeness with which a scale may be read.
Least Count
The smallest difference that can be detected.
(Figure: readout in counts vs. volts; least count = 1 ct)
Sensitivity
The ratio of linear movement (y) to the change in the
measured variable (x). For a linear instrument
y = mx + b, the sensitivity is the slope m.
(Figure: y in cts vs. x in lbs, slope m, offset b)
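The slope/offset picture above can be sketched numerically; the count and load values here are hypothetical, chosen only to illustrate sensitivity as the slope of y = mx + b:

```python
# Sensitivity m is the slope of the output-vs-input line y = m*x + b.
# Hypothetical readings: output in counts (cts) vs. input load in lbs.
x1, y1 = 0.0, 12.0    # zero load -> offset b = 12 cts
x2, y2 = 10.0, 512.0  # 10 lb load

m = (y2 - y1) / (x2 - x1)  # sensitivity, cts per lb
b = y1 - m * x1            # offset, cts
print(m, b)  # 50.0 12.0
```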
Hysteresis
The difference in a reading due to the direction of
approach to the reading.
(Figure: output loop showing hysteresis)
Accuracy
The deviation of the reading from a known input.
(Figure: instrument reading vs. known input; deviation
from the 45-deg line is the accuracy)
Precision
The ability to reproduce a certain reading with a
given accuracy.
(Figure: histogram of readings about the mean for a
given input value; spread characterized by the
standard deviation)
Error
The deviation of a reading from a known input.
Can be reduced by calibration.
(Figure: reading in cts vs. EU value, showing error)
Uncertainty
The portion of the error that cannot be, or is not,
corrected for by calibration.
Uncertainty = ±3 sd, where sd is the standard deviation
of the calibrated readings about the mean for a known
input EU value.
Transducer
A device that transforms one physical effect into
another (e.g., pressure to voltage).
Static / Dynamic
A dynamic input changes with time; a static input
does not.
Frequency Response
The overall behavior of the system's output/input
amplitude ratio (Ao/AI) versus the frequency of a
dynamic input.
Linear Frequency Response
The ratio of output to input amplitudes remains the
same over the input frequency range.
Natural Frequency
The frequency ωn where the output to input amplitude
ratio increases greatly if the system is underdamped.
Phase Shift
When the output is delayed in time from the input.
Linear Amplitude Response
The ratio of output to input amplitudes remains the
same over the input amplitude range.
Rise Time or Delay
The time for the output amplitude to rise to the
input level after a change in input level.
Slew Rate
The rate of change of the output amplitude.
Time Constant
The time required for the output to change 63.2% of
its total change after a step change in input.
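The 63.2% figure follows from the first-order step response 1 - exp(-t/τ); a minimal check, with a hypothetical time constant:

```python
import math

# First-order step response: fraction of total change = 1 - exp(-t/tau).
tau = 0.5  # hypothetical time constant, seconds

def step_fraction(t, tau):
    return 1.0 - math.exp(-t / tau)

# At t = tau the output has covered 63.2% of its total change,
# because 1 - e^(-1) = 0.632...
print(round(step_fraction(tau, tau), 3))  # 0.632
```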


Calibration
Instrument calibration is when known inputs are fed
into the transducer and the outputs of the transducer
are observed.
Single point: output = input x constant
Output is proportional to input

Multipoint calibration: several inputs are used
Works when output is NOT proportional to input
Significantly improves accuracy of calibration

Correlation Coefficient (coefficient of determination)
A measure of the linear relationship between two quantitative
variables: the R^2 term, 0 (no correlation) < R^2 < 1 (perfect fit)
** We will return to this concept later!!
Single Point Calibration

Reading Number | Voltmeter A | Voltmeter B
1              | 104.5       | 90.0
2              | 101.5       | 91.5
3              | 96.0        | 89.5
4              | 105.5       | 90.5
5              | 97.0        | 88.5
6              | 100.0       | 89.5
7              | 95.0        | 90.5
8              | 103.5       | 89.5
9              | 101.5       | 91.5
10             | 101.5       | 89.5
Average        | 100.6       | 90.1
** Reference is 100 V

(Figure: voltmeter reading, V, vs. reading number for both
meters, 86.0-108.0 V scale)
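The averages in the table, and the bias of each meter relative to the 100 V reference, can be reproduced with a short script (data taken from the table above):

```python
# Single-point calibration data from the slide: ten readings of a
# known 100 V reference on two voltmeters.
ref = 100.0
data = {
    "A": [104.5, 101.5, 96.0, 105.5, 97.0, 100.0, 95.0, 103.5, 101.5, 101.5],
    "B": [90.0, 91.5, 89.5, 90.5, 88.5, 89.5, 90.5, 89.5, 91.5, 89.5],
}

means = {name: sum(v) / len(v) for name, v in data.items()}
for name, mean in means.items():
    # Bias error = mean reading - reference value
    print(f"Voltmeter {name}: mean = {mean:.2f} V, bias = {mean - ref:+.2f} V")
```

Meter A is unbiased on average but scattered; meter B is tightly grouped but reads about 10 V low, illustrating the accuracy-versus-precision distinction above.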
Calibration Curve for Pressure Transducer
(Figure: pressure, psia (0-500), vs. output voltage, Vdc
(0-8); linear fit y = 51.801x + 6.7638, R^2 = 0.9933)
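A least-squares fit like the one on the slide can be computed directly. The voltage/pressure pairs below are hypothetical stand-ins for the lab data, chosen to give a slope and intercept near the slide's fit:

```python
# Least-squares line fit like the transducer calibration curve
# (the slide reports y = 51.801x + 6.7638, R^2 = 0.9933).
# Synthetic voltage/pressure pairs stand in for the lab data.
volts = [0.0, 2.0, 4.0, 6.0, 8.0]
psia  = [7.0, 110.0, 214.0, 318.0, 420.0]

n = len(volts)
mx = sum(volts) / n
my = sum(psia) / n
sxy = sum((x - mx) * (y - my) for x, y in zip(volts, psia))
sxx = sum((x - mx) ** 2 for x in volts)

m = sxy / sxx          # slope (psia per volt)
b = my - m * mx        # intercept (psia)
ss_res = sum((y - (m * x + b)) ** 2 for x, y in zip(volts, psia))
ss_tot = sum((y - my) ** 2 for y in psia)
r2 = 1.0 - ss_res / ss_tot  # coefficient of determination
print(m, b, r2)
```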
Calibration
The purpose of this section is to outline
the procedures for calibrating an
instrument while guaranteeing the
'goodness' of the calibration results.
Calibration is a measurement process
that assigns values to the property of an
instrument, or to the response of an
instrument, relative to reference
standards or to a designated
measurement process.
Purpose of calibration
The purpose of calibration is to
eliminate or reduce bias in the
user's measurement system
relative to the reference base. The
calibration procedure compares an
"unknown" test item or instrument
with reference standards according
to a specific algorithm.
Issues in calibration
Calibration ensures that the measurement
accuracy of all instruments used in a measurement
system is known over the whole range.
Environmental conditions during use must be the
same as those under which the instruments were
calibrated.
Under different environmental conditions,
appropriate corrections have to be made.
Instrument calibration has to be repeated at
prescribed intervals.
The magnitude of the drift in characteristics
depends on the amount of use, and even on ageing
effects during storage.
It is difficult, or even impossible, to determine
the required frequency of instrument recalibration.
Basic Instrument Calibration:
The User's Perspective
Users of electronic measurement instruments
have several key objectives in mind when
obtaining calibration/adjustment services, such
as:

Ensuring the validity (within specification) of
recent measurements.
Having a high confidence level that after a
calibration/adjustment process is completed, the
instrument will operate within specifications until
the next scheduled calibration/adjustment event.
Obtaining records/reports that satisfy the needs
of the user's company, including:
A calibration due date sticker on the product.
A calibration certificate showing a record of the
calibration type (usually a calibration laboratory
"standard" calibration, or a calibration that
complies with an industry standard).

Any out-of-tolerance points for the product that
are received prior to adjustments.
Calibration due reports that provide sufficient
time warning to allow for appropriate resource
planning.
Inventory tracking and status reporting of
instruments that are "out-of-service," denoting
where they are in the calibration/adjustment
process.
Controlling the cost of all the above
services to match the user's budget needs.
Keeping the "out-of-service" time period
to a minimum.
Making the calibration/adjustment
processes very reliable (that is, free of
errors).
Records that allow the user to evaluate
an instrument's performance and adjust
the time interval between
calibration/adjustment events based on
the instrument's performance with time
(drift rate).
The Manufacturer's
Perspective
Instrument manufacturers have operational
objectives that are very much in synch with those
of the users:
Products should easily meet their specifications
for the recommended period between
calibration/adjustment events, thereby reducing
the questions and concerns that instrument users
may have, and generating a very positive impact
on support costs and warranty claims/costs.
Calibration/adjustment service providers should
be able to perform the calibration/adjustment
process easily, efficiently and effectively.
Manufacturers know that keeping all costs
associated with calibration/adjustment to a
minimum will have a strong influence on users'
cost of ownership, and improve their instruments'
competitive position.
Minimize the equipment set needed to
perform calibration and adjustment, and
use equipment that is commonly
available in smaller calibration facilities
to increase the number of suppliers that
can provide "quality"
calibration/adjustment services.
Competition makes the cost of "quality"
services less expensive.
It is clear that the "quality" levels of the
calibration/adjustment services will
strongly dictate how well the issues and
objectives for both instrument users and
manufacturers will be met.
Process Instrument
Calibration
Calibration consists of comparing the output of the
process instrument being calibrated against the output
of a standard instrument of known accuracy, when the
same input is applied to both instruments.

Standard calibration instruments must be totally
separate.
The calibration function must be managed and
executed in a professional manner.
A separate room should be used for calibration
purposes.
Better environmental control should be applied in
the calibration area.
Level of environmental control should be
considered carefully.
Appropriate corrections must be made
for the deviation in the calibration
environmental conditions away from
those specified.
As far as management of calibration
function is concerned, it should be the
responsibility of only one person.
Calibration procedures which relate in any
way to measurements used for quality
control functions are controlled by British
Standard BS 5750.
Training must be adequate and targeted at the
particular needs of the calibration systems
involved

People must understand what they need to know
and, especially, why they must have this
information.
Determination of the frequency at which
instruments should be calibrated is dependent
upon several factors which require specialist
knowledge.
The quantities which affect the performance of an
instrument over a period of time are mechanical
wear, dirt, ambient temperature, and frequency of
usage.
A proper course of action must be
defined which describes the procedures
to be followed when an instrument is
found to be out of calibration.
Whatever system and frequency of
calibration is established, it is important
to review this from time to time to
ensure that the system remains
effective and efficient.
A separate proper maintenance record
should be kept for every instrument in
the factory, whether it is in use or kept
as a spare.
Standard laboratories
The instrument used for calibration
purposes is known as a secondary
instrument. This must obviously be a
very well-engineered instrument which
gives high accuracy and is stabilized
against drift in its performance with
time.
When the working standard instrument
has been calibrated by an authorized
standards laboratory, a calibration
certificate will be issued; this will
contain the following information:
The identification of the equipment
calibrated.
The calibration results obtained.
The measurement uncertainty.
Any use limitations on the equipment
calibrated.
The date of calibration.
The authority under which the
certificate is issued.
Important
The establishment of a company
standards laboratory to provide a
calibration facility of the required quality
is economically possible only in the case
of very large companies with large
number of instruments to be calibrated.
In small and medium size companies,
they would normally use the calibration
service provided by various companies,
which specialize in offering a standard
laboratory.
Validation of standards laboratories
In the United Kingdom, the
appropriate national standards
organization for validating standard
laboratories is the National Physical
Laboratory. This has established a
National Measurement Accreditation
Service (NAMAS) which monitors
both instrument calibration and
mechanical testing laboratories.
Conditions for standard
laboratories
The head of laboratory must be suitably
qualified.
The management structure should be
such that pressure to rush or skip
calibration procedures for production
reasons can be resisted.
Proper temperature and humidity
control must be provided.
High standard of cleanliness and
housekeeping must be maintained.
Full documentation must be maintained.
Primary reference standards
Primary reference standards describe
the highest level of accuracy that is
achievable in the measurement of any
physical quantity.
All equipment used in standards
laboratories has itself to be
calibrated against primary standards.
National standards organizations
maintain suitable facilities for this
calibration.
Traceability
Calibration has a chain-like
structure in which every instrument
in the chain is calibrated against a
more accurate instrument
immediately above it in the chain.
Knowledge of the full chain of
instruments involved in the
calibration procedure is known as
traceability.
Instrument Calibration chain

National standard organization

Standards laboratory

Company instrument laboratory

Process instruments
Documentation in the Workplace
An essential element in the
maintenance of the measurement
system and the operation of calibration
procedures is the provision of full
documentation.
This must give a full description of the
measurement requirements throughout
the workplace, the instrument used, and
the calibration system and procedures
operated.
Important points to be noted
Documentation must include a
statement of what measurement limits
have been defined.
The instruments specified for each
measurement must be listed.
The subject of calibration must be
defined.
Documentation must specify what
standard instruments are to be used for
the purpose.
Define a formal procedure of calibration.
A standard format for recording
Calibration result must be defined in the
documentation.
The documentation must specify
procedures which are to be followed if
an instrument is found to be outside the
calibration limits.
Labs 1A and 1B
Metric Micrometer
(Figure: metric micrometer reading example - reading 9.37 mm)
Inch-based micrometers
(Figure: inch micrometer reading example - reading 0.463 in)
Calipers
Using Statistics to Estimate Error
in Measurements

There's no such thing as a
perfect measurement!!
Uncertainty
The portion of the error that cannot be, or is not,
corrected for by calibration.
Can be determined experimentally or calculated.

Uncertainty (experimental)
Take multiple readings with a fixed known input value.
Uncertainty = ±3 sd, where sd is the standard deviation
of the readings about the mean calibrated reading for
the input EU value.
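The experimental recipe above (repeated readings, then ±3 sd) can be sketched with the standard library; the readings here are hypothetical:

```python
import statistics

# Repeated readings with a fixed known input (hypothetical values).
readings = [100.2, 99.8, 100.5, 99.9, 100.1, 100.3, 99.7, 100.0]

sd = statistics.stdev(readings)   # sample standard deviation
uncertainty = 3 * sd              # slide's Uncertainty = +/- 3 sd
print(f"mean = {statistics.mean(readings):.2f}, U = +/-{uncertainty:.2f}")
```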
Wing Area Example
S = c b   (chord c times span b)
∂S/∂c = b,   ∂S/∂b = c
U_S = [ (∂S/∂c U_c)^2 + (∂S/∂b U_b)^2 ]^(1/2)
    = [ (b U_c)^2 + (c U_b)^2 ]^(1/2)
Dividing by S = c b:
U_S/S = [ (U_c/c)^2 + (U_b/b)^2 ]^(1/2)
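The wing area example S = c·b can be evaluated numerically; the chord, span, and uncertainty values below are hypothetical:

```python
import math

# Rectangular wing: S = c * b (chord x span); hypothetical values.
c, b = 1.50, 9.00        # m
U_c, U_b = 0.01, 0.05    # measurement uncertainties, m (same 95% CI)

S = c * b
U_S = math.sqrt((b * U_c) ** 2 + (c * U_b) ** 2)  # RSS propagation
print(S, U_S, U_S / S)  # area, absolute and relative uncertainty
```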
L
CL
QS Lift Example
1
C 2
C L
2
C L
2 2

CL L
L Q S
L Q S
C L 1

L Q S
C L L
2
Q Q S
C L L

S QS 2

1
1
2
L
2
L
2 2

CL L 2 Q S
QS Q S QS
2

Uncertainty Estimation
When we measure some physical quantity
with an instrument and obtain a numerical
value, we want to know how close this
value is to the true value. The difference
between the true value and the measured
value is the error. Unfortunately, the true
value is in general unknown and
unknowable. Since this is the case, the
exact error is never known. We can only
estimate error.
Types of Errors
Difference between measured result and true value.
Illegitimate errors
Blunders result from mistakes in procedure. You must be careful.
Computational or calculation errors after the experiment.

Bias or Systematic errors
An error that persists and cannot be considered to exist entirely by
chance. This type of error tends to stay constant from trial to trial.
(e.g., zero offset)
Systematic errors can be corrected through calibration
Faulty equipment: instrument always reads 3% high or low
Consistent or recurring human errors: observer bias
This type of error cannot be studied theoretically but can be
determined by comparison to theory or by alternate measurements.
Types of Errors (cont.)
Random or Precision errors:
The deviation of the measurement from the true value
resulting from the finite precision of the measurement
method being used.
Instrument friction or hysteresis
Errors from calibration drift
Variation of procedure or interpretation by experimenters
Test condition variations or environmental effects

Reduce random errors by conducting more
experiments / taking more data.
Grouping & Categorizing Error
Sources
Calibration
Laboratory certification of equipment
Data Acquisition
Errors in data acquisition equipment
Data Reduction
Errors in computers and calculators
Errors of Method
Personal errors/blunders
How to Combine Bias and Precision Error?
Rules for combining independent uncertainties
for measurements: both uncertainties MUST be
at the same CI.
RSS (Root-Sum-Square) Method
Provides 95% CI coverage
Most commonly used; we will use this method throughout the
course

U_x = ( B_x^2 + P_x^2 )^(1/2)

ADD (Addition) Method
Provides 99% CI coverage
Used in aerospace applications; more conservative
U_x,ADD = B_x + P_x
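The two combination rules differ only in how the bias and precision terms are summed; a minimal comparison with hypothetical error values at the same CI:

```python
import math

B_x = 0.30  # bias error (hypothetical, 95% CI)
P_x = 0.40  # precision error (same CI)

U_rss = math.sqrt(B_x ** 2 + P_x ** 2)  # RSS: ~95% coverage
U_add = B_x + P_x                       # ADD: ~99% coverage, conservative
print(U_rss, U_add)  # U_rss = 0.5, U_add = 0.7 (the ADD bound is larger)
```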
How to Estimate Bias Error
Manufacturer's Specifications
Assume manufacturer is giving max. error
Accuracy - %FS, %reading, offset, or some combination
(e.g., 0.1% reading+0.15 counts)
These are generally given at a 95% confidence interval
Independent Calibration
Device is calibrated to known accuracy
Regression techniques and accuracy of standards
Use smallest readable division
Typically 1/2 or 1/4 smallest division (judgment call)

Summing Bias Error

B_total = ( Σ B_i² )^(1/2)
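A minimal sketch of summing elemental bias sources with this root-sum-square rule (the source values and their grouping are made-up illustrations, all assumed to share the same units and CI):

```python
import math

# Hypothetical elemental bias estimates for one measurement chain,
# e.g. calibration, data acquisition, and data reduction sources.
bias_sources = [0.2, 0.1, 0.2]

# B_total = ( Σ B_i² )^(1/2)
B_total = math.sqrt(sum(b**2 for b in bias_sources))
print(B_total)  # ≈ 0.3
```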
General Uncertainty Analysis
The estimate of possible error is called uncertainty.
Includes both bias and precision errors.
Need to identify all errors for the instrument(s).
All measurements should be given in three parts:
Best value/average value
Confidence limits or uncertainty interval
Specified probability/confidence interval (typically 95% C.I.)

Uncertainty can be expressed in either absolute terms
(i.e., 5 Volts ± 0.5 Volts)
or in percentage terms
(i.e., 5 Volts ± 10%) (relative uncertainty = ΔV/V)

**Always use a 95% confidence interval throughout this course
Propagation of Error
Used to determine uncertainty of a
quantity that requires measurement
of several independent variables.
Volume of a cylinder = f(D,L)
Volume of a block = f(L,W,H)
Density of a gas = f(P,T)

Again, all variables must have the
same confidence interval to use this
method and be in proper dimensions.
RSS Method (Root Sum Squares)
For a function R(x1, x2, ..., xN), the RSS uncertainty is given by:

U_R = sqrt[ (∂R/∂x1 · U_x1)² + (∂R/∂x2 · U_x2)² + ... + (∂R/∂xN · U_xN)² ]
Rules
Rule 1: Always solve the data reduction equation for the
experimental results before doing the uncertainty analysis.
Rule 2: Always try to divide the uncertainty analysis expression by
the experimental result to see if it can be simplified.

Determine uncertainty in each independent variable in the
form (x_N ± Δx_N).
Use previously established methods including bias and precision
error.
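The two rules above can be sketched for the cylinder-volume case mentioned earlier, V = f(D, L). The measured values and their uncertainties here are made-up illustrations:

```python
import math

# Cylinder volume V = pi * D**2 * L / 4, with D and L measured
# independently. Illustrative values with 95%-CI uncertainties:
D, U_D = 2.0, 0.02   # diameter and its uncertainty
L, U_L = 10.0, 0.05  # length and its uncertainty

# Rule 1: solve the data reduction equation for the result first.
V = math.pi * D**2 * L / 4

# RSS propagation using the partial derivatives dV/dD and dV/dL.
dV_dD = math.pi * D * L / 2
dV_dL = math.pi * D**2 / 4
U_V = math.sqrt((dV_dD * U_D)**2 + (dV_dL * U_L)**2)

# Rule 2: dividing through by V simplifies to the relative form,
# (U_V/V)² = (2·U_D/D)² + (U_L/L)².
rel = math.sqrt((2 * U_D / D)**2 + (U_L / L)**2)
print(U_V / V, rel)  # the two relative uncertainties agree
```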
RSS Method (Special Function Form)
For relationships that are pure
products or quotients a simple
shortcut can be used to estimate
propagation of error.

R = k · x1^a · x2^b · x3^c ...

(U_R/R)² = a²·(U_x1/x1)² + b²·(U_x2/x2)² + c²·(U_x3/x3)² + ...
Example Problem: Propagation of Error
Ideal gas law: ρ = P/(RT)

Temperature: T ± ΔT
Pressure: P ± ΔP
R = Constant

How do we estimate the error in the density?

Apply RSS formula to the density relationship:

Δρ_RSS² = (∂ρ/∂P · ΔP)² + (∂ρ/∂T · ΔT)² = (ΔP/(RT))² + (P·ΔT/(RT²))²

Apply a little algebra, using ρ = P/(RT):

(Δρ/ρ)² = (ΔP/P)² + (ΔT/T)²
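A numerical sketch of this result (the gas conditions and uncertainty values are illustrative assumptions, roughly air at room conditions, not values from the notes):

```python
import math

# rho = P / (R*T); relative form: (Δrho/rho)² = (ΔP/P)² + (ΔT/T)²
P, dP = 101325.0, 500.0   # pressure and its uncertainty, Pa
T, dT = 293.0, 1.0        # temperature and its uncertainty, K
R = 287.0                 # J/(kg·K), specific gas constant for air

rho = P / (R * T)                               # best value of density
rel_rho = math.sqrt((dP / P)**2 + (dT / T)**2)  # relative uncertainty
print(rho, rho * rel_rho)  # density and its absolute uncertainty
```

Note that because ρ is a pure product/quotient of P and T, this is exactly the special-function-form shortcut with exponents a = 1 and b = -1.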
Uncertainty Analysis in EES
Uncertainty Calculation in
EES
Experimental Data Analysis
References
ASHRAE, 1996. Engineering Analysis of Experimental Data, ASHRAE Guideline 2-1996.

Dieck, R.H., 1992. Measurement Uncertainty, Methods and Applications, ISA.

Coleman, H.W. and Steele, W.G., 1989. Experimentation and Uncertainty Analysis for Engineers.
Plotting and Data Analysis with
MicroSoft Excel
Outline
Basic Plotting with Excel
Regression Analysis
Example
Basic Plotting with Excel
Plotting Experimental Data
X-Y Plots
RULE: Data points are discrete;
therefore they should be represented
by symbols. Do not connect symbols
with lines. Functions, on the other
hand, are continuous, hence they
should be represented by lines.
Basic Plotting with Excel
Create the basic plot.
Format the axis and titles
Axes should have clear labels and
units
e.g., Pressure, P (Pa)
Adjust the scale to maximize the
amount of plot space occupied by the
data.
Tick marks should be used
Add Greek letters.
Basic Plotting with Excel
Format the data series
Use open symbols before solid
symbols
Add legend if needed
Add error bars linked to the
worksheet.
Add additional data sets.
Plotting Common Sense
Colors and Font
Do not use Excel Chart Defaults
Black points are difficult to see on a gray
background.
Remove unnecessary borders and headers like
Sheet 1
Prepare the plot in Black & White only.
Color plots look nice in presentations and
reports, but office copiers and publishers are
still B&W only.
To a copier red and yellow both appear gray.
Format text for clarity
Superscript
Greek Symbols
Plotting Common Sense
Trend Line dos and don'ts
Avoid using Insert Trend Line because it
only gives slope, intercept, and R².
Use the Analysis ToolPak instead.
Use Insert Trend Line to obtain
polynomial fits only when a curve fit for
the data is required and one is not
concerned with the underlying physics.
DO NOT insert trend lines for cosmetic
reasons.
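For comparison, a minimal Python sketch of the extra quantities a full regression analysis reports beyond slope, intercept, and R², namely the standard errors of the fitted coefficients (the data points are made-up illustrations):

```python
import math

# Ordinary least-squares fit of y = m*x + b, with standard errors
# on the slope m and intercept b.
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 7.8, 10.1]

n = len(x)
xbar = sum(x) / n
ybar = sum(y) / n
Sxx = sum((xi - xbar)**2 for xi in x)
Sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))

m = Sxy / Sxx          # slope
b = ybar - m * xbar    # intercept

# Residual standard error, then standard errors of the coefficients.
resid = [yi - (m * xi + b) for xi, yi in zip(x, y)]
s = math.sqrt(sum(r**2 for r in resid) / (n - 2))
se_m = s / math.sqrt(Sxx)
se_b = s * math.sqrt(1.0 / n + xbar**2 / Sxx)

print(m, b, se_m, se_b)
```

These standard errors are what let you attach confidence intervals to a fitted slope, which a bare trend line cannot do.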
Measurements Lab Reporting
Requirements
Present the plot, clearly labeled, error bars,
etc.
If the plot is included directly in the body of
a report, do not insert a title. Use figure
captions to describe the plot.
Present the original worksheet used to
analyze and plot the data so that we can
spot mistakes and give partial credit.
Also, neatly format and annotate the
worksheet so that we can follow your analysis.
Sample calculations (longhand or computer
generated) of the data and uncertainty
analysis so that we can give partial credit.
Instrument Classification
1) Active instruments
2) Passive instruments
Instruments are either active or
passive according to whether the
instrument output is entirely
produced by the quantity being
measured or whether the quantity
being measured simply modulates
the magnitude of some external
power source.
Examples
An example of passive
instrument is the pressure
measuring device in which
the pressure of the fluid is
translated into movement of
a pointer against a scale.
An example of an active
instrument is a float-type
petrol-tank level indicator.
Points to remember
In active instruments the external
power source is usually electrical in
form, but in some cases it can be
pneumatic or hydraulic.
One important difference between
active and passive instruments is
the level of measurement resolution
which can be obtained.
Passive instruments are normally
cheaper to manufacture than active
instruments.
Static characteristics of
instruments
Accuracy
Precision/repeatability
Tolerance
Range or span
Bias
Linearity
Sensitivity of measurement
Sensitivity to disturbance
Hysteresis
Dead space
Threshold
Resolution
Tolerance:
Tolerance is a term which is closely
related to accuracy and defines the
maximum error which is to be
expected in some value.

Range or Span:
The range or span of an instrument
defines the minimum and maximum
values of a quantity that the
instrument is designed to measure.
Bias:
Bias describes a constant error which
exists over the full range of measurement
of an instrument.
Linearity:
It is normally desirable that the output
reading of an instrument is linearly
proportional to the quantity being
measured.
Sensitivity:
Sensitivity is a measure of the change in
instrument output which occurs when the
quantity being measured changes by a
given amount.
Sensitivity to disturbance:
Sensitivity to disturbance is a measure
of the magnitude of change in the static
characteristics of an instrument due to
environmental changes.

Such changes affect instruments in two
main ways:

1) Zero drift
2) Sensitivity drift (scale factor drift)
Hysteresis:
The non-coincidence
between loading
and unloading
curves is known as
hysteresis.

Dead space:
Dead space is
defined as the range
of different input
values over which
there is no change
in output value.
Threshold:
The minimum level of input before
the change in the instrument output
reading is of large enough magnitude
to be detectable is known as the
threshold of the instrument.
Resolution:
The minimum reading that can be
taken from instruments.
The Dynamic Response of
Measuring Instruments
The dynamic response of a
measuring instrument is the
change in the output y caused by a
change in the input x. Both x and y
are functions of time t .
Classes of Linear Instruments
Zero Order Instruments
First Order Instruments
Second Order Instruments
Zero Order Instruments
A zero order linear instrument has an
output which is proportional to the
input at all times in accordance with
the equation
y(t) = Kx(t), where K is a constant
called the static gain of the
instrument.
The static gain is a measure of the
sensitivity of the instrument.
An example of a zero order linear
instrument is a wire strain gauge in
which the change in the electrical
resistance of the wire is proportional
to the strain in the wire.
First Order Instruments
A first order linear instrument has an
output which is given by a non-
homogeneous first order linear differential
equation
τ·dy(t)/dt + y(t) = K·x(t), where τ is a
constant, called the time constant of the
instrument.
In these instruments there is a time delay
in their response to changes of input. The
time constant τ is a measure of the time
delay.
Thermometers for measuring temperature
are first-order instruments.
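The time-delay behavior can be sketched with the analytic step response of a first-order instrument (the values of K and τ are illustrative assumptions):

```python
import math

# First-order instrument: τ·dy/dt + y = K·x.
# For a unit step input x = 1 applied at t = 0 with y(0) = 0,
# the response is y(t) = K·(1 - exp(-t/τ)).
K = 1.0    # static gain (illustrative)
tau = 2.0  # time constant in seconds (illustrative)

def y(t):
    return K * (1.0 - math.exp(-t / tau))

print(y(tau))      # ≈ 0.632·K, i.e. ~63.2% of the final value at t = τ
print(y(5 * tau))  # essentially settled after about five time constants
```

This is why the time constant is read directly off a thermometer's response curve: it is the time to reach about 63.2% of the final reading.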
Second Order Instruments
A second order linear instrument
has an output which is given by a non-
homogeneous second order linear
differential equation

d²y(t)/dt² + 2ρω·dy(t)/dt + ω²·y(t) = K·ω²·x(t),

where ρ is a constant, called the
damping factor of the instrument, and
ω is a constant called the natural
frequency of the instrument.
Under a static input a second order
linear instrument tends to oscillate
about its position of equilibrium. The
natural frequency of the instrument is
the frequency of these oscillations.
Friction in the instrument opposes
these oscillations with a strength
proportional to the rate of change of
the output. The damping factor is a
measure of this opposition to the
oscillations.
An example of a second order linear instrument is a
galvanometer which measures an electrical current by
the torque on a coil carrying the current in a magnetic
field. The rotation of the coil is opposed by a spring.
The strength of the spring and the moment of inertia
of the coil determine the natural frequency of the
instrument. The damping of the oscillations is by
mechanical friction and electrical eddy currents.

Another example of a second order linear instrument


is a U-tube manometer for measuring pressure
differences. The liquid in the U-tube tends to oscillate
from side to side in the tube with a frequency
determined by the weight of the liquid. The damping
factor is determined by viscosity in the liquid and
friction between the liquid and the sides of the tube.
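The damped oscillation described above can be sketched numerically (the values of K, ρ, and ω are illustrative assumptions, and the simple explicit integration scheme is mine, not from the notes):

```python
# Second-order instrument: y'' + 2ρω·y' + ω²·y = K·ω²·x.
# Integrate the step response of an underdamped case (ρ < 1),
# which oscillates about and then settles to the equilibrium K·x.
K, rho, omega = 1.0, 0.3, 10.0  # gain, damping factor, natural frequency
x = 1.0                         # step input
dt = 1e-4                       # time step, s

y, v = 0.0, 0.0  # output and its rate of change, both initially zero
for _ in range(int(5.0 / dt)):  # integrate 5 s, long enough to settle
    a = K * omega**2 * x - 2 * rho * omega * v - omega**2 * y
    v += a * dt   # semi-implicit Euler update of the rate
    y += v * dt   # then update the output with the new rate

print(y)  # close to the equilibrium K·x = 1.0 once damped out
```

With ρ = 0.3 the envelope of the oscillation decays as exp(-ρωt), so after 5 s (about 15 envelope time constants) the output has settled to the equilibrium value.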
