Test and Measurement: Know It All
Ebook: 1,822 pages, 16 hours


About this ebook

The Newnes Know It All Series takes the best of what our authors have written to create hard-working desk references that will be an engineer's first port of call for key information, design techniques and rules of thumb. Guaranteed not to gather dust on a shelf!

Field application engineers need to master a wide range of topics to excel. The Test and Measurement Know It All covers every angle, including Machine Vision and Inspection, Communications Testing, and Compliance Testing, along with Automotive, Aerospace, and Defense testing.
  • A 360-degree view from our best-selling authors
  • Topics include the Technology of Test and Measurement, Measurement System Types, and Instrumentation for Test and Measurement
  • The ultimate hard-working desk reference; all the essential information, techniques and tricks of the trade in one volume
Language: English
Release date: Sep 26, 2008
ISBN: 9780080949680

    Book preview

    Test and Measurement - Jon S. Wilson


    Brief Table of Contents

    Copyright

    About the Authors

    I. Measurement Technology and Techniques

    Chapter 1. Fundamentals of Measurement

    Chapter 2. Sensors and Transducers

    Chapter 3. Data Acquisition Hardware and Software

    II. Measurement Systems

    Chapter 4. Overview of Measurement Systems

    Chapter 5. Acceleration, Shock, and Vibration

    Chapter 6. Flow

    Chapter 7. Temperature

    Chapter 8. Pressure

    Chapter 9. Position

    Chapter 10. Strain Gauges, Load Cells, and Weighing

    Chapter 11. Light

    III. Instrumentation Design Techniques for Test and Measurement

    Chapter 12. Signal Processing and Conditioning

    Chapter 13. Interfacing and Data Communications

    Chapter 14. Data Acquisition Software

    Chapter 15. Scaling and Calibration

    Chapter 16. Synthetic Instruments

    Chapter 17. Real-World Measurement Applications

    IV. Circuit and Board Testing

    Chapter 18. Testing Methods

    Chapter 19. Boundary Scan Techniques

    Chapter 20. Inspection Test

    V. EMC and RF Emissions Testing and Measurement

    Chapter 21. EMC Fundamentals

    Chapter 22. Measuring RF Emissions

    Chapter 23. Test Methods

    Chapter 24. Test Planning

    VI. Accelerated Testing

    Chapter 25. Accelerated Testing Fundamentals

    Chapter 26. HALT and FMVT

    Online Chapter. Hall-Effect Sensors

    Online Chapter. Sampling

    Online Chapter. Wireless Systems

    Table of Contents

    Copyright

    About the Authors

    I. Measurement Technology and Techniques

    Chapter 1. Fundamentals of Measurement

    1.1. Introduction

    1.2. Fundamental Concepts

    1.2.1. Measurand and Influence Quantity

    1.2.2. True Value (of a Quantity)

    1.2.3. Nominal Value and Conventional True Value

    1.2.4. Error and Relative Error of Measurement

    1.2.5. Random Error

    1.2.6. Systematic Error

    1.2.7. Accuracy and Precision

    1.2.8. Calibration

    1.2.9. Hierarchy of Measurement Standards

    1.2.10. Traceability

    1.2.11. Test Uncertainty Ratio

    1.2.12. Resolution, Discrimination, and Sensitivity

    1.2.13. Tolerance

    1.2.14. Repeatability of Measurements

    1.2.15. Reproducibility of Measurements

    Bibliography

    Chapter 2. Sensors and Transducers

    2.1. Basic Sensor Technology

    2.1.1. Sensor Data Sheets

    2.1.2. Sensor Performance Characteristics Definitions

    2.1.3. Sensor Performance Characteristics of an Example Device

    2.1.4. Introduction to Sensor Electronics

    2.1.5. Types of Sensors

    2.1.6. Sensor Limitations

    2.1.7. Filters

    2.1.8. Operational Amplifiers

    2.2. Sensor Systems

    2.3. Application Considerations

    2.4. Sensor Characteristics

    2.5. System Characteristics

    2.6. Instrument Selection

    2.6.1. Sensor

    2.6.2. Cable

    2.6.3. Power Supply

    2.6.4. Amplifier

    2.7. Data Acquisition and Readout

    2.8. Installation

    2.8.1. Sensor

    2.8.2. Cement Mounting

    2.8.3. Cable

    2.8.4. Power Supply, Amplifier, and Readout

    2.9. Measurement Issues and Criteria

    Chapter 3. Data Acquisition Hardware and Software

    3.1. ADCs

    3.1.1. Reference Voltage

    3.1.2. Output Word

    3.1.3. Resolution

    3.2. Types of ADCs

    3.2.1. Tracking ADC

    3.2.2. Flash ADC

    3.2.3. Successive Approximation Converter

    3.2.4. Dual-Slope (Integrating) ADC

    3.2.5. Sigma-Delta

    3.2.6. Half-Flash

    3.3. ADC Comparison

    3.4. Sample and Hold

    3.5. Real Parts

    3.5.1. Input Levels

    3.5.2. Internal Reference

    3.5.3. Reference Bypassing

    3.5.4. Internal S/H

    3.6. Microprocessor Interfacing

    3.6.1. Output Coding

    3.6.2. Parallel Interfaces

    3.6.3. Data Access Time

    3.6.4. –BUSY Output

    3.6.5. Bus Relinquish

    3.6.6. Coupling

    3.6.7. Delay between Conversions

    3.6.8. LSB Errors

    3.7. Clocked Interfaces

    3.8. Serial Interfaces

    3.8.1. SPI/Microwire

    3.8.2. I2C Bus

    3.8.3. SMBus

    3.8.4. Proprietary Serial Interfaces

    3.9. Multichannel ADCs

    3.10. Internal Microcontroller ADCs

    3.10.1. Reference Voltage

    3.11. Codecs

    3.12. Interrupt Rates

    3.13. Dual-Function Pins on Microcontrollers

    3.14. Design Checklist

    II. Measurement Systems

    Chapter 4. Overview of Measurement Systems

    4.1. Transducers

    4.2. Methods of Measurement

    4.3. Sensitivity

    4.4. Zero, Range, Linearity, and Span

    4.5. Resolution, Hysteresis, and Error

    4.6. Fourier Analysis

    4.7. Dynamic Response

    4.8. PID Control

    4.9. Accuracy and Repeatability

    4.10. Mechanical Models

    Chapter 5. Acceleration, Shock, and Vibration

    5.1. Introduction

    5.2. Technology Fundamentals

    5.2.1. Piezoelectric Accelerometer

    5.2.2. Piezoresistive Accelerometers

    5.2.3. Capacitive Accelerometers

    5.2.4. Servo (or Force Balance) Accelerometers

    5.3. Selecting and Specifying Accelerometers

    5.4. Applicable Standards

    5.5. Interfacing and Designs

    5.5.1. Stud Mounting

    5.5.2. Adhesive Mounting

    5.5.3. Magnetic Mounting

    5.5.4. Probe Tips

    5.5.5. Ground Isolation, Ground Noise, and Ground Loops

    5.5.6. Cables and Connections

    5.5.7. Latest and Future Developments

    5.6. Machinery Vibration Monitoring Sensors

    5.6.1. Introduction

    5.6.2. Technology Fundamentals

    5.6.3. Accelerometer Types

    5.6.4. Selecting Industrial Accelerometers

    5.6.5. Applicable Standards

    5.6.6. Latest and Future Developments

    5.6.7. Sensor Manufacturers

    References and Resources

    Chapter 6. Flow

    6.1. General

    6.2. Differential Pressure Flowmeters

    6.3. Turbine Flowmeters

    6.4. Vortex Shedding Flowmeters

    6.5. Electromagnetic Flowmeters

    6.6. Ultrasonic Flowmeters

    6.7. Hot Wire Anemometer

    6.8. Mass Flowmeters

    Chapter 7. Temperature

    7.1. Temperature Scales

    7.2. Types of Temperature Sensors

    7.3. Measurement Errors

    7.3.1. Calibration Errors

    7.3.2. Sensor Self-Heating

    7.3.3. Electrical Noise

    7.3.4. Mechanical Stress

    7.3.5. Thermal Coupling

    7.3.6. Sensor Time Constant

    7.3.7. Sensor Leads

    7.4. Selecting a Temperature Sensor

    7.5. Thermocouple Temperature Sensors

    7.5.1. Thermocouple Types

    7.6. RTD Temperature Sensors

    7.6.1. RTD Principles

    7.6.2. RTD Types

    7.6.3. RTD Temperature–Resistance Relationship

    7.6.4. RTD Standards

    7.6.5. Practical RTD Circuits

    7.6.6. Microcontroller-Based RTD Temperature Measurement

    7.7. Thermistor Temperature Sensors

    7.7.1. Thermistor Principles

    7.7.2. Thermistor Types

    7.7.3. Self-Heating

    7.7.4. Thermal Time Constant

    7.7.5. Thermistor Temperature–Resistance Relationship

    7.7.6. Practical Thermistor Circuits

    7.7.7. Microcontroller-Based Temperature Measurement

    7.8. Integrated Circuit Temperature Sensors

    7.8.1. Voltage Output Temperature Sensors

    7.8.2. Current Output Temperature Sensors

    7.8.3. Digital Output Temperature Sensors

    Chapter 8. Pressure

    8.1. Introduction

    8.2. SI and Other Units

    8.3. Absolute, Gauge, and Differential Pressure Modes

    8.4. Primary Standards

    8.4.1. Mercury Manometer

    8.4.2. Deadweight Pressure Tester

    8.5. Spinning Ball Gauge Standard

    8.6. Secondary Standards

    8.6.1. Capacitance Pressure Standard

    8.7. Working Standards

    8.7.1. Deadweight Pressure Tester

    8.7.2. Portable Pressure Standard (Pressure Calibrator)

    8.8. Pressure Measuring Instruments

    8.8.1. Liquid Column Instruments

    8.8.2. Mechanical Deformation Instruments

    8.8.3. Indirect Instruments

    8.9. Calibration of Pressure Standards and Instruments

    8.9.1. General Considerations

    8.9.2. Calibration of Working Standard Deadweight Pressure Testers

    8.9.3. Calibration of Vacuum Gauges

    Bibliography

    Chapter 9. Position

    9.1. Mechanical Switch

    9.2. Potentiometric Sensor

    9.3. Capacitive Transducer

    Examples

    9.4. LVDT

    9.5. Angular Velocity Transducer

    9.5.1. Toothed-Rotor Magnetic Tachometer

    9.5.2. Drag-Torque Tachometer (Motor Vehicle Speedometer)

    9.6. Position-Sensitive Diode Array

    9.7. Motion Control

    9.7.1. Rotary Encoder

    9.7.2. Linear Encoder

    Chapter 10. Strain Gauges, Load Cells, and Weighing

    10.1. Introduction

    10.2. Stress and Strain

    10.3. Strain Gauges

    10.4. Bridge Circuits

    10.5. Load Cells

    10.6. Weighing Systems

    Chapter 11. Light

    11.1. Light

    11.2. Measuring Light

    11.3. Standards of Measurement

    11.4. Thermal Detectors

    11.5. Light-Dependent Resistor

    11.6. Photodiode

    11.7. Other Semiconductor Photodetectors

    11.8. Optical Detectors

    11.9. Photomultiplier

    III. Instrumentation Design Techniques for Test and Measurement

    Chapter 12. Signal Processing and Conditioning

    12.1. Conditioning Bridge Circuits

    12.1.1. Introduction

    12.1.2. Bridge Circuits

    12.1.3. Amplifying and Linearizing Bridge Outputs

    12.1.4. Driving Bridges

    12.2. Amplifiers for Signal Conditioning

    12.2.1. Introduction

    12.2.2. Precision Op-Amp Characteristics

    12.2.3. Amplifier DC Error Budget Analysis

    12.2.4. Single-Supply Op-Amps

    12.2.5. Instrumentation Amplifiers (In-Amps)

    12.2.6. Chopper-Stabilized Amplifiers

    12.2.7. Isolation Amplifiers

    References

    Chapter 13. Interfacing and Data Communications

    13.1. Interfacing

    13.2. Input/Output Ports

    13.3. Polling

    Interfacing in a Multitasking Environment

    13.4. Interrupts

    13.5. Direct Memory Access (DMA)

    13.6. Serial Port

    13.7. Serial Port Addresses

    13.8. Serial Port Registers

    13.9. Serial Port Registers and Interrupts

    13.10. Serial Port Baud Rate

    13.11. Serial Port Operation

    13.12. Parallel Printer Port

    13.13. Parallel Port Registers

    13.14. Parallel Printer Port Operation

    13.15. Communications

    13.16. Byte-to-Serial Conversion

    13.17. RS232 Interface

    13.18. Synchronization

    13.19. UART (6402)

    13.20. Line Drivers

    13.21. UART Clock

    13.22. UART Master Reset

    13.23. Null Modem

    13.24. Serial Port BIOS Services

    13.25. Serial Port Operation in BASIC

    13.26. Hardware Handshaking

    13.27. RS485

    13.28. GPIB

    13.29. USB

    13.30. TCP/IP

    Chapter 14. Data Acquisition Software

    14.1. An Overview of DA&C Software

    14.1.1. Program Configuration Routines

    14.1.2. Diagnostic Modules

    14.1.3. System Maintenance and Calibration Modules

    14.1.4. Run-Time Modules

    14.1.5. Drivers

    14.1.6. Data Analysis Modules

    14.2. Data Acquisition and Control in Real Time

    14.2.1. Requirements of Real-Time DA&C Systems

    14.2.2. Simple DA&C Systems

    14.2.3. Systems with More Stringent Timing Requirements

    14.2.4. Concurrent Processing

    14.2.5. Real-Time Design Considerations: A Brief Overview

    14.3. Implementing Real-Time Systems on the PC

    14.3.1. The BIOS

    14.3.2. DOS

    14.3.3. DOS Extenders and DPMI

    14.3.4. Microsoft Windows

    14.3.5. Other Desktop Operating Systems

    14.3.6. Other Real-Time Operating Systems

    14.3.7. Summary

    14.4. Robustness, Reliability, and Safety

    14.4.1. Software Production and Testing

    14.4.2. System Monitoring and Error Checks

    Chapter 15. Scaling and Calibration

    15.1. Scaling of Linear Response Curves

    15.1.1. The Offset

    15.1.2. Scaling from Known Sensitivities

    15.1.3. Two- and Three-Point Prime Calibration

    15.1.4. Accuracy of Prime Calibration

    15.1.5. Multiple-Point Prime Calibration

    15.1.6. Applying Linear Scaling Parameters to Digitized Data

    15.2. Linearization

    15.3. Polynomial Linearization

    15.3.1. Polynomial Least-Squares Fitting

    15.3.2. Linearization with Power-Series Polynomials

    15.3.3. Polynomials in Other Functions

    15.4. Interpolation between Points in a Lookup Table

    15.4.1. Sorting the Table of Calibration Points

    15.4.2. Searching the Lookup Table

    15.4.3. Interpolation

    15.5. Interpolation vs. Power-Series Polynomials

    15.6. Interactive Calibration Programs

    15.6.1. The User Interface

    15.7. Practical Issues

    15.7.1. Flexible Calibration Sequence

    15.7.2. Offset Correction

    15.7.3. Generating a Precise Measurand

    15.7.4. Remote Indication and Control

    15.7.5. Security

    15.7.6. Traceability

    Chapter 16. Synthetic Instruments

    16.1. What Is a Synthetic Instrument?

    16.2. History of Automated Measurement

    16.2.1. Genesis

    16.2.2. Modular Instruments

    16.3. Synthetic Instruments Defined

    16.3.1. Fundamental Definitions

    16.3.2. Synthesis and Analysis

    16.3.3. Generic Hardware

    16.3.4. Dilemma—All Instruments Created Equal

    16.4. Advantages of Synthetic Instruments

    16.4.1. Eliminating Redundancy

    16.4.2. Stimulus Response Closure: The Calibration Problem

    16.4.3. Measurement Integration

    16.4.4. Measurement Speed

    16.4.5. Longer Service Life

    16.5. Synthetic Instrument Misconceptions

    16.5.1. Why Not Just Measure Volts with a Voltmeter?

    16.5.2. Virtual Instruments

    16.5.3. Analog Instruments

    16.6. Synthetic Measurement System Hardware Architectures

    16.7. System Concept—The CCC Architecture

    16.7.1. Signal Flow

    16.7.2. The Synthetic Measurement System

    16.7.3. Chinese Restaurant Menu (CRM) Architecture

    16.7.4. Parameterization of CCC Assets

    16.7.5. Architectural Variations

    16.7.6. Compound Stimulus

    16.7.7. Simultaneous Channels and Multiplexing

    16.8. Hardware Requirements Traceability

    16.9. Stimulus

    16.10. Stimulus Digital-Signal Processing

    16.10.1. Waveform Playback

    16.10.2. Direct Digital Synthesis

    16.10.3. Algorithmic Sequencing

    16.10.4. Synthesis Controller Considerations

    16.11. Stimulus Triggering

    16.11.1. Stimulus Trigger Interpolation

    16.12. The Stimulus D/A

    16.12.1. Interpolation and Digital Up-Converters in the Codec

    16.13. Stimulus Conditioning

    16.13.1. Stimulus Conditioner Linearity

    16.13.2. Gain Control

    16.13.3. Adaptive Fidelity Improvement

    16.13.4. Reconstruction Filtering

    16.14. Stimulus Cascade—Real-World Example

    16.15. Real-World Design: A Synthetic Measurement System

    16.16. Universal High-Speed RF Microwave Test System

    16.16.1. Background

    16.16.2. Logistical Goals

    16.16.3. Technical Goals

    16.16.4. RF Capabilities

    16.17. System Architecture

    16.17.1. Microwave Synthetic Instrument (TRM1000C)

    16.17.2. Supplemental Resources

    16.18. DUT Interface

    16.18.1. Product Test Adapter Solutions

    16.19. Calibration

    16.19.1. Primary Calibration

    16.19.2. Operational Calibration

    16.20. Software Solutions

    16.20.1. Test Program Set Developer Interface

    16.20.2. TRM1000C Software

    16.21. Conclusions

    References

    Chapter 17. Real-World Measurement Applications

    17.1. Introduction

    17.2. Applications of Precision-Measurement Σ-Δ ADCs

    17.3. Weigh Scale Design Analysis Using the AD7730 ADC

    17.4. Thermocouple Conditioning Using the AD7793 ADC

    17.5. Direct Digital Temperature Measurements

    17.6. Microprocessor Substrate Temperature Sensors

    17.7. Applications of ADCs in Power Meters

    References

    IV. Circuit and Board Testing

    Chapter 18. Testing Methods

    18.1. The Order-of-Magnitude Rule

    18.2. A Brief (Somewhat Apocryphal) History of Test

    18.3. Test Options

    18.3.1. Analog Measurements

    18.3.2. Shorts-and-Opens Testers

    18.3.3. Manufacturing-Defects Analyzers

    18.3.4. In-Circuit Testers

    18.3.5. Bed-of-Nails Fixtures

    18.3.6. Bed-of-Nails Probe Considerations

    18.3.7. Opens Testing

    18.3.8. Other Access Issues

    18.3.9. Functional Testers

    18.3.10. Functional Tester Architectures

    18.3.11. Finding Faults with Functional Testers

    18.3.12. Two Techniques, One Box

    18.3.13. Hot Mockup

    18.3.14. Architectural Models

    18.3.15. Other Options

    18.4. Summary

    Chapter 19. Boundary Scan Techniques

    19.1. Latch-Scanning Arrangements

    19.2. Enter Boundary Scan

    19.3. Hardware Requirements

    19.4. Modes and Instructions

    19.5. Implementing Boundary Scan

    19.6. Partial-Boundary-Scan Testing

    19.6.1. Conventional Shorts Test

    19.6.2. Boundary-Scan Integrity Test

    19.6.3. Interactions Tests

    19.6.4. Interconnect Test

    19.7. Other Alternatives

    19.8. Summary

    Chapter 20. Inspection Test

    20.1. Striking a Balance

    20.2. Postpaste Inspection

    20.3. Postplacement/Postreflow

    20.3.1. Manual Inspection

    20.3.2. Automated Optical Inspection

    20.3.3. Design for Inspection

    20.3.4. Infrared Inspection—A New Look at an Old Alternative

    20.3.5. The New Jerusalem? X-Ray Inspection

    20.4. Summary

    V. EMC and RF Emissions Testing and Measurement

    Chapter 21. EMC Fundamentals

    21.1. What Is EMC?

    21.1.1. Portable Electronic Devices in Aircraft

    21.1.2. Interference to Medical Devices

    21.1.3. Thermostats

    21.1.4. The Quacking Duck

    21.2. Compatibility between and within Systems

    21.2.1. Intrasystem EMC

    21.2.2. Intersystem EMC

    21.2.3. When Intrasystem Meets Intersystem

    References

    Chapter 22. Measuring RF Emissions

    22.1. Emissions Measuring Instruments

    22.1.1. Measuring Receiver

    22.1.2. Spectrum Analyzer

    22.1.3. Receiver Specifications

    22.2. Transducers

    22.2.1. Antennas for Radiated Field

    22.2.2. LISNs and Probes for Cable Measurements

    22.2.3. Near-Field Probes

    22.2.4. The GTEM for Emissions Tests

    22.3. Sites and Facilities

    22.3.1. Radiated Emissions

    References

    Chapter 23. Test Methods

    23.1. Test Setup

    23.1.1. Layout

    23.1.2. Configuration

    23.2. Test Procedure

    23.2.1. Maximizing Emissions

    23.2.2. Fast Prescan

    23.3. Tests above 1 GHz

    23.3.1. Instrumentation and Antennas

    23.3.2. Methods

    23.4. Military Emissions Tests

    23.4.1. Instrumentation

    23.4.2. Transducers

    23.4.3. Test Site

    23.5. Measurement Uncertainty

    23.5.1. Applying Measurement Uncertainty

    23.5.2. Sources of Uncertainty

    References

    Chapter 24. Test Planning

    24.1. The Need for a Test Plan

    24.1.1. The Requirements of Accreditation

    24.1.2. The Requirements of Standards

    24.1.3. The Requirements of the Customer

    24.2. Contents of the Test Plan

    24.2.1. Description of the Equipment Under Test

    24.2.2. Statement of Test Objectives

    24.2.3. The Tests to Be Performed

    24.2.4. EUT Exercising Software and Ancillary Equipment or Simulators

    24.2.5. Requirements of the Test Facility

    24.2.6. Details of the Test Setup

    24.2.7. How to Evaluate Test Results

    24.3. Immunity Performance Criteria

    24.3.1. The Generic Criteria

    24.3.2. Interpreting the Generic Criteria

    References

    VI. Accelerated Testing

    Chapter 25. Accelerated Testing Fundamentals

    25.1. Scenario 1. A Key Physical Property Is Wrong

    25.2. Scenario 2. A Primary Failure Mode of a Product

    25.3. Scenario 3. The Mean Time to Failure

    Chapter 26. HALT and FMVT

    26.1. A Typical HALT

    26.1.1. Fixturing and Operation

    26.2. Hot Temperature Steps

    26.3. Cold Temperature Steps

    26.4. Ramp Rates

    26.5. Vibration

    26.6. Combined Run

    26.7. Business Structures

    26.8. Failure-Mode Verification Testing

    26.9. Development FMVT

    26.10. More about Stress

    26.11. What Can Break the Product?

    26.12. More about Failures

    26.13. More about Setup and Execution

    26.14. More on Data Analysis

    26.15. Comparison FMVT

    26.16. Method 1. Time to First Failure

    26.17. Method 2. Failure-Mode Progression Comparison

    26.18. FMVT Life Prediction—Equivalent Wear and Cycle Counting

    26.19. FMVT Warranty

    26.20. More on Vibration

    26.20.1. Displacement

    26.20.2. Velocity

    26.20.3. Acceleration

    26.21. Reliability and Design Maturity

    26.22. Business Considerations

    Online Chapter. Hall-Effect Sensors

    1. A Quantitative Examination

    2. Hall Effect in Metals

    3. The Hall Effect in Semiconductors

    4. A Silicon Hall-Effect Transducer

    5. Practical Transducers

    5.1. Key Transducer Characteristics

    5.2. Bulk Transducers

    5.3. Thin-Film Transducers

    5.4. Integrated Hall Transducers

    5.5. Transducer Geometry

    5.6. The Quad Cell

    5.7. Variations on the Basic Hall-Effect Transducer

    5.8. Examples of Hall Effect Transducers

    6. Transducer Interfacing

    6.1. An Electrical Transducer Model

    6.2. A Model for Computer Simulation

    6.3. Voltage-Mode Biasing

    6.4. Current-Mode Biasing

    6.5. Amplifiers

    6.6. Amplifier Circuits

    6.7. Analog Temperature Compensation

    6.8. Offset Adjustment

    6.9. Dynamic Offset Cancellation Technique

    References

    Online Chapter. Sampling

    1. Introduction

    1.1. Signal Types

    1.2. Elements of a DA&C System

    2. Digital I/O

    2.1. TTL-Level Digital Signals

    2.2. Digital Signal Conditioning and Isolation

    3. Sensors for Analog Signals

    3.1. Accuracy

    3.2. Dynamic Range

    3.3. Stability and Repeatability

    3.4. Linearity

    3.5. Response Times

    3.6. Susceptibility to Noise

    3.7. Some Common Sensors

    3.8. Digital Sensors and Encoders

    3.9. Resistive Temperature Sensors (Thermistors and RTDs)

    3.10. Resistance Sensors and Bridges

    4. Handling Analog Signals

    4.1. Signal Conditioning

    4.2. Multiplexers

    5. Digitization and Signal Conversion

    5.1. Binary Coding

    5.2. Encoding Unipolar Signals

    5.3. Analog-to-Digital Converters

    5.4. ADC Characteristics and Errors

    6. Analog Measurements

    6.1. Accuracy

    6.2. Amplification and Extending Dynamic Range

    6.3. Throughput

    7. Timers and Pacing

    7.1. Watchdog Timers

    8. Sampling, Noise, and Filtering

    9. Sampling and Aliasing

    9.1. The Importance of Sampling Rate

    9.2. Nyquist's Sampling Theorem

    9.3. Aliasing

    9.4. Sampling Accuracy

    9.5. Reconstruction of Sampled Signals

    9.6. Selecting the Optimum Sampling Rate

    10. Noise and Filtering

    10.1. Designing Simple Digital Filters

    10.2. Software Considerations

    10.3. Testing Digital Filters

    10.4. Simple Averaging Techniques

    10.5. Low-Pass Filtering Techniques

    10.6. Unweighted Moving Average Filter

    10.7. Exponentially Weighted FIFO Filter

    10.8. Recursive Low-Pass Filter

    References

    Online Chapter. Wireless Systems

    Introduction to Wireless Sensor Networks

    Individual Wireless Sensor Node Architecture

    Wireless Sensor Networks Architecture

    Star Network (Single Point to Multipoint)

    Mesh Network

    Hybrid Star-Mesh Network

    Radio Options for the Physical Layer in Wireless Sensor Networks

    IEEE 802.11x

    Bluetooth (IEEE 802.15.1 and .2)

    IEEE 802.15.4

    ZigBee

    IEEE 1451.5

    Power Consideration in Wireless Sensor Networks

    Applications of Wireless Sensor Networks

    Structural Health Monitoring—Smart Structures

    Industrial Automation

    Application Highlight—Civil Structure Monitoring

    Future Developments

    Conclusion

    Acknowledgment

    Copyright

    Newnes is an imprint of Elsevier

    30 Corporate Drive, Suite 400, Burlington, MA 01803, USA

    Linacre House, Jordan Hill, Oxford OX2 8DP, UK

    Copyright © 2009, Elsevier Inc. All rights reserved.

    No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without the prior written permission of the publisher.

    Permissions may be sought directly from Elsevier's Science & Technology Rights Department in Oxford, UK: phone: (+44) 1865 843830, fax: (+44) 1865 853333, E-mail: permissions@elsevier.com. You may also complete your request online via the Elsevier homepage (http://www.elsevier.com), by selecting Support & Contact then Copyright and Permission and then Obtaining Permissions.

    Recognizing the importance of preserving what has been written, Elsevier prints its books on acid-free paper whenever possible.

    Library of Congress Cataloging-in-Publication Data

    Application submitted.

    British Library Cataloguing-in-Publication Data

    A catalogue record for this book is available from the British Library.

    ISBN: 978-1-85617-530-2

    For information on all Newnes publications,

    visit our web site at: www.books.elsevier.com

    Printed in the United States of America.

    08 09 10 11 12 13 6 5 4 3 2 1

    About the Authors

    Steven Arms (Online Chapter: Wireless Systems) is a contributor to Sensor Technology Handbook. Mr. Arms received his Master's Degree in Mechanical Engineering at the University of Vermont in 1983. He has been awarded 25 US patents and has over 10 pending. He has contributed to 18 journal publications, as well as 44 abstracts/presentations, in areas of advanced instrumentation, wireless sensing, and energy harvesting. Mr. Arms is founder and President of MicroStrain, Inc., a Vermont manufacturer of micro-displacement sensors, inertial sensing systems, and wireless data logging nodes for recording and transmitting strain, vibration, temperature, and orientation data. MicroStrain has been recognized as an innovator in the sensors industry. As of 2008, the firm has received nine Best of Sensors Expo Gold awards for its new products. MicroStrain has received funding from the U.S. Navy and the U.S. Army to develop wireless sensor networks which use strain energy harvesting to eliminate battery maintenance.

    Craig Aszkler (Chapter 5) is a contributor to Sensor Technology Handbook. Craig is a Vibration Products Division Manager at PCB Piezotronics, Inc.

    Stuart Ball, P.E. (Chapter 3 and the Appendix), author of Analog Interfacing to Embedded Microprocessor Systems, is an electrical engineer with over 20 years of experience in electronic and embedded systems. He is currently employed with Seagate Technologies, a manufacturer of computer hard disc drives.

    Dr. G.M.S de Silva (Chapters 1 and 8), author of Basic Metrology for ISO9000 Certification, is a Chartered Engineer (C.Eng) and holds degrees in Physics, B.Sc (Hons), from the University of Ceylon; Instrument Technology, M.Sc, from the University of Technology, Loughborough; and Electrical Materials, Ph.D, from the Imperial College of Science, Technology and Medicine, University of London, where he was also employed as a researcher in surface roughness metrology. Dr. de Silva has held senior positions in the fields of metrology, standardization, and quality management in a number of national and international organizations during the past 35 years. At present, he is the Chief Technical Advisor for a number of United Nations Industrial Development Organization (UNIDO) projects for the development of metrology and standardization infrastructure in the South Asian and South East Asian regions (Bangladesh, Nepal, Thailand, Vietnam, Cambodia, and Lao PDR). His past positions include Associate Professor and Manager, Measurement Standards Laboratory, Research Institute, King Fahd University of Petroleum & Minerals, Dhahran, Saudi Arabia; Director General of the Sri Lanka Standards Institution, the national standards body of Sri Lanka; UNIDO international metrology consultant to the African Regional Organization for Standardization (ARSO); and Commonwealth Fund for Technical Co-operation consultant in metrology to the Grenada Bureau of Standards, Grenada. He is a long-standing member of the Institute of Measurement and Control, United Kingdom, and a member of a number of other technical societies.

    Tony Fischer-Cripps (Chapters 4, 9, 11, and 13) is the author of Newnes Interfacing Companion. Tony is a Project Leader in the Division of Telecommunications and Industrial Physics of the Commonwealth Scientific & Industrial Research Organization (CSIRO), Australia. He was previously a lecturer at the University of Technology, Sydney, Australia, and has also worked for the National Institute of Standards and Technology, USA.

    Timothy Geiger (Chapter 5) is a contributor to Sensor Technology Handbook. Timothy graduated with a BBA from the University of Notre Dame in 1987 and an MBA from the University of Chicago in 1992. He originally worked in various accounting and financial positions prior to joining PCB Piezotronics, Inc. Currently, Tim holds the role of Division Manager for the Industrial Sensors Division.

    Prof. Dr. Dogan Ibrahim (Chapter 7) is the author of Microcontroller-Based Temperature Monitoring and Control. He is currently the Head of the Computer Engineering Department at the Near East University in Cyprus. He is the author of over 50 technical books and 200 technical articles. Prof. Ibrahim's interests are in the fields of microprocessor- and microcontroller-based automatic control, digital signal processing, and distance engineering education.

    Kevin James (Chapters 14 and 15 and Online Chapter: Sampling), author of PC Interfacing and Data Acquisition, has a background in Astrophysics and Applied Nuclear Science. As a research physicist he designed numeric models for applications as diverse as radiometric thermometry and high-energy neutron dosimetry. He also spent much of his time developing computer-based laboratory instrumentation for use in geological and archaeological research. Since 1988, Dr. James has specialized in producing data-acquisition software for a broad range of manufacturing and quality control applications. He has designed test and measurement systems for the aircraft, automobile, rail, process-engineering, and civil-engineering sectors and has developed interfacing software for the PC as well as real-time firmware for a variety of embedded systems. He has also designed and coded a number of Internet-based applications. Dr. James is a Fellow of the Institution of Analysts and Programmers and has been a freelance consultant since 1991. He has published numerous academic papers and written extensively on the subjects of data acquisition, control, and interfacing. He is currently engaged in database design.

    Thomas Kenny (Chapter 2) is a contributor to Sensor Technology Handbook. Thomas has always been interested in the properties of small structures. His Ph.D research was carried out in the Physics Department at UC Berkeley where he focused on a measurement of the heat capacity of a monolayer of helium atoms. After graduating, his research at the Jet Propulsion Laboratory focused on the development of a series of microsensors that use tunneling displacement transducers to measure small signals. Currently, at Stanford University, research in Tom's group covers many areas including MEMS devices to detect small forces, studies of gecko adhesion, micromechanical resonators, and heat transfer in microchannels. Tom teaches several courses at Stanford, including Introduction to Sensors. Tom's hobbies include Ultimate Frisbee, hiking, skiing, and an occasional friendly game of poker.

    Walt Kester (Chapters 12 and 17) is a corporate staff applications engineer at Analog Devices. In his more than 35 years at Analog Devices, he has designed, developed, and given applications support for high-speed ADCs, DACs, SHAs, op amps, and analog multiplexers. Besides writing many papers and articles, he prepared and edited eleven major applications books, which form the basis for the Analog Devices worldwide technical seminar series, covering topics including op amps, data conversion, power management, sensor signal conditioning, and mixed-signal and practical analog design techniques. He is also the editor of The Data Conversion Handbook, a 900+ page comprehensive book on data conversion, published in 2005 by Elsevier. Walt has a BSEE from NC State University and an MSEE from Duke University.

    Professor Michael Laughton (Chapters 6 and 10), B.A.Sc (Toronto), Ph.D (London), D.Sc.Eng (London), FREng, FIEE, C.Eng, is the editor of Electrical Engineer's Reference Book, 16th Edition. He is Emeritus Professor of Electrical Engineering and former Dean of Engineering of the University of London and Pro-Principal of Queen Mary and Westfield College. He is currently the UK representative on the Energy Committee of the European National Academies of Engineering and a member of the energy and environment policy advisory groups of the Royal Academy of Engineering, the Royal Society, the Institution of Electrical Engineers, and the Power Industry Division Board of the Institution of Mechanical Engineers. He has acted as Specialist Adviser to UK Parliamentary Committees in both upper and lower Houses on alternative and renewable energy technologies and on energy efficiency. He was awarded The Institution of Electrical Engineers Achievement Medal in 2002 for sustained contributions to electrical power engineering.

    Chris Nadovich (Chapter 16) is the author of Synthetic Instrumentation. Chris is a working engineer with over 20 years of experience in the design and development of advanced instrumentation for RF and microwave test. He owns a private consulting company, Julia Thomas Associates, which is involved in many electronic automated test-related design and development efforts at the forefront of the Synthetic Instrumentation revolution. In addition to his hardware engineering work, Nadovich is an accomplished software engineer. He owns and manages an Internet provider company, JTAN.COM. Nadovich received BSEE and MEEE degrees from Rensselaer Polytechnic Institute in 1981. While working in industry as an engineer, he was also a competitive bicycle racer. In 1994, Nadovich united his skills as an engineer with his love for bicycle racing when he designed the 250-meter velodrome used for the 1996 Olympics in Atlanta. He currently resides in Sellersville, PA, with his wife, Joanne, and their two children.

    E.A. Parr (Chapters 6 and 10), M.Sc, C.Eng, MIEE, MInstMC, of CoSteel Sheerness, is a contributor to Electrical Engineer's Reference Book, 16th Edition.

    Alex Porter (Chapters 25 and 26) is the author of Accelerated Testing and Validation. Alex is the Engineering Development Manager for Entela, Inc. and has been with the company since 1992. Since 1996, he has been developing accelerated testing methods for mechanical components and systems. Alex has three patents related to accelerated testing equipment and has published over thirty articles, technical papers, and presentations on accelerated testing. Alex is chairing an SAE committee that is writing an Accelerated Testing Supply Chain Implementation Guide. His past work has included implementation of FEA in a laboratory setting and development of a thermal management system for an advanced data acquisition package developed by NASA's Dryden Flight Research Facility. Alex is a member of SAE and IEEE. He holds a B.S. in Aircraft Engineering and an M.S. in Mechanical Engineering, both from Western Michigan University.

    Edward Ramsden, BSEE (Online Chapter: Hall-Effect Sensors), is the author of Hall-Effect Sensors. Ed has worked with Hall-effect sensors since 1988. His experiences in this area include designing sensor integrated circuits and assembly-level products as well as developing novel magnetic processing techniques. He has authored or co-authored more than 30 technical papers and articles and holds ten U.S. patents in the areas of electronics and sensor technology.

    Steve Scheiber (Chapters 18, 19, and 20), author of Building a Successful Board-Test Strategy and Principal of ConsuLogic Consulting Services, has spent more than 30 years exploring electronics manufacturing and test issues at all levels. A noted author and lecturer, Steve has served as Contributing Technical Editor and Senior Technical Editor for Test & Measurement World for more than 25 years and as Editor of Test & Measurement Europe. His textbook, Building a Successful Board-Test Strategy, published by Butterworth-Heinemann, is now in its second edition. Steve wrote companion books, published by Quale Press, including A Six-Step Economic-Justification Process for Tester Selection, Economically Justifying Functional Test, and Building an Intelligent Manufacturing Line (all of which are available directly from ConsuLogic). He has also written hundreds of technical articles for a variety of trade publications. Steve's areas of expertise include manufacturing and test strategy development, test and general-purpose software, economics, and test-program management. Steve has spent much time in the past 15 years teaching seminars on economics and cost-justification of capital expenditures to engineers and managers. His other seminar and technical-article subjects have included automatic-program generation, concurrent engineering, design-for-testability, simulation, device and board verification, inspection, environmental stress screening, and VXI. Steve holds Bachelor's and Master-of-Engineering degrees from Rensselaer Polytechnic Institute.

    Chris Townsend (Online Chapter: Wireless Systems) is a contributor to Sensor Technology Handbook. He is Vice President of Engineering for MicroStrain, Inc., a manufacturer of precision sensors and wireless sensing instrumentation. Chris's current main focus is on research and development of a new class of ultra low power wireless sensors for industry. Chris has been involved in the design of a number of products, including the world's smallest LVDT, inertial orientation sensors, and wireless sensors. He holds over 25 patents in the area of advanced sensing. Chris has a degree in Electrical Engineering from the University of Vermont.

    Douglas Warne (Chapters 6 and 10) is the editor of Electrical Engineer's Reference Book, 16th Edition. Warne graduated from Imperial College London in 1967 with a 1st-class honors degree in electrical engineering. During 1963–1968 he had a student apprenticeship with AEI Heavy Plant Division, Rugby. He is currently self-employed and has taken on such projects as the Coordinated LINK PEDDS program for the DTI and the electrical engineering, electrical machines and drives, and ERCOS programs for EPSRC. Warne initiated and managed the NETCORDE university-industry network for identifying and launching new R&D projects. He also acted as coordinator for the industry-academic funded ESR Network, held the part-time position of Research Contract Coordinator for the High Voltage and Energy Systems group at the University of Cardiff, and monitored several projects funded through the DTI Technology Program.

    Tim Williams (Chapters 21, 22, 23, and 24) is the author of The Circuit Designer's Companion, 2nd Edition. He works at Elmac Services, which provides consultancy and training on all aspects of EMC, including design, testing, and the application of standards, to companies manufacturing electronic products and concerned about the implications of the EMC Directive. Tim Williams gained a B.Sc in Electronic Engineering from Southampton University in 1976. He has worked in electronic product design in various industry sectors, including process instrumentation and audio-visual control. He was design group leader at Rosemount Ltd. before leaving in 1990 to start Elmac Services. He is also the author of EMC for Product Designers (now in its fourth edition, Elsevier 2006) and, with Keith Armstrong, of EMC for Systems & Installations, and has presented numerous conference papers and seminars. He is an EMC technical assessor for UKAS and SWEDAC.

    Jon Wilson (Chapters 2, 5, and 12 and Online Chapter: Wireless Systems) is the editor of Sensor Technology Handbook. He has served as Test Engineer at Chrysler Corporation and at ITT Cannon Electric Co.; Environmental Lab Manager at Motorola Semiconductor Products; Applications Engineering Manager and Marketing Manager at Endevco Corporation; and Principal Consultant at Jon S. Wilson Consulting, LLC. He is a Fellow Member of the Institute of Environmental Sciences and Technology and a Senior Member of ISA and SAE. He has authored several textbooks and short course handbooks on testing and instrumentation and many magazine articles and technical papers. He is a regular presenter of measurement and testing short courses for Endevco, Technology Training, Inc., the International Telemetry Conference, and commercial and government organizations.

    Part I. Measurement Technology and Techniques

    Chapter 1. Fundamentals of Measurement

    1.1. Introduction

    Metrology, or the science of measurement, is a discipline that plays an important role in sustaining modern societies. It deals not only with the measurements that we make in day-to-day living, such as at the shop or the petrol station, but also in industry, science, and technology. The technological advancement of the present-day world would not have been possible if not for the contribution made by metrologists all over the world to maintain accurate measurement systems.

    The earliest metrological activity has been traced back to prehistoric times. For example, a beam balance dated to 5000 BC has been found in a tomb in Nagada in Egypt. It is well known that Sumerians and Babylonians had well-developed systems of numbers. The very high level of astronomy and advanced status of time measurement in these early Mesopotamian cultures contributed much to the development of science in later periods in the rest of the world. The colossal stupas (large hemispherical domes) of Anuradhapura and Polonnaruwa and the great tanks and canals of the hydraulic civilization bear ample testimony to the advanced system of linear and volume measurement that existed in ancient Sri Lanka.

    There is evidence that well-established measurement systems existed in the Indus Valley and Mohenjo-Daro civilizations. In fact, the number system we use today, known as the Indo-Arabic numbers, with positional notation for the symbols 1–9 and the concept of zero, was introduced into western societies by an English monk who translated the books of the Arab writer Al-Khwarizmi into Latin in the 12th century.

    In the modern world metrology plays a vital role in protecting the consumer and in ensuring that manufactured products conform to prescribed dimensional and quality standards. In many countries the implementation of a metrological system is carried out under three distinct headings or services, namely, scientific, industrial, and legal metrology.

    Industrial metrology is mainly concerned with the measurement of length, mass, volume, temperature, pressure, voltage, current, and a host of other physical and chemical parameters needed for industrial production and process control. The maintenance of accurate dimensional and other physical parameters of manufactured products to ensure that they conform to prescribed quality standards is another important function carried out by industrial metrology services.

    Industrial metrology thus plays a vital role in the economic and industrial development of a country. It is often said that the level of industrial development of a country can be judged by the status of its metrology.

    1.2. Fundamental Concepts

    The most important fundamental concepts of measurement, with the exception of uncertainty of measurement, are explained in this section.

    1.2.1. Measurand and Influence Quantity

    The specific quantity determined in a measurement process is known as the measurand. A complete statement of the measurand also requires specification of other quantities, for example, temperature, pressure, humidity, and the like, that may affect the value of the measurand. These quantities are known as influence quantities.

    For example, in an experiment performed to determine the density of a sample of water at a specific temperature (say 20°C), the measurand is the density of water at 20°C. In this instance the only influence quantity specified is the temperature, namely, 20°C.

    1.2.2. True Value (of a Quantity)

    The true value of a quantity is defined as the value consistent with its definition. This implies that there are no measurement errors in the realization of the definition. For example, the density of a substance is defined as mass per unit volume. If the mass and volume of the substance could be determined without making measurement errors, then the true value of the density could be obtained. Unfortunately, in practice, neither of these quantities can be determined without experimental error. Therefore the true value of a quantity cannot be determined experimentally.

    1.2.3. Nominal Value and Conventional True Value

    The nominal value is the approximate or rounded-off value of a material measure or characteristic of a measuring instrument. For example, when we refer to a resistor as 100Ω or to a weight as 1 kg, we are using their nominal values. Their exact values, known as conventional true values, may be 99.98Ω and 1.0001 kg, respectively. The conventional true value is obtained by comparing the test item with a higher-level measurement standard under defined conditions. If we take the example of the 1-kg weight, the conventional true value is the mass value of the weight as defined in the OIML (International Organization for Legal Metrology) International Recommendation RI 33, that is, the apparent mass value of the weight, determined using weights of density 8000 kg/m³ in air of density 1.2 kg/m³ at 20°C with a specified uncertainty figure. The conventional value of a weight is usually expressed in the form 1.001 g ± 0.001 g.

    1.2.4. Error and Relative Error of Measurement

    The difference between the result of a measurement and its true value is known as the error of the measurement. Since a true value cannot be determined, the error, as defined, cannot be determined either. A conventional true value is therefore used in practice to determine an error.

    The relative error is obtained by dividing the error by the mean of the measured values. When it is necessary to distinguish an error from a relative error, the former is sometimes called the absolute error of measurement. Since the error could be positive or negative, the term absolute value of the error is used to express its magnitude (or modulus).

    As an example, suppose we want to determine the error of a digital multimeter at a nominal voltage level of 10V DC. The multimeter is connected to a DC voltage standard supplying a voltage of 10V DC and the reading is noted down. The procedure is repeated several times, say five times. The mean of the five readings is calculated and found to be 10.2V.

    The error is then calculated as 10.2 – 10.0 = +0.2V. The relative error is obtained by dividing 0.2V by 10.2V, giving approximately 0.02. The relative error as a percentage is obtained by multiplying the relative error (0.02) by 100; that is, the relative error is about 2% of the reading.

    In this example a conventional true value is used, namely, the voltage of 10V DC supplied by the voltage standard, to determine the error of the instrument.
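
    The arithmetic of this example is easy to reproduce. The following minimal Python sketch (not part of the original text; the five readings are invented so that they average 10.2V) computes the mean reading, the error against the conventional true value, and the relative error:

        readings = [10.1, 10.3, 10.2, 10.2, 10.2]   # five repeated readings (V), hypothetical
        conventional_true_value = 10.0               # V, supplied by the DC voltage standard

        mean_reading = sum(readings) / len(readings)        # 10.2 V
        error = mean_reading - conventional_true_value      # +0.2 V
        relative_error = error / mean_reading               # about 0.02, i.e., about 2%

        print(f"error = {error:+.2f} V, relative error = {relative_error:.1%}")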

    1.2.5. Random Error

    The random error of measurement arises from unpredictable variations of one or more influence quantities. The effects of such variations are known as random effects. For example, in determining the length of a bar or gauge block, the variation in temperature of the environment gives rise to an error in the measured value. This error is due to a random effect, namely, the unpredictable variation in the environmental temperature. It is not possible to compensate for random errors. However, the uncertainties arising from random effects can be quantified by repeating the experiment a number of times.
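
    As a minimal sketch of this quantification (the readings are invented for illustration), the sample standard deviation of repeated readings estimates the scatter caused by random effects, and the standard error of the mean falls as more readings are taken:

        import statistics

        # repeated length readings of a gauge block (mm); values are hypothetical
        readings = [24.998, 25.003, 25.001, 24.999, 25.002, 25.000]

        mean = statistics.mean(readings)
        s = statistics.stdev(readings)          # sample standard deviation (scatter)
        sem = s / len(readings) ** 0.5          # standard error of the mean

        print(f"mean = {mean:.4f} mm, s = {s:.4f} mm, standard error = {sem:.4f} mm")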

    1.2.6. Systematic Error

    An error that occurs due to a more or less constant effect is a systematic error. If the zero of a measuring instrument has been shifted by a constant amount this would give rise to a systematic error. In measuring the voltage across a resistance using a voltmeter, the finite impedance of the voltmeter often causes a systematic error. A correction can be computed if the impedance of the voltmeter and the value of the resistance are known.
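
    A minimal sketch of such a correction, assuming the simplest case in which the resistance R acts as a source resistance in series with a voltmeter of input resistance Rv (all component values are hypothetical):

        R = 100e3          # circuit resistance (ohms), hypothetical
        Rv = 10e6          # voltmeter input resistance (ohms), hypothetical
        v_measured = 9.90  # voltage indicated by the meter (V), hypothetical

        # R and the meter form a divider: v_measured = v_true * Rv / (R + Rv),
        # so the systematic loading error is removed by inverting that factor.
        v_true = v_measured * (R + Rv) / Rv

        print(f"corrected voltage = {v_true:.3f} V")   # about 9.999 V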

    Often, measuring instruments and systems are adjusted or calibrated using measurement standards and reference materials to eliminate systematic effects. However, the uncertainties associated with the standard or the reference material are incorporated in the uncertainty of the calibration.

    1.2.7. Accuracy and Precision

    The terms accuracy and precision are often misunderstood or confused. The accuracy of a measurement is the degree of its closeness to the true value. The precision of a measurement is the degree of scatter of the measurement result, when the measurement is repeated a number of times under specified conditions.

    In Figure 1.1 the results obtained from a measurement experiment using a measuring instrument are plotted as a frequency distribution. The vertical axis represents the frequency of the measurement result and the horizontal axis represents the values of the results (X). The central vertical line represents the mean value of all the measurement results. The vertical line marked T represents the true value of the measurand. The difference between the mean value and the T line is the accuracy of the measurement. The standard deviation (marked σx) of all the measurement results about the mean value is a quantitative measure of the precision of the measurement.

    Figure 1.1. Accuracy, precision, and true value.

    Unfortunately the accuracy defined in this manner cannot be determined, as the true value (T) of a measurement cannot be obtained due to errors prevalent in the measurement process. The only way to obtain an estimate of accuracy is to use a higher-level measurement standard in place of the measuring instrument to perform the measurement and use the resulting mean value as the true value. This is what is usually done in practice. The line (S) represents the mean value obtained using a higher-level measurement standard.

    Thus the accuracy figures quoted by instrument manufacturers in their technical literature are the difference between the measurement result displayed by the instrument and the value obtained when a higher-level measurement standard is used to perform the measurement. In the case of simple instruments, the accuracy indicated is usually the calibration accuracy; for example, in the calibration of a micrometer, a series of gauge blocks is used. If the values displayed by the micrometer over its usable range fall within ±0.01 mm of the values assigned to the gauge blocks, then the accuracy of the micrometer is reported as ±0.01 mm.

    It can be seen that the definition of error given previously (Section 1.2.4) is very similar to the definition of accuracy. In fact, error and accuracy are interchangeable terms. Some prefer to use the term error and others prefer accuracy. Generally, instrument manufacturers prefer the term accuracy, as they do not wish to highlight the fact that their instruments have errors.

    Relative accuracy and percent of relative accuracy are also concepts in use. The definitions of these are similar to those of relative error and percent of relative error; that is, relative accuracy is obtained by dividing accuracy by the average measured result, and percent of relative accuracy is computed by multiplying relative accuracy by 100.
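
    These definitions are easy to apply numerically. In the hypothetical Python sketch below (the readings and the reference value are invented), one instrument is precise but biased and the other accurate but scattered; the offset of the mean from the reference reflects accuracy, while the standard deviation reflects precision:

        import statistics

        reference = 10.00   # mean value obtained with the higher-level standard, hypothetical

        precise_but_biased = [10.21, 10.20, 10.22, 10.21, 10.20]
        accurate_but_scattered = [9.80, 10.30, 9.95, 10.20, 9.75]

        for name, data in [("precise but biased", precise_but_biased),
                           ("accurate but scattered", accurate_but_scattered)]:
            offset = statistics.mean(data) - reference   # closeness to the reference (accuracy)
            spread = statistics.stdev(data)              # scatter of the readings (precision)
            print(f"{name}: offset = {offset:+.3f}, std dev = {spread:.3f}")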

    1.2.8. Calibration

    Calibration is the process of comparing the indication of an instrument or the value of a material measure (e.g., value of a weight or graduations of a length-measuring ruler) against values indicated by a measurement standard under specified conditions. In the process of calibration of an instrument or material measure, the test item is either adjusted or correction factors are determined.

    Not all instruments or material measures are adjustable. If the instrument cannot be adjusted, it is possible to determine correction factors instead, although this method is not always satisfactory for a number of reasons, the primary one being the nonlinearity in the response of most instruments.

    As an example, take the calibration of a mercury-in-glass thermometer between 0°C and 100°C; say the calibration was carried out at six test temperatures: 0°C, 20°C, 40°C, 60°C, 80°C, and 100°C. Corrections are determined for each test temperature by taking the difference of readings between the test thermometer and the reference thermometer used for the calibration. These corrections are valid only at the temperatures of calibration. The corrections at intermediate temperatures cannot be determined by interpolation; for example, the correction for 30°C cannot be determined by interpolating the corrections corresponding to 20°C and 40°C.
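
    Because the corrections are valid only at the calibration temperatures, they are naturally held as an exact lookup rather than an interpolating function. A minimal Python sketch with invented correction values:

        # corrections (°C) determined at the six calibration points; values are hypothetical
        corrections = {0: +0.1, 20: +0.05, 40: 0.0, 60: -0.05, 80: -0.1, 100: -0.15}

        def corrected_reading(t_reading):
            """Apply the correction, but only at a temperature that was calibrated."""
            if t_reading not in corrections:
                raise ValueError("no correction determined at this temperature; "
                                 "interpolating between calibration points is not valid")
            return t_reading + corrections[t_reading]

        print(corrected_reading(40))    # 40.0
        # corrected_reading(30) raises: corrections cannot be interpolated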

    In the case of material measures, for example, a test weight, either the conventional mass value is determined or the mass value is adjusted (in adjustable masses only) by addition or removal of material. However, in the case of many other material measures, such as meter rulers, gauge blocks, or standard resistors, adjustment is not possible. In such cases the conventional value of the item is determined.

    Some instruments used for measurement of electrical parameters are adjustable, for example, multimeters, oscilloscopes, and function generators.

    1.2.9. Hierarchy of Measurement Standards

    Measurement standards are categorized into different levels, namely, primary, secondary, and working standards, forming a hierarchy. Primary standards have the highest metrological quality and their values are not referenced to other standards of the same quantity. For example, the International Prototype kilogram maintained at the International Bureau of Weights and Measures (BIPM) is the primary standard for mass measurement. This is the highest level standard for mass measurement and is not referenced to any further standard.

    A secondary standard is a standard whose value is assigned by comparison with a primary standard of the same quantity. The national standard kilograms maintained by many countries are secondary standards, as the value of these kilograms is determined by comparison to the primary standard kilogram maintained at the BIPM.

    A standard that is used routinely to calibrate or check measuring instruments or material measures is known as a working standard. A working standard is periodically compared against a secondary standard of the same quantity. For example, the weights used for calibration of balances and other weights are working standards.

    The terms national primary standard, secondary standard, and tertiary standard are used to describe the hierarchy of national measurement standards maintained in a given country. Here the term primary standard is used in the sense that it is the highest level standard maintained in a given country for a particular quantity. This standard may or may not be a primary standard in terms of the metrological hierarchy described in the previous paragraph. For example, many countries maintain an iodine-stabilized helium-neon laser system for the national realization of the meter; this is a case of a metrological primary standard being used as a national primary standard. On the other hand, as pointed out earlier, the kilogram maintained by most countries as a national primary standard of mass is only a secondary standard in the metrological hierarchy of standards.

    Usually the national hierarchy scheme is incorporated in the metrology law of the country.

    A measurement standard recognized by international agreement to serve internationally as the basis for assigning values to other standards of the quantity concerned is known as an international standard.

    The primary reason for establishing a hierarchy scheme is to minimize the use and handling of the higher-level standards and thus to preserve their values. The primary, secondary, and working standards are therefore graded in uncertainty, with primary standards having the smallest uncertainty and working standards the largest. Figure 1.2 depicts the two hierarchies of measurement standards.

    Figure 1.2. Hierarchies of measurement standards.

    1.2.10. Traceability

    The concept of traceability is closely related to the hierarchy of standards. For a particular measurement standard or measuring instrument, traceability means that its value has been determined by an unbroken chain of comparisons with a series of higher-level standards with stated uncertainties. The higher-level standards may be national standards maintained in a given country or international standards maintained by the International Bureau of Weights and Measures or any other laboratory.

    Recently this fundamental definition has been modified by the addition of a time requirement for the comparisons, since traceability may be lost if the comparisons are widely separated in time. For example, a load cell fitted in a tensile testing machine may lose traceability about one year after its last comparison. The traceability of a test or measuring instrument therefore depends largely on the type of instrument, the time interval since the last comparison, and to some extent the uncertainty of the instrument. For these reasons, laboratory accreditation bodies such as the United Kingdom Accreditation Service (UKAS) and the National Association of Testing Authorities (NATA) of Australia have formulated specific guidelines for the traceability of measurement standards and of test and measuring equipment used in laboratories seeking accreditation.

    1.2.11. Test Uncertainty Ratio

    Calibration of test and measurement equipment is always done against a higher-level measurement standard, usually a working standard. The ratio of the uncertainty of the test item to that of the measurement standard used in the calibration is known as the test uncertainty ratio (TUR). In most calibrations a TUR of at least 1:5 is used, though in some circumstances, especially when the test item has a relatively small uncertainty, a lesser TUR (1:2 or sometimes 1:1) has to be used. Nowadays it is more usual to determine the TUR as the ratio of the combined uncertainty (uncertainty budget) of the result obtained by the measurement standard to that obtained by the test item.

    Let us look at an example. Say a pressure gauge of 0 to 1500 kPa (absolute) range is to be calibrated to an uncertainty of ±100 Pa. With a TUR of 1:5, the measurement standard to be used for this calibration should have an uncertainty of not more than 100/5 Pa, that is, ±20 Pa. A working standard deadweight pressure tester having an uncertainty of less than ±20 Pa would meet the criterion.
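
    The arithmetic of this example is trivial but worth making explicit; the candidate uncertainty below is invented:

```python
# Test uncertainty ratio (TUR) check for the pressure-gauge example.
u_test_item = 100.0    # required uncertainty of the gauge, in Pa
required_tur = 5.0     # a 1:5 TUR, as in the text

max_standard_u = u_test_item / required_tur   # allowable standard uncertainty: 20 Pa
u_standard = 15.0                             # candidate deadweight tester, in Pa

print(f"standard must contribute no more than {max_standard_u:.0f} Pa")
print("candidate acceptable" if u_standard <= max_standard_u
      else "candidate not acceptable")
```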

    1.2.12. Resolution, Discrimination, and Sensitivity

    The resolution, discrimination, and sensitivity of an instrument are closely related concepts. The resolution of a measuring instrument is the smallest difference between two indications of its display. For analog instruments this is the smallest recognizable division on the display. For example, if the smallest graduation on a thermometer corresponds to 0.1° C, the resolution of the thermometer is 0.1° C. For a digital displaying device, this is the change in the indication when the least significant digit changes by one step; for example, the resolution of a weighing balance indicating to two decimal places in grams is 0.01 g.

    Discrimination, on the other hand, is the ability of an instrument to respond to small changes of the stimulus. It is defined as the largest change in a stimulus that produces no detectable change in the response of the measuring instrument. For example, if a mercury-in-glass thermometer is used to measure the temperature of an oil bath whose temperature is rising gradually, the smallest temperature change the thermometer is able to register is its discrimination. This will not necessarily equal the resolution of the instrument; generally, in a good-quality instrument, the discrimination should be smaller than the resolution.

    The sensitivity of an instrument is the ratio of the change in response to the change in stimulus. The concept usually applies to instruments whose output or response is not in the same units as the input or stimulus. A common example in a metrology laboratory is the equal-arm balance. The input (stimulus) to the balance is the difference of mass between the two pans; the output is the angle of inclination of the balance beam at rest. Thus, to relate a change in the angle of inclination of the balance beam to the corresponding mass difference, we need to determine the sensitivity of the balance. In the case of beam balances, the mass difference per unit change of indication is known as the sensitivity reciprocal.
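
    A minimal sketch of how such a conversion is used in practice (the sensitivity value and the observed deflection are invented):

```python
# Converting a change in beam inclination into a mass difference
# using the balance's sensitivity.
sensitivity = 2.5                           # divisions per milligram (invented)
sensitivity_reciprocal = 1.0 / sensitivity  # milligrams per division

delta_divisions = 3.0                       # observed shift of the rest point
mass_difference_mg = delta_divisions * sensitivity_reciprocal
print(f"mass difference = {mass_difference_mg:.2f} mg")
```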

    1.2.13. Tolerance

    Tolerance is the maximum allowable deviation of the value of a material measure or the indication of a measuring instrument. In most cases tolerances are specified by national regulations or standard specifications. For example, the OIML International Recommendation RI 111 gives tolerances for weights of different classes used for metrological purposes.

    1.2.14. Repeatability of Measurements

    The repeatability of a measuring instrument or measurement operation is defined as the closeness of the agreement between the results of successive measurements carried out under the same conditions of measurement within a relatively short interval of time. The repeatability conditions include the measurement procedure, the observer, the environmental conditions, and location. Repeatability is usually expressed quantitatively as a standard deviation of the measurement result.

    A familiar example is the repeatability of a weighing balance, which is determined by weighing a mass a number of times under similar conditions within a short interval of time. The standard deviation of the balance indications is expressed as the repeatability of the balance.
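
    Computationally, this is nothing more than a standard deviation over a short series of readings; the values below are invented:

```python
# Repeatability of a balance: sample standard deviation of repeated
# weighings of the same mass under the same conditions.
import statistics

readings_g = [100.012, 100.010, 100.013, 100.011, 100.012, 100.010]
repeatability_g = statistics.stdev(readings_g)
print(f"repeatability = {repeatability_g * 1000:.2f} mg")
```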

    1.2.15. Reproducibility of Measurements

    The reproducibility of a measurement process is the closeness of the agreement between the results of measurements carried out under changed conditions of measurement. The changed conditions may include the principle of measurement, the method of measurement, the observer, the measuring instrument, the reference standards used, the location where the measurement is performed, and so forth.

    Reproducibility is rarely computed in metrology, though the concept is widely used and very useful in chemical and physical testing. Usually the repeatability and reproducibility of a test procedure are determined by conducting a statistically designed experiment between two laboratories (or two sets of conditions) and performing an analysis of variance on the test results. The variance (the square of the standard deviation) attributable to variation within a laboratory (or a set of conditions) is expressed as repeatability, and that between laboratories is expressed as reproducibility. These experiments are usually known as R&R (repeatability and reproducibility) studies.
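
    The variance split can be sketched with a one-way analysis of variance; the data below are invented, and a real R&R study would follow a designed protocol such as ISO 5725 rather than this bare-bones calculation:

```python
# Splitting within-laboratory (repeatability) and between-laboratory
# (reproducibility) variance components with a one-way ANOVA.
import statistics

lab_a = [10.1, 10.2, 10.0, 10.1, 10.2]
lab_b = [10.4, 10.3, 10.5, 10.4, 10.3]
labs = [lab_a, lab_b]
n = len(lab_a)                 # replicates per laboratory

# Pooled within-laboratory variance: the repeatability component.
var_within = statistics.mean(statistics.variance(lab) for lab in labs)

# Between-laboratory component from the variance of the lab means.
var_of_means = statistics.variance([statistics.mean(lab) for lab in labs])
var_between = max(var_of_means - var_within / n, 0.0)

print(f"repeatability variance: {var_within:.5f}")
print(f"between-lab variance:   {var_between:.5f}")
```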

    Bibliography

    International Organization for Standardization. International Vocabulary of Basic and General Terms in Metrology. Geneva: ISO, 1993.

    International Organization of Legal Metrology. International Recommendation RI 33-1979: Conventional Value of the Result of Weighing in Air.

    International Organization of Legal Metrology. International Recommendation RI 111-1994: Weights of Classes E1, E2, F1, F2, M1, M2, M3.

    De Vries, S. Make Traceable Calibration Understandable in the Industrial World. In Proceedings of the Workshop on the Impact of Metrology on Global Trade, National Conference of Standards Laboratories, 1995.

    Sommer, K., S. E. Chappell, and M. Kochsiek. Calibration and Verification: Two Procedures Having Comparable Objectives. Bulletin of the International Organization of Legal Metrology 17 (2001): 1.

    Ehrlich, C. D., and S. D. Rasberry. Metrological Timelines in Traceability. Journal of Research of the National Institute of Standards and Technology 103 (1998): 93.

    Chapter 2. Sensors and Transducers

    2.1. Basic Sensor Technology

    A sensor is a device that converts a physical phenomenon into an electrical signal. As such, sensors represent part of the interface between the physical world and the world of electrical devices, such as computers. The other part of this interface is represented by actuators, which convert electrical signals into physical phenomena.

    Why do we care so much about this interface? In recent years, enormous capability for information processing has been developed within the electronics industry. The most significant example of this capability is the personal computer. In addition, the availability of inexpensive microprocessors is having a tremendous impact on the design of embedded computing products ranging from automobiles to microwave ovens to toys, and versions of these products that use microprocessors to control their functionality are now widely available. In automobiles, such capability is necessary to achieve compliance with pollution restrictions; in other cases, it simply offers an inexpensive performance advantage.

    All of these microprocessors need electrical input voltages in order to receive instructions and information. So, along with the availability of inexpensive microprocessors has grown an opportunity for the use of sensors in a wide variety of products. In addition, since the output of the sensor is an electrical signal, sensors tend to be characterized in the same way as electronic devices. The data sheets for many sensors are formatted just like electronic product data sheets.

    However, there are many formats in existence, and there is nothing close to an international standard for sensor specifications. The system designer will encounter a variety of interpretations of sensor performance parameters, and it can be confusing. It is important to realize that this confusion is not due to an inability to explain the meaning of the terms—rather it is a result of the fact that different parts of the sensor community have grown comfortable using these terms differently.

    2.1.1. Sensor Data Sheets

    It is important to understand the function of the data sheet in order to deal with this variability. The data sheet is primarily a marketing document. It is typically designed to highlight the positive attributes of a particular sensor and emphasize some of the potential uses of the sensor, and it might neglect to comment on some of the negative characteristics of the sensor. In many cases, the sensor has been designed to meet a particular performance specification for a specific customer, and the data sheet will concentrate on the performance parameters of greatest interest to this customer. In this case, the vendor and customer might have grown accustomed to unusual definitions for certain sensor performance parameters. Potential new users of such a sensor must recognize this situation and interpret things reasonably. Odd definitions may be encountered here and there, and most sensor data sheets are missing some pieces of information that are of interest to particular applications.

    2.1.2. Sensor Performance Characteristics Definitions

    The following are some of the more important sensor characteristics.

    2.1.2.1. Transfer Function

    The transfer function shows the functional relationship between physical input signal and electrical output signal. Usually, this relationship is represented as a graph showing the relationship between the input and output signal, and the details of this relationship may constitute a complete description of the sensor characteristics. For expensive sensors that are individually calibrated, this might take the form of the certified calibration curve.

    2.1.2.2. Sensitivity

    The sensitivity is defined in terms of the relationship between the input physical signal and the output electrical signal. It is generally the ratio of a small change in the electrical signal to a small change in the physical signal, and as such it may be expressed as the derivative of the transfer function with respect to the physical signal. Typical units are volts/kelvin or millivolts/kilopascal. A thermometer has high sensitivity if a small temperature change results in a large voltage change.
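
    Numerically, the sensitivity at an operating point can be estimated as the slope of the transfer function there; the exponential transfer function below is purely hypothetical:

```python
# Sensitivity as the numerical derivative of a transfer function.
import math

def transfer_function(temperature_k):
    """Hypothetical thermometer output (volts) versus input (kelvins)."""
    return 0.5 * math.exp(temperature_k / 300.0)

def sensitivity(t_k, dt=0.01):
    # Central-difference estimate of dV/dT, in volts per kelvin.
    return (transfer_function(t_k + dt) - transfer_function(t_k - dt)) / (2 * dt)

print(f"sensitivity at 300 K: {sensitivity(300.0):.4f} V/K")
```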

    2.1.2.3. Span or Dynamic Range

    The range of input physical signals that may be converted to electrical signals by the sensor is the dynamic range or span. Signals outside of this range are expected to cause unacceptably large inaccuracy. This span or dynamic range is usually specified by the sensor supplier as the range over which other performance characteristics described in the data sheets are expected to apply. Typical units are kelvins, pascals, and newtons.

    2.1.2.4. Accuracy or Uncertainty

    Uncertainty is generally defined as the largest expected error between actual and ideal output signals. Typical units are kelvins. Sometimes this is quoted as a fraction of the full-scale output or a fraction of the reading. For example, a thermometer might be guaranteed accurate to within 5% of FSO (full-scale output). Accuracy is generally considered by metrologists to be a qualitative term, while uncertainty is quantitative. For example, one sensor might have better accuracy than another if its uncertainty is 1% compared to the other with an uncertainty of 3%.

    2.1.2.5. Hysteresis

    Some sensors do not return to the same output value when the input stimulus is cycled up or down. The width of the expected error in terms of the measured quantity is defined as the hysteresis. Typical units are kelvins or percent of FSO.

    2.1.2.6. Nonlinearity (Often Called Linearity)

    Nonlinearity is the maximum deviation from a linear transfer function over the specified dynamic range. There are several measures of this error. The most common compares the actual transfer function with the best straight line, which lies midway between the two parallel lines that encompass the entire transfer function over the specified dynamic range of the device. This choice of comparison method is popular because it makes most sensors look the best. Other reference lines may be used, so the user should be careful to compare using the same reference.
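
    The best-straight-line measure described above can be sketched numerically; the data are invented, and the coarse slope search below merely stands in for an exact minimax fit:

```python
# Nonlinearity as half the width of the narrowest pair of parallel lines
# enclosing the transfer function, expressed as a percent of full scale.
x = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]        # physical input (invented)
y = [0.00, 1.02, 2.05, 3.02, 4.01, 4.95]  # electrical output (invented)

def band_width(slope):
    residuals = [yi - slope * xi for xi, yi in zip(x, y)]
    return max(residuals) - min(residuals)

# Coarse search for the slope giving the narrowest enclosing band.
slopes = [0.90 + i * 0.0001 for i in range(2001)]   # 0.90 .. 1.10
best_slope = min(slopes, key=band_width)

full_scale = max(y) - min(y)
nonlinearity_pct = band_width(best_slope) / 2 / full_scale * 100
print(f"nonlinearity = {nonlinearity_pct:.2f} % of full-scale output")
```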

    2.1.2.7. Noise

    All sensors produce some output noise in addition to the output signal. In some cases, the noise of the sensor is less than the noise of the next element in the electronics, or less than the fluctuations in the physical signal, in which case it is not important. Many other cases exist in which the noise of the sensor limits the performance of the system based on the sensor. Noise is generally distributed across the frequency spectrum. Many common noise sources produce a white noise distribution, which is to say that the spectral noise density is the same at all frequencies. Johnson noise in a resistor is a good example of such a noise distribution. For white noise, the spectral noise density is characterized in units of volts/√Hz. A distribution of this nature adds noise to a measurement with amplitude proportional to the square root of the measurement bandwidth. Since there is an inverse relationship between bandwidth and measurement time, the noise decreases with the square root of the measurement time.
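
    The square-root scaling is easy to make concrete; the noise density used here is an arbitrary illustrative figure:

```python
# RMS noise of a white-noise source scales with sqrt(bandwidth).
import math

noise_density_v = 20e-9   # volts per root-hertz (invented figure)
for bandwidth_hz in (1.0, 100.0, 10_000.0):
    rms_v = noise_density_v * math.sqrt(bandwidth_hz)
    print(f"{bandwidth_hz:>8.0f} Hz bandwidth -> {rms_v * 1e6:.3f} uV rms")
```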

    2.1.2.8. Resolution

    The resolution of a sensor is defined as the minimum detectable signal fluctuation. Since fluctuations are temporal phenomena, there is some relationship between the timescale of the fluctuation and the minimum detectable amplitude. Therefore, the definition of resolution must include some information about the nature of the measurement.
