Measurement and Quality

Module 4: Coordinate Measuring Machines and Inspection

4.1 Coordinate Measuring Machines (CMMs)


4.1.1 Exploration
There is no exploration exercise for this module.

4.1.2 Dialog
Overview
Coordinate measuring machines (CMMs) are machines that allow one to locate point coordinates on three-dimensional objects (X, Y, and Z, or length, width, and height) all at the same time. They allow integration of both dimensions and orthogonal relationships. When linked to a computer, as most are, a CMM eliminates difficult and time-consuming measurements with traditional single-axis devices such as micrometers and height gages. Cumbersome mathematics is eliminated, complex objects can be measured quantitatively, and data can be stored for later use. Some advantages over conventional gaging are flexibility, reduced set-up time, improved accuracy, and improved productivity. Generally, no special fixtures or other gages are required. The more complex the workpiece, the more useful the CMM becomes. A typical CMM is seen below in Figure 4.1a.

Figure 4.1a: A typical CMM Linked to a Computer


from http://www.misterinc.com/quality-control-1.shtml

A Brief History
In the 1940s, the Atomic Energy Commission recognized the potential of devices such as the CMM. Development continued along with advances in computer technology. The CMM was first introduced commercially in 1959 by Ferranti in England. During its first fifteen years, the CMM did not enjoy much popularity. Then came an era in which the machine industry - with the automobile sector leading the way - changed its style of production from mass production to smaller runs of a greater variety of models to answer customers' needs. As assemblies became automated, more and more machine parts required screening inspections instead of the conventional sampling inspections for quality assurance. The reduction of labor and the rationalization of inspections, as well as higher accuracy, became necessary. For these reasons, combined with the development of smaller, higher-performance control systems (especially microprocessor-based personal computers), higher-accuracy reference scales, and touch signal probes (developed in England), demand for CMMs dramatically increased.

Roles and Advantages of the CMM


The CMM has taken a lead role in many industries such as aerospace, automotive, electronics, health care,
plastics, and semiconductors. The ability to quickly and accurately obtain and evaluate dimensional data separates the CMM from other devices. The following conditions are very well suited to CMM usage:
1. Short runs (length of runs cannot justify production inspection tooling)
2. Multiple features to inspect (geometrical and dimensional)
3. Flexibility desired (short runs and multiple features to measure)
4. High unit cost (rework or scrap is costly)
5. Production interruption (one complex part must be inspected and passed before producing the next)

Types of CMMs
The basic unit has three perpendicular axes, X, Y, and Z, and each axis is fitted with a precision scale, measuring device, or transducer that continuously records the displacement of that axis relative to a fixed reference. The third-axis carriage has a probe attached to it, and when the probe touches a surface, the machine reads the displacement of all three axes. The space one can work in, or the travel distance in all three axes, is known as the work envelope. If the size of the part exceeds the size of the work envelope, one can only measure features falling within
the work envelope. Although the standard ANSI/ASME B89.1.12M recognizes 10 different configurations of
CMM, there are really five different basic types. These are pictured below in Figure 4.1b.


Figure 4.1b: The 5 Basic Types of CMM


from fig 17-4, page 528 of Dim Metr by Dotson et al


For a cantilever type, a vertical probe moves in the Z-axis, which is attached to a cantilevered arm that moves
in the Y-axis. This arm also moves in the X-axis, or the table can have independent X-axis travel. This design gives easy access to the work area and allows a rather large work envelope without using up too much floor
space. The bridge type CMM is probably the most popular. It is similar to the cantilever type because of the
Y-axis support. The bridge construction adds rigidity to the machine, but both ends of the Y-axis must track at
the same rate. Loading the machine can be difficult at times because there are two legs that touch the base. The
column type machine is like a jig borer, and similar to the moving table cantilever type. It is very rigid and
accurate. The Z-axis is connected to the arm and the table moves independently in the X and Y axes. The
horizontal arm type machine is also called a layout machine. The probe is carried along the Y-axis moving
arm. The Y-axis arm moves up and down on a column for Z-axis travel, and the column can move along the
edge of the work table in the X direction. It has a very large, unobstructed work area, which makes it ideal for
very large parts. For the gantry type machine, support of the workpiece is independent of the X and Y axes,
which are overhead and supported by four vertical columns rising from the floor. One can walk along a
workpiece with the probe, which is helpful for very large workpieces.
Some CMMs are even equipped with rotary tables or rotating probe spindles. These features are mainly for convenience - the same data can be produced with a computer or microprocessor alone - but they can speed up inspection at times.
There are four basic modes of operation:
1. Manual
2. Manual computer-assisted
3. Motorized computer-assisted
4. Direct computer-controlled

The term computer can mean computer, microprocessor, or programmable controller. In manual mode, the CMM operator moves a free-floating probe in all three axes and establishes contact with the part feature being assessed. In manual computer-assisted mode, an electronic digital display makes zero settings, changes signs, converts between inch (decimal inch) and SI (metric) systems, and prints out data. The motorized computer-assisted mode features a joystick to drive the machine axes. The operator manipulates the joystick to bring the probe into contact with the workpiece. The direct computer-controlled (DCC) mode is fully programmable. Computer-aided design (CAD) data teaches the CMM where to find the part and causes the probe sensor to contact it and collect data. An operator loads the workpiece and then lets the machine operate automatically. The methodology is similar to a computer numerically controlled (CNC) machine tool. The three main components are move (directs the probe to data collection points), measure (compares distance traveled with an axis standard), and format (translates data for display or printout) commands.
There are also portable arm CMMs available when the workpiece cannot be moved to a machine. They
consist of an articulated measuring arm with precise bearings and rotary transducers at each joint. The arm's base has a mounting plate for attachment to a fixed surface or special tripod. These are not as accurate as fixed
machines, but can be necessary and advantageous over other means.

Accuracy and Repeatability


Smaller CMMs are generally accurate to within 0.0004" and repeatable to within 0.0001". These figures degrade somewhat as the size of the unit increases. For best results, it is recommended to keep a unit in a temperature- and humidity-controlled environment. This reduces the amount of thermal expansion that can occur in machine parts and workpieces. Granite work tables are generally used because they are poor heat conductors. Many machines incorporate structural materials with low coefficients of expansion, such as Invar, a nickel-iron alloy. Transducers are also incorporated, which continually measure the growth of an axis due to temperature change, and internal mathematical algorithms can compensate for the growth when taking measurements. For best results, vibration damping is often recommended. Periodic measurement
of a known master object is used to detect variances and system flaws.
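To illustrate the idea of thermal compensation, the brief sketch below applies the familiar linear expansion relation (change in length = coefficient x length x temperature change) to normalize a length reading back to the standard 20 °C reference temperature. It is only a minimal illustration of the principle; the function name and coefficient value are assumptions for the example, and actual CMM controllers use more elaborate, axis-by-axis error models.

    # Minimal sketch of linear thermal-expansion compensation (not any vendor's actual model).
    # Assumes the simple relation delta_L = alpha * L * delta_T and a 20 degC reference temperature.

    REFERENCE_TEMP_C = 20.0  # standard metrology reference temperature

    def normalize_to_20c(measured_length, part_temp_c, alpha_per_c):
        """Return the length the part would have at 20 degC.

        measured_length : reading taken at part_temp_c (any length unit)
        part_temp_c     : part temperature at the time of measurement, degC
        alpha_per_c     : coefficient of linear thermal expansion, 1/degC
        """
        delta_t = part_temp_c - REFERENCE_TEMP_C
        return measured_length / (1.0 + alpha_per_c * delta_t)

    # Example: a 10.0000 in steel part (alpha ~ 6.4e-6 per degC) measured at 24 degC
    print(round(normalize_to_20c(10.0000, 24.0, 6.4e-6), 5))  # ~9.99974 in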
Accuracy of some CMMs is enhanced through the use of software geometry error compensation. This package
automatically interpolates the probe position throughout its measurement envelope. Each axis is corrected for
pitch, yaw, scale error, straightness and squareness errors with respect to other axes.

Part Probes
The probe is the sensing element that makes contact and causes readings to be taken. It is the heart of the CMM
unit. There are two general categories of probe: contact and non-contact. Early CMMs had a solid member that made contact much like other measuring instruments such as vernier and micrometer calipers. Today's contact probes can be of the touch-trigger or analog scanning type. An example of a contact probe is shown
below in Figure 4.1c.

Figure 4.1c: A Renishaw Contact Probe


from fig 17-8, page 532 of Dim Metr by
Dotson et al

The touch-trigger contact probe works by using a sensor to detect a change in contact resistance, indicating that the probe has deflected and thus contact has been made. The ball at the tip of the probe is of precise dimension so that the contact point in the X, Y, or Z direction is known. The computer records the contact point in coordinate space, and
contact is generally accompanied by an LED and audible tone for the operator. For delicate, flexible or soft
materials, there are low-trigger-force, high-sensitivity probes available which require less force to deflect the
probe and indicate contact made. The simplest probes are sensitive and trigger in the X and Y axes only, and
are referred to as four-way probes. If there is capability for a one-way trigger response in the Z-axis as well, it
is called a five-way probe, and if in plus or minus direction of the Z-axis, it is a six-way probe.
The probe head is mounted to the end of one of the CMM's moving axes. The actual probe is attached to the probe head, and there are various types of extensions available to reach difficult-to-access areas of a part. The stylus is attached to the probe and is the contact device. These parts are shown in Figure 4.1d below. The most common stylus tip is a ruby ball for long wear life, but other tips and even clusters of multiple tips are available. Some stylus tip styles can be seen in Figure 4.1e below.

Figure 4.1d: Contact Probe Parts, Variations,


Multiple Styli
from fig 17-11, page 533 of Dim Metr by
Dotson et al


Figure 4.1e: Stylus Tips


from fig 17-12, page 533 of Dim Metr by Dotson
et al

Some of the more common types and uses of probe stylus tips are illustrated and explained in this subsection.

Figure 4.1f: Ruby Ball Stylus


from
http://www.renishaw.com/client/product/UKEnglish/PGP147.shtml

The ball stylus type is suitable for the majority of probing applications, and incorporates a highly spherical
industrial ruby ball. Ruby is an extremely hard ceramic material and thus wear of the stylus ball is minimized.
It is also of low density - keeping tip mass to a minimum, which avoids unwanted probe triggers caused by
machine motion or vibration. Ruby balls are available mounted on a variety of materials including nonmagnetic stainless steel, ceramic and carbide to maintain stiffness over the total range of styli.


Figure 4.1g: Star Stylus


from
http://www.renishaw.com/client
/product/UKEnglish/PGP147.shtml

The star stylus can be used to inspect a variety of different features. Using this stylus to inspect the extreme
points of internal features such as the sides or grooves in a bore minimizes the need to move the probe, due to
its multi-tip probing capability. Each tip on a star stylus requires datuming in the same manner as a single ball
stylus.

Figure 4.1h: Pointer Stylus


from
http://www.renishaw.com/client
/product/UKEnglish/PGP147.shtml

The pointer stylus should not be used for conventional X-Y probing. It is designed for the measurement of
thread forms, specific points and scribed lines (to lower accuracy). The use of a radius end pointer stylus allows
more accurate datuming and probing of features, and can also be used to inspect the location of very small
holes.

Figure 4.1i: Ceramic Hollow Ball Stylus


from
http://www.renishaw.com/client/product/U
KEnglish/PGP-147.shtml

The ceramic hollow ball style is ideal for probing deep features and bores in X, Y and Z directions with the
need to datum only one ball. Also, the effects of very rough surfaces can be averaged out by probing with such
a large diameter ball.

Figure 4.1j: Disc Stylus


from
http://www.renishaw.com/client/product/U
KEnglish/PGP-147.shtml


The disc stylus, or 'thin section' of a large sphere, is usually used to probe undercuts and grooves. Although
probing with the "spherical edge" of a simple disc is effectively the same as probing on or about the equator of a
large stylus ball, only a small area of this ball surface is available for contact. Thinner discs require careful
angular alignment to ensure correct contact of the disc surface with the feature being probed. A simple disc
requires datuming on only one diameter (usually in a ring gauge) but limits effective probing to only X and Y
directions. Adding a radius end roller allows one to datum and probe in the Z direction, provided the center of
the 'radius end roller' extends beyond the diameter of the probe. The radius end roller can be datumed on a
sphere or a slip gauge. Rotating and locking the disc about its center axis allows the 'radius end roller' to be
positioned to suit the application. A disc may also have a threaded center to allow the fixing of a center stylus,
giving the additional flexibility of probing the bottom of deep bores (where access for the disc may be limited).

Figure 4.1k: Cylinder Stylus


from
http://www.renishaw.com/client/product/U
KEnglish/PGP-147.shtml

The cylinder stylus is used for probing holes in thin sheet material, probing various threaded features and
locating the centers of tapped holes. A ball-ended cylinder stylus allows full datuming and probing in X, Y and
Z directions, thus allowing surface inspection to be carried out.

Figure 4.1l: Stylus Extensions


from
http://www.renishaw.com/client/product/U
KEnglish/PGP-147.shtml

An extension provides added probing penetration by extending the stylus away from the probe. However, using
a stylus extension can reduce accuracy due to loss of rigidity.
An analog scanning probe is used for measuring contoured surfaces such as a turbine blade. Rather than triggering and lifting off, this probe remains in contact with the part surface as it moves, and produces an analog reading of the path taken. Complex geometries can be mapped out or data gathered for reverse engineering applications.
Probes require occasional calibration routines against a known master object to detect the need for adjustment.
Because the tip of the stylus has a known radius, the software calculates the offsets so it knows the actual point
of contact in space.
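To make the tip-radius offset concrete, the sketch below shifts a recorded ball-center point by the tip radius along an assumed surface normal. This is only an illustration of the principle, not the routine used by any particular CMM software; the direction convention (normal pointing from the surface toward the probe) and the function name are assumptions.

    # Sketch of stylus tip-radius compensation (illustrative only).
    # The CMM records the ball center; the contact point lies one tip radius away
    # along the surface normal. Here the normal is assumed to point from the
    # surface toward the probe, so we step back along it by the tip radius.
    import math

    def tip_compensate(center_xyz, surface_normal_xyz, tip_radius):
        """Return the estimated contact point given the recorded ball-center point."""
        nx, ny, nz = surface_normal_xyz
        length = math.sqrt(nx * nx + ny * ny + nz * nz)
        nx, ny, nz = nx / length, ny / length, nz / length  # unit normal
        cx, cy, cz = center_xyz
        return (cx - tip_radius * nx, cy - tip_radius * ny, cz - tip_radius * nz)

    # Example: probing straight down (-Z) onto a flat top surface with a 1 mm radius tip.
    # The surface normal points up (+Z), so the contact point is 1 mm below the ball center.
    print(tip_compensate((10.0, 5.0, 51.0), (0.0, 0.0, 1.0), 1.0))  # -> (10.0, 5.0, 50.0)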
Although the touch probes are the most common, there are situations that require non-contact probes.
Measuring printed circuit boards or clay/wax models that could deform under contact may require that no contact be made with the part. A laser scanning probe projects a light beam onto the surface of
the part, and location is done via a triangulation lens in the receptor. A large wax model for an automobile may
be a potential candidate for this method, as seen below in Figure 4.1m.


Figure 4.1m: A Laser Scanning Probe


from fig 17-17, page 535 of Dim Metr by Dotson et al
A video probe relies on a high-resolution image, which is magnified and digitized. The pixel count of the electronic image provides the measurement. The eyepiece is often equipped with a cross-hair reticle (see
module 4.3 on Measuring Microscopes for description) to determine exact location. Many measurements can
be generated within a single frame. A printed circuit board may be an ideal candidate for this method, as seen
below in Figure 4.1n.

Figure 4.1n: A Video Probe


from
http://www.cmmtechnology.com/svideo.htm

Software, Programs and Modes of Operation


Most software for CMMs today is menu-driven and very user-friendly. Subroutines, or automatic program steps, for various types of inspection are contained in a library for easy recall. The operator uses the menu to create programs designed to move to a location, measure a feature, store results (for later printout), and move to the next. This serves to reduce operator fatigue for repetitive tasks.
Conversions can be done between Cartesian, polar, and in some cases even spherical coordinate systems.
Systems can be set up to calculate the difference between actual and nominal (ideal) conditions and flag any
out-of-spec conditions.
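As a simple illustration of the coordinate conversions mentioned above, the sketch below converts a point between Cartesian and polar form. It is a generic example, not the routine of any particular CMM software package.

    # Generic Cartesian <-> polar conversion, as a CMM software package might offer.
    import math

    def cartesian_to_polar(x, y):
        """Return (radius, angle_in_degrees) for a point in the X-Y plane."""
        radius = math.hypot(x, y)
        angle = math.degrees(math.atan2(y, x))
        return radius, angle

    def polar_to_cartesian(radius, angle_deg):
        """Return (x, y) for a radius and an angle given in degrees."""
        a = math.radians(angle_deg)
        return radius * math.cos(a), radius * math.sin(a)

    print(cartesian_to_polar(3.0, 4.0))      # (5.0, 53.13...)
    print(polar_to_cartesian(5.0, 53.1301))  # (~3.0, ~4.0)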
Stored geometry and internal calculations allow for minimum probe touches to get a measurement. For
instance, it takes only three touches to define and measure a circle, four for a sphere, and five for a cylinder.
Some of these subroutines for shapes and geometric form/positional relationships are shown below in Figure
4.1o. A reference plane can be established by locating three points. Some other comparison subroutines are
illustrated in Figure 4.1p below.

Figure 4.1o: Subroutine Examples for


Various Shapes and Features
from fig 17-22, page 539 of Dim Metr by
Dotson et al


Figure 4.1p: Subroutine Examples for


Some Feature Comparisons
from fig 17-23, page 540 of Dim Metr by
Dotson et al
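As noted above, three probe touches are enough to define and measure a circle. The sketch below shows one way such a subroutine could compute the circle's center and diameter from three touch points in a plane; it is a generic geometric construction, not the actual code of Geomet or any other CMM package.

    # Sketch: fit a circle through three probed points (generic geometry, not vendor code).
    # Subtracting the circle equation at point 1 from points 2 and 3 gives a 2x2 linear
    # system for the center (cx, cy); the radius follows from the distance to any point.
    import math

    def circle_from_three_points(p1, p2, p3):
        (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
        a11, a12 = 2.0 * (x2 - x1), 2.0 * (y2 - y1)
        a21, a22 = 2.0 * (x3 - x1), 2.0 * (y3 - y1)
        b1 = x2**2 - x1**2 + y2**2 - y1**2
        b2 = x3**2 - x1**2 + y3**2 - y1**2
        det = a11 * a22 - a12 * a21
        if abs(det) < 1e-12:
            raise ValueError("points are collinear; no unique circle")
        cx = (b1 * a22 - b2 * a12) / det
        cy = (a11 * b2 - a21 * b1) / det
        radius = math.hypot(x1 - cx, y1 - cy)
        return (cx, cy), radius

    # Example: three touches on a nominally 1-inch-diameter bore
    center, r = circle_from_three_points((0.5, 0.0), (0.0, 0.5), (-0.5, 0.0))
    print(center, 2.0 * r)  # approximately (0.0, 0.0) and 1.0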
Manual mode is generally used for one-time part inspection. The keyboard or mouse is used to select programs
from a menu. One obtains measurement data with the CMM and the computer provides the measurement in
useful form. In automatic mode, the CMM-computer uses a previously-written program for the measurement
task(s). The computer provides the measurements in proper sequence and the computer determines results. For
a DCC-type machine (mentioned earlier under Types of CMMs), the program also drives the CMM to all
data collection points automatically. In programming mode, one actually teaches the machine how to
perform a task by taking the machine through a sequence of steps. That routine is placed in memory for later
recall to repeat or use in another program. The sequence can be adjusted as necessary for other tasks. There is often a statistical analysis mode available when running a large number of similar parts, allowing one to perform statistical process control (SPC), which can warn of out-of-control trends in the manufacturing process. One can
then make corrections before more defects are produced. For more information on SPC, refer to module 4.4.

Using the CMM


An example of an actual CMM is the Helmel Checkmaster 112-102, which is pictured below in Figure 4.1q.


Figure 4.1q: Helmel Checkmaster 112-102


from
http://www.helmel.com/checkmaster.html

This is a moving bridge type benchtop CMM, with a measuring area of 12" x 12" x 10" (X, Y, Z). This unit is a manual mode machine, but can be purchased as a DCC unit. The advertised measurement resolution is 0.00002", linear accuracy is 0.00018" + 0.000006"/inch, and repeatability is 0.00016" in manual mode. The machine can accept the full line of available Renishaw touch probes, and can accept a video probe as an option.
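For example, under a specification of this form the permissible linear error for a 10-inch measurement would be roughly 0.00018" + (10 x 0.000006") = 0.00024" (assuming, as is typical for such specifications, that the second term scales with the measured length).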
The unit is equipped with a granite work table, locks and zero-play fine adjustments on all axes, computer with
a Windows operating system, Geomet CMM software, and a calibration sphere (master). Since the basic
machine is in manual mode, the operator moves the probe head and axes.
When a different probe head is installed, it may differ in weight from the previous head. There is a
counterbalance adjustment for the Z-axis (vertical) so that the probe bar will not move by itself when in any
position. On the back of the tower, a knob will adjust for less counterbalance (- sign or counterclockwise) or
more counterbalance (+ sign or clockwise).
There are locks and fine adjustments for each axis so that the fine adjustment can be used to move an axis and
take a fine reading. The locks should never be over-tightened.
Certain preventive maintenance procedures should be followed:

Each of the three axes has a scale and encoder head for determining its position during travel. One
need only rub the scale lightly in one direction with a denatured alcohol-soaked cotton ball to clean it.
This generally needs to be done only twice a year. The encoder head is not to be cleaned, except by a
trained service technician. Keeping the scale clean will keep the encoder clean. While the X and Y-axis scales are exposed, the Z-axis scale is covered; to access it, remove the tower cover after first removing the fine adjust and lock knobs. Do not pinch any wires or scratch surfaces when replacing the cover.


There are ground surfaces located on the bridge, leg rail and probe bar. They should be kept free of
rust and debris, and cleaned at least once per month. To clean, one should use denatured alcohol and a
lint-free cloth to prepare the surface, and then follow up with an application of a thin coat of light oil
(not grease) - again using a lint-free cloth.

The probe bar should be cleaned and lubricated daily (at least each day used) in the same manner as
ground surfaces to prevent rusting. This is due to frequent handling.

Since granite is a porous material, clean oil off parts to be inspected first, as oil can stain. Do not place foreign objects on the plate, and remove inspected parts when finished; rust can form and stain if parts are left on the plate. Stains can be reduced using a light amount of denatured alcohol and a medium-grade scouring pad (without rubbing too hard). Use only a light amount of special surface plate cleaner and a lint-free cloth to clean the plate, and allow it to dry fully before placing parts on the table again.

The granite plate should be kept level for best results. It is recommended to use a 0.005"/foot bubble level and adjust the knobs at the bottom of the table legs; check the condition periodically.

This CMM is supplied with a Renishaw TP-ES probe. To change or remove the head, use a properly sized
metric allen wrench to loosen the attachment set screw, unplug the probe head cable, and pull the probe from
the socket. Any head must have data entered into the Geomet PCS set-up section before using (see below).
This unit is supplied with Geomet CMM software on the attached computer, which has a version of Windows as
an operating system. Upon entering the Geomet program, one will encounter a main screen from where all
other features are accessed. The actual manual should be studied more fully before proceeding with any
inspections. Below is just a brief overview.
New inspection files can be initiated, files saved, old files retrieved, report headers created, and printing or
exporting of data can be done under the File drop-down on the menu.
Mastering (qualifying) a ball stylus and entering qualifying sphere data can be performed under the Qualify
pull-down menu. This tells the system what size ball is on the tip of the stylus, etc. so that measurements are
compensated and thus accurate. The reference sphere that is supplied with the CMM is screwed into one of the
threaded holes on the granite table. There are instructions in the manual for using various types of stylus tips
and even video probes - as discussed in the probe section earlier.
When one places a workpiece to be inspected on the CMM granite table, it can be placed in any orientation.
Datum references must be chosen before accurate inspection can begin. Generally, one would use the part
drawing to see what surfaces are to be chosen as datum references for dimensions and features. The PCS (or
part coordinate system) menu pull-down allows one to establish datum surfaces on the part before other
measurements are taken.
The Measure pull-down will walk the user through a routine to measure various types of features such as lines,
circles, planes, cylinders, etc. Some of these basic techniques were discussed earlier in the Software, Programs
and Modes of Operation section, and shown in Figures 4.1o and 4.1p.
The Construct pull-down will allow one to take features previously measured and perform operations to
determine relationships between the features.
The Tolerances pull-down allows one to check geometric tolerances of form for part features (e.g. circularity of
a hole, parallelism, flatness, etc.), or allows one to add bi-lateral tolerances to certain features.


The Modify and System pull-downs allow logistical modifications such as undo, insert, delete, decimal place
precision, etc.
There are shortcut keys that direct one to various features without using the menu system; these are used by experienced operators. The main screen also displays the present probe position in the X, Y, and Z axes, and provides means to change readout precision and zeroing.
Most of the remainder of the manual deals with carrying out operations with a sample part that is supplied with
the CMM unit. We will actually utilize the sample part in the Application exercise that follows.
To perform the application exercise, one will need to refer to the Geomet manual sections 1-3: Initiating
Geomet, 1-17: Creating Report Headers, 2-1: Probe Qualification, section 2: coordinate systems, 3-13 and
similar: Basic Alignment of an Object and/or 3-17: Exercise 3.2, etc., and then measure features. It is probably
best to do the application exercise as a small group so discussions and discovery can take place.


4.1.3 Application: Using the CMM


Materials: Helmel Checkmaster Model 112-102 CMM equipped with Geomet software package and Renishaw
TP-ES touch probe.
Parts: Helmel Checkmaster Geowidget Sample Part, plus part drawing showing features to inspect
List the features to be measured on the attached part drawing and record the drawing specs for each. Using the CMM, measure each of the features and record the actual measurements found. Answer the questions below.

Part Feature | Specified Dimension (& Tolerance) | Actual Measurement
1.
2.
3.
4.
5.
6.
7.
8.
9.
Questions
1.) Were you able to take readings with the supplied instruments? Describe any difficulty you had.
__________________________________________________________________________________________
__________________________________________________________________________________________
__________________________________________________________________________________________
__________________________________________________________________________________________
__________________________________________________________________________________________
2.) According to your measurements, is this part within drawing specification?
__________________________________________________________________________________________
__________________________________________________________________________________________
__________________________________________________________________________________________
__________________________________________________________________________________________
__________________________________________________________________________________________


Geowidget Part Drawing (use for Application Exercise)


Bibliography
Dotson, Connie, Rodger Harlow, and Richard Thompson. Fundamentals of Dimensional Metrology, 4th Edition.
Thomson Delmar Learning, 2003.


4.2 Roundness Checking


4.2.1 Introduction
Roundness of cylindrical parts can be thought of as the equivalent of flatness for planar parts. Roundness is therefore an aspect of surface metrology concerned with how nearly all points of a circular cross-section are equidistant from its center. Roundness is
important when considering spherical parts, such as bearings, and also in gaging used for inspection. As a
feature, roundness is typically hard to control, depending on the manufacturing process used in creating a part,
and it is equally difficult to measure.
4.2.2 Exploration: How Do We Measure Roundness?
Materials: Vernier calipers, Micrometers, Ruler, Other measuring instruments
Parts: Round stock of any size, soft drink can
Questions:
1.) Of the two objects given, which appears to be the closest to a perfect cylinder and why?
__________________________________________________________________________________________
__________________________________________________________________________________________
__________________________________________________________________________________________
__________________________________________________________________________________________
__________________________________________________________________________________________
__________________________________________________________________________________________
__________________________________________________________________________________________
2.) Can you verify your answer using the measuring instruments provided? Why or why not?
__________________________________________________________________________________________
__________________________________________________________________________________________
__________________________________________________________________________________________
__________________________________________________________________________________________
__________________________________________________________________________________________
__________________________________________________________________________________________
__________________________________________________________________________________________
__________________________________________________________________________________________
__________________________________________________________________________________________
3.) Discuss ways in which you could check the roundness of a part.
__________________________________________________________________________________________
__________________________________________________________________________________________
__________________________________________________________________________________________
__________________________________________________________________________________________
__________________________________________________________________________________________
__________________________________________________________________________________________
__________________________________________________________________________________________
__________________________________________________________________________________________
__________________________________________________________________________________________


4.2.3 Dialog: Roundness Checking Methods


In cylindrical parts, deviations from a true cylinder are referred to as lobes, Figure 4.2a. Lobes are high areas
around the circumference of the part that result in a part being out-of-round. The lobes can be symmetrical
(opposite one another) or nonsymmetrical (evenly spaced but not opposite).

Figure 11.37 in The Quality Technicians Handbook, Griffith.

Figure 4.2a: Lobes can be symmetrical or asymmetrical


These lobes can affect perceived dimensional measurements such as a measurement of diameter. In Figure 4.2b, we see that if we were measuring the diameter with a micrometer, we could measure the cylinder at a number of locations around the circumference and get approximately the same reading, with little to no indication that the part is not perfectly round. With parts whose lobes are symmetrical, it is possible to obtain some indication of roundness because the lobes do not deceive the measurement, Figure 4.2c.

Figure 11.38 in The Quality Technicians Handbook, Griffith.

Figure 4.2b: It is difficult to determine roundness


from simple diametrical measurement with an odd
number of lobes.


Figure 4.2c: Even number of lobes makes it easier to determine


roundness from simple diametrical measurement.

Lobes and V-Block Method


With the advent of the scientific approach to measurement, many different measuring aids were developed. One such device, the V-block, proved useful for evaluating the roundness of a part. A V-block can be used in conjunction with a comparator instrument to determine roundness. However, the V-block must have the correct angle to match the number of lobes if it is to be effective. For example, in Figure 4.2d, if we use a V-block in which the included angle of the block is in phase with the lobes, the part will appear to be round (1). If the included angle is not in phase (2), then roundness can be determined. The best way to determine the proper included angle for the V-block is by using the following formula:

Included Angle = 180° - (360° / n)

where n is equal to the number of equally spaced lobes on the part.
Figure 4.2c: Roundness can be determined using a V-block when the included angle is not in phase with the lobes.
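The included-angle formula above is simple to evaluate for common lobe counts. The short sketch below is only an illustration of that formula, not part of any inspection software.

    # Minimal sketch of the V-block included-angle formula from the text:
    # Included Angle = 180 - 360 / n, where n is the number of equally spaced lobes.

    def vblock_included_angle(num_lobes):
        """Included angle (degrees) suited to a part with num_lobes equally spaced lobes."""
        if num_lobes < 3:
            raise ValueError("the formula applies to parts with three or more lobes")
        return 180.0 - 360.0 / num_lobes

    for n in (3, 5, 7):
        print(n, "lobes ->", round(vblock_included_angle(n), 1), "degree included angle")
    # 3 lobes -> 60.0, 5 lobes -> 108.0, 7 lobes -> 128.6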


4.2.4 Application: Checking Roundness Using V-Block Method


Materials: V-block, dial indicator and stand.
Parts: Round stock of any size, soft drink can
Questions:
1.) Measure the included angle on the V-block and enter here: _______________
Place the round stock in the V-block and bring the indicator in contact with the surface of the part. Zero the
indicator at this point. Rotate the part in the block and answer the following questions. Repeat for the second part.
2.) How many lobes does the part have? Does the included angle of your block indicate that your measurements
will be in phase or out of phase?
__________________________________________________________________________________________
__________________________________________________________________________________________
__________________________________________________________________________________________
__________________________________________________________________________________________
__________________________________________________________________________________________
__________________________________________________________________________________________
__________________________________________________________________________________________
__________________________________________________________________________________________
__________________________________________________________________________________________
3.) Does the part appear to be truly round? Why or why not?
__________________________________________________________________________________________
__________________________________________________________________________________________
__________________________________________________________________________________________
__________________________________________________________________________________________
__________________________________________________________________________________________
__________________________________________________________________________________________
__________________________________________________________________________________________
__________________________________________________________________________________________
__________________________________________________________________________________________
4.2.5 Dialog: Polar Chart Method
As discussed in the preceding section, the V-block method can fail to produce accurate roundness measurements, depending on the number of lobes and the angle of the V-block. Therefore,
additional methods were developed to determine roundness. The most popular method is the creation of a polar
chart. Polar charts are produced with stylus instruments that have a spindle that carries the stylus head to ride
along the surface of the part as it rotates on the turntable, Figure 4.2d.



Figure 4.2d: Mitutoyo Roundtest RA-300


These instruments are true datum instruments in which the spindle controls the path of the stylus (Figure 4.2e), so that the stylus is unable to skid across the part surface. Because it is referenced from a true datum, the stylus can pick up departures from its circular reference path. These deviations are amplified and can be recorded on a polar chart, Figure 4.2f. The roundness of the part is represented by the difference between the two concentric circles that bound the circular reference center. It is important to remember that the polar chart is not a graph of the part shape. It is merely a representation of the departures from true roundness. When we look at a polar graph, we can more easily discriminate between the lobes and the smaller departures from true roundness. For example, in Figure 4.2f, the deviations in the graph (1, 2, and 3) indicate that the part has three lobes. The approximate location of each lobe can be determined from the angular scale on the graph. In the figure, lobe 3 spans across the 360° mark on the chart.

Figure 4.2e: The stylus records the deviations from the true circular path.


Figure 4.2f: Example of a polar chart.
Figure from Mitutoyo Users Manual, Roundtest RA-300, pg 5-8

If we were to linearize the polar chart above, it would look like the strip chart shown in Figure 4.2g. We can still see the lobes; however, if the deviations were not very great, the lobes would not stand out on the linear chart as well as they do on the polar chart. Therefore, polar charts are more commonly used in practice. Deviations in roundness are typically specified and measured in micrometers (µm). One micrometer is equivalent to one thousandth of a millimeter. With deviations that small, it is no wonder that the deviations are amplified when charted.

Figure 4.2g: Linear chart showing roundness deviations.
Figure from Mitutoyo Users Manual, Roundtest RA-300, pg 5-8


Using a Roundness Testing Instrument


As with any measuring instrument, it must always be cleaned, calibrated, and in proper working order before use. Once these are checked, it is important to make sure that the workpiece is centered and level on the turntable. If the axis of the workpiece being measured is not perpendicular to the reference axis, the path of the stylus will trace an ellipse and the measurement will be inaccurate. If the workpiece is not centered on the axis of the turntable, the measurements will exceed the measuring range of the instrument due to the exaggerated deviations from the reference circle. The methods used to center and level the workpiece on the table vary from instrument to instrument, so consult the user's guide for the proper procedure.
Once the workpiece and instrument are set-up properly, the roundness measurement can begin. The instrument
will rotate the workpiece using the turntable while the stylus traces a path and the deviations are recorded.
There are four methods for determining the center of the profile recorded by the stylus, and therefore roundness.
1. Least Squares Center Method (LSC)
2. Minimum Zone Center Method (MZC)
3. Maximum Inscribed Circle Center Method (MIC)
4. Minimum Circumscribed Circle Center Method (MCC)
The most commonly used of the four methods is the Least Squares Center Method (LSC). In this method,
roundness is defined as the minimum difference between the radii of two concentric circles that are inscribed
and circumscribed on the measured profile, Figure 4.2h. The center of the concentric circles is the point that minimizes the sum of the squares of the deviations of the measured profile. The roundness value is simply the difference
between the radii of the two concentric circles that bound the profile.

Figure 4.2h: Least squares center method polar chart (showing the inscribed circle, circumscribed circle, and mean deviation).
Figure from Mitutoyo Users Manual, Roundtest RA-300, pg 5-2
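A least-squares center can be computed numerically from the recorded profile points. The sketch below uses a common algebraic (Kasa-style) least-squares circle fit as a stand-in for the instrument's own algorithm, which may differ in detail; the roundness value is then the spread of radial distances about that center.

    # Sketch of a least-squares circle fit (Kasa-style algebraic fit) as a stand-in for an
    # instrument's LSC computation; a real roundness tester's algorithm may differ in detail.
    import numpy as np

    def lsc_roundness(points):
        """points: (N, 2) array of profile points. Returns (center, mean_radius, roundness)."""
        pts = np.asarray(points, dtype=float)
        x, y = pts[:, 0], pts[:, 1]
        # Solve x^2 + y^2 + D*x + E*y + F = 0 for D, E, F in the least-squares sense.
        A = np.column_stack([x, y, np.ones_like(x)])
        b = -(x**2 + y**2)
        D, E, F = np.linalg.lstsq(A, b, rcond=None)[0]
        cx, cy = -D / 2.0, -E / 2.0
        radii = np.hypot(x - cx, y - cy)
        # Roundness: difference between circumscribed and inscribed circles about the LSC center.
        return (cx, cy), radii.mean(), radii.max() - radii.min()

    # Example: a nominally 25 mm radius profile with a small 3-lobe deviation (2 um amplitude)
    theta = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
    r = 25.0 + 0.002 * np.cos(3.0 * theta)
    profile = np.column_stack([r * np.cos(theta), r * np.sin(theta)])
    center, mean_r, roundness = lsc_roundness(profile)
    print(center, round(mean_r, 4), round(roundness * 1000.0, 1), "um")  # roundness ~4 um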

Generally the resulting output of a roundness checking instrument will contain the following items.
1. Roundness value (defined above).
2. The coordinates of the locations of the part relative to the reference axis.
3. The peak height of the profile deviation. This is measured as the maximum deviation from the mean.
4. The valley depth, which is the minimum deviation from the mean.

5. Mean roundness, Figure 4.2i, based on the absolute value of the deviations from the mean circle and
   calculated as follows (a small computational sketch follows this list):

   Mean Roundness = (Σ|ri|) / n

   where ri is the i-th deviation from the mean circle and n is the number of samples.

Figure from Mitutoyo Users Manual, Roundtest RA-300, pg 5-3

Figure 4.2i: Calculation of mean roundness.


6. Number of peaks outside of the mean circle, Figure 4.2j.

Figure from Mitutoyo Users Manual, Roundtest RA-300, pg 5-3

Figure 4.2j: 6 peaks on object.
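As a minimal illustration of how the output quantities listed above could be computed from signed radial deviations about the mean (least-squares) circle, consider the sketch below. The data values and variable names are hypothetical, and the instrument's own definitions, as given in its manual, take precedence.

    # Sketch: computing the output quantities listed above from signed radial deviations
    # about the mean (least-squares) circle: + means outside the circle, - means inside.
    deviations_um = [1.5, -0.8, 2.1, -1.9, 0.4, -0.2, 1.2, -1.6]  # hypothetical data, micrometers

    roundness_value = max(deviations_um) - min(deviations_um)   # peak-to-valley spread
    peak_height = max(deviations_um)                            # maximum deviation outside the mean
    valley_depth = min(deviations_um)                           # maximum deviation inside the mean
    mean_roundness = sum(abs(d) for d in deviations_um) / len(deviations_um)
    # Simplistic count of sample points outside the mean circle; a real instrument
    # counts distinct peaks (local maxima) rather than individual points.
    points_outside = sum(1 for d in deviations_um if d > 0)

    print(roundness_value, peak_height, valley_depth, round(mean_roundness, 2), points_outside)
    # -> 4.0 2.1 -1.9 1.21 4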

For more detailed information on roundness, refer to ANSI Y14.5, Dimensioning and Tolerancing, and ANSI B89.3.1, Measurement of Out-of-Roundness.


4.2.6 Application: Roundness


Materials: Roundness checking instrument.
Parts: Round stock of any size, soft drink can
Questions:
1.) Set up the roundness checking instrument per the manufacturer's instructions and measure the roundness of
both parts. Examine the polar or strip chart output and then answer the following questions.
2.) How many lobes does the part have? Mark them on the chart.
__________________________________________________________________________________________
3.) What is the roundness value? Does the part appear to be truly round? Why or why not?
__________________________________________________________________________________________
__________________________________________________________________________________________
__________________________________________________________________________________________
__________________________________________________________________________________________
__________________________________________________________________________________________
__________________________________________________________________________________________
__________________________________________________________________________________________
__________________________________________________________________________________________
__________________________________________________________________________________________


Bibliography
Dotson, Connie, Rodger Harlow, and Richard Thompson. Fundamentals of Dimensional Metrology, 4th Edition.
Thomson Delmar Learning, 2003.
Griffith, Gary K. The Quality Technician's Handbook, 5th Edition. Prentice Hall, 2003.
User's Manual No. 471OGB, Series No. 211, Roundtest RA-300. Mitutoyo, Inc.
http://www.mitutoyo.com/AboutMAC/PR/cs.html


4.3 Use of Microscopes & Optical Comparators


4.3.1 Exploration
There is no exploration exercise for this module.

4.3.2 Dialog
Using Light and Optics for Magnification
As mentioned at the end of the earlier module, 3.4 Optical Theory of Light Waves, one of the four primary uses
of light in metrology was to visually enlarge objects for more precise examination and measurement, or
magnification. Microscopes and optical comparators are prime examples where this principle is used. In a
basic sense, both involve projection of an enlarged image onto a screen to allow easier study. We will first
cover microscopes in more depth, and then provide detail on optical comparators later in the module.
Optical comparators utilize the techniques of shadow projection or profile projection. A microscope could
actually be adapted to perform profile projection and an optical comparator could be adapted to observe
surfaces.

Microscope Introduction
Within the use of microscopes, or microscopy, there are two basic divisions: direct measurement and positioning. When the instrument is itself the standard, we are using direct measurement. An item that will be covered in more depth later, the reticle, is both the standard and the means of reading that standard. Direct
measurement can involve (a) linear measurements, (b) surface topography and metallurgical measurements, and
(c) measurement of features inaccessible by other means.
The most extensive use of a microscope is as a positioner. After positioning a part with the microscope so we
can see the part feature and a scale, we actually use another instrument to make the measurement. When used
as a positioner, we are using the displacement method.

Brief History of Lenses and Microscopes


In the unrecorded past, someone picked up a piece of transparent crystal thicker in the middle than at the edges,
looked through it, and discovered that it made things look larger. Someone also found that such a crystal would
focus the sun's rays and set fire to a piece of parchment or cloth. Magnifiers and "burning glasses" or
"magnifying glasses" are mentioned in the writings of Roman philosophers during the first century A.D., but
apparently they were not used much until the invention of spectacles toward the end of the 13th century. They
were named lenses because they are shaped like the seeds of a lentil.
It is not quite certain who invented the microscope but it was probably spectacle makers in The Netherlands,
who invented both the compound microscope and the refracting telescope between 1590 and 1610. In 1609,
Galileo, father of modern physics and astronomy, heard of these early experiments, worked out the principles of
lenses, and made a much better instrument with a focusing device. The first well known users were Van
Leeuwenhoek and Hooke. Van Leeuwenhoek used a single lens microscope and Hooke used a compound
microscope. The early instruments used by Van Leeuwenhoek were far superior to those of Hooke. Lens
corrections were unknown at the time and the compound microscopes used by Hooke added the lens faults of
ocular and objective. It was Van Leeuwenhoek who made the most discoveries due to his sharp eyesight and unfailing curiosity. The earliest simple microscope was merely a tube with a plate for the object at one end and, at the other, a lens giving a magnification of less than ten diameters, that is, less than ten times the actual size. These provided wonder when used to view fleas or tiny creeping things and so were dubbed "flea glasses." The close-up image in Figure 4.3a below shows the simplicity of a Van Leeuwenhoek microscope. A subject was placed on the needle and could be positioned with the adjusting screw.

Figure 4.3a: A Van Leeuwenhoek


Microscope Close-up
from http://www.microscopyuk.org.uk/index.html?http://www.mi
croscopy-uk.org.uk/intro/

Eventually the compound microscope took over, as shown below in Figure 4.3b. Although makers made
beautiful instruments, the real improvement of the microscope came with the invention of the achromatic lens.
Achromatic lenses for spectacles were developed by Chester Moore Hall around 1729. It was difficult to make
small high power achromatic lenses. Jan and Harmanus van Deyl were the first to make these lenses at the end
of the eighteenth century, and Harmanus van Deyl started the commercial fabrication of achromatic objectives
in 1807. The lenses with the highest numerical apertures were produced around 1900.

Figure 4.3b: A Simple Compound


Microscope
from http://www.microscopyuk.org.uk/index.html?http://www.mi
croscopy-uk.org.uk/intro/


A famous producer of top quality microscopes was the firm of Powell and Lealand, which made very high
power objectives. Carl Zeiss Jena produced its first oil immersion objective in 1880. Due to better mass
fabrication techniques, microscope manufacture was concentrated in Germany after the beginning of the
twentieth century. Further developments followed - such as improvement of the microscope stand and the
development of methods to increase the contrast. The phase contrast microscope was invented in 1934. Other
contrast enhancing methods were developed such as modulation contrast and differential interference contrast
(DIC), both with several variants. Fluorescence microscopy became a very valuable addition to light
microscopy since about 1970.
Present day instruments give magnifications up to 1250 diameters with ordinary light and up to 5000 with blue
light. A light microscope, even one with perfect lenses and perfect illumination, simply cannot be used to
distinguish objects that are smaller than half the wavelength of light. White light has an average half-wavelength of 0.275 micrometers. Any two lines that are closer together than 0.275 micrometers will be seen
as a single line, and any object with a diameter smaller than 0.275 micrometers will be invisible or, at best,
show up as a blur. To see tinier particles under a microscope, scientists must bypass light altogether and use a
different sort of "illumination" with a shorter wavelength - the electron beam of an electron microscope.

Optical Principles for Lenses


Certain basic principles of light were discussed in more detail in Module 3.4. We will concentrate on reflection
and refraction. When light is directed through a lens, which is thicker in the middle than toward the edges, a
few things occur. Some light is reflected (according to laws of reflection), while other light is refracted
(according to laws of refraction). Other light could be scattered or absorbed. Four important principles to note
are:
1. The angles of incidence and reflection are equal, but the angle of a refracted ray varies according to the refractive index of the material through which the ray passes.
2. All of these rays lie in the same plane.
3. When a ray is normal (90°) to a surface, the angles of incidence, reflection, and refraction are all zero.
4. Depending on the refractive index of the material, there is an angle which, if exceeded, will have no corresponding angle of refraction. The ray will be reflected internally, and this angle is known as the critical angle.

A lens, which was discussed early in the Historical section above, has two highly polished and opposing
convexly curved surfaces, as shown in Figure 4.3c below. If this lens were sliced into extremely thin sections,
as shown in the figure, the amount of curvature would become negligible. Each section would behave like a
prism. The resulting double refraction (entering and exiting the material) would cause light from a distant
source to converge to a point.

Figure 4.3c: Refraction Through a Lens


from Appendix C, fig C, page 588 of Dim.
Metrol by Dotson

Figure 4.3d shows that some of the light is reflected off a lens and is lost. All of the refracted light adds to the
eventual image (at far right of the figure). Reflected light is not only lost, but can create some problems like
noise in an electrical system.

Figure 4.3d: Refracted Light Contributing


to an Image; Reflected Light Lost
from Appendix C, fig D, page 589 of Dim.
Metrol by Dotson
This is true of converging lenses, where one (or both) of the surfaces is convex. Lenses with concave surfaces are diverging lenses. Both types are shown in Figure 4.3e below.

Figure 4.3e: Converging and Diverging Lenses


from Appendix C, fig E, page 589 of Dim. Metrol by Dotson

For the converging lens, rays converge into a real image. Diverging lenses cause the rays to fan out, and the
point that they appear to come from is the virtual image. Figure 4.3f below shows various locations of objects
and images with converging lenses.


Figure 4.3f: Locations of Objects and Images with Converging Lenses


from Appendix C, fig F, page 589 of Dim. Metrol by Dotson

Each view shows light rays starting at a common object point, and then reuniting at a termination point on the
opposite side known as the image. Together, they are known as conjugate points. The image is closest to the
lens for the most distant light sources, and that point is known as the primary focus of the lens. The distance from the lens to this primary focus is known as the focal length of the lens, and is its chief identifying feature.
When viewing things in the real world, we work with objects of size and not points, and this affects images.
Any image seen consists of an infinite number of points in an orderly distribution. Rays from any point on an
object pass through all parts of the lens on their way to becoming an image. Even if a lens were cut in half, as
shown below in Figure 4.3g, the image would still show the entire original object.

Figure 4.3g: Image from Full Lens


vs. Half Lens from Appendix C,
fig G, page 590 of Dim. Metrol by
Dotson


The only difference would be the amount of light. In the top view above, the image will contain all of the light
from the source received by the first lens surface less reflection, scattering and absorption. In the lower view
above, the image would receive half of the light of the top view, but still be the entire image. One can cover
half of a lens, but the image will still be a full circle.

Lens Complications
One type of complication with lenses is the problem of inversion and reversion. As shown in Figure 4.3h
below, an image consists of an infinite number of rays, but this has been simplified to just three: A, B, and C.
The real image from the convergent lens gets inverted from the original object. With a divergent lens, this is
not the case.

Figure 4.3h: Inversion with a Convergent Lens, but not with a


Divergent One from Appendix C, fig H, page 590 of Dim.
Metrol by Dotson


There are actually four different possibilities that can occur, as shown below in Figure 4.3i. Not only can top invert to bottom, but left can revert to right, or both inversion and reversion can occur.

Figure 4.3i: Inversion, Reversion, and


Combination of Both from Appendix C, fig I
(use just upper half with letter F), page 591 of
Dim. Metrol by Dotson

Up until now, we really assumed light with one wavelength, but that is not really the case in reality. As shown
in Module 3.4, white light is made up of different colors of light, each with a different wavelength. Since each color has a different wavelength, each gets refracted by a differing amount when passing through the same lens, similar to what happens when white light passes through a prism. One will tend to see
rainbow-like fuzziness when viewing through a simple lens. This effect is called chromatic aberration.
At low magnification powers, it is not much of a problem, but at high magnifications, one sees images created
at all wavelengths. Chromatic aberration is corrected through the use of achromatic lenses, as shown in Figure
4.3j. These utilize lenses of different refractive indices to bring back together the diverging wavelengths of
light. In the simplified example shown, two elements are used to correct red and blue light. The second
element is designed to refract the red wavelength less than the first element. The blue wavelength thus catches
back up to red and combine into white light.
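To make the wavelength dependence concrete, the short sketch below applies Snell's law with two slightly different refractive indices. The index values are assumptions for illustration only (roughly those of a common crown glass) and are not taken from the text.

    import math

    def refraction_angle(incident_deg, n_in, n_out):
        # Snell's law: n_in * sin(i) = n_out * sin(r); returns r in degrees.
        i = math.radians(incident_deg)
        return math.degrees(math.asin(n_in * math.sin(i) / n_out))

    # Assumed illustrative indices, roughly those of a crown glass:
    # the glass refracts blue light slightly more strongly than red.
    n_air, n_red, n_blue = 1.000, 1.514, 1.522
    print(refraction_angle(30.0, n_air, n_red))    # about 19.3 degrees
    print(refraction_angle(30.0, n_air, n_blue))   # about 19.2 degrees; blue is bent more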


Figure 4.3j: Achromatic Lens Concept (from Appendix C, fig K, page 591 of Dim. Metrol by Dotson)

One can also experience spherical aberration. This is due to an inherent characteristic of the lens, such as its degree of curvature. Rays passing through the center of the lens are brought to focus at a different point than those passing through the outer edges, and the result is an unclear image. The correction, again, is to use multiple elements in the lens.
Both of the previously mentioned aberrations are quite common in most optical systems. They often occur at the same time and thus require intensive calculations for even the simplest of systems. Sophisticated systems like those in cameras often have lenses with 10 or more elements. Prior to computer use, a lens designer could spend most of a career perfecting only a small number of systems.
In many systems, one will find diaphragms known as stops being used. Generally, the outermost areas of a lens have the greatest amount of aberration. Stops restrict the light-gathering capability of the system, but in doing so they also reduce the aberrations. The most restrictive stop in any system is known as the aperture, or opening size, of the system. It is also a measure of the light-gathering capability of the system.
A simple diagram of a stop and lens is shown in Figure 4.3k below. The stop clips the outermost rays at its edge and blocks those farther out completely.


Figure 4.3k: Lens System with a Stop (from Appendix C, fig L, page 591 of Dim. Metrol by Dotson)

Magnification
Images produced by lenses can be made any size without the addition of external power, which most mechanical systems require; added external power tends to bias measurement systems. Magnification is achieved simply by manipulating the distances from the lens to the object and to the image, as shown below in Figure 4.3l.

Figure 4.3l: Simple Magnification Scheme


from Appendix C, fig M , page 592 of Dim. Metrol by Dotson

The image is treated as if it were a real object, and this is the basis of all compound optical systems. The eyepiece is the magnifier. In the figure, A is an example of a telescope. The image is inverted with respect to the object, and is seen by the eye as smaller than the actual object. In the figure, B is an example of a microscope. There
is no inversion, and the image is seen as larger than the original object by the eye. Simple magnifiers have
many uses.
Perception of size depends on the angle at which an object subtends the eye. Figure 4.3m below demonstrates
three different cases. View A shows the unaided eye. The object at the far left is either small or is far away
based on the small angle that the object subtends. In view B, the use of a lens widens the perceived subtended
angle. The object is still the same size as in A, but appears larger. In view C, if the same lens in B is brought
closer to the eye, the object appears even larger.

Figure 4.3m: Perceiving Size Distance I


from Appendix C, fig N , page 593 of Dim. Metrol by Dotson
The subtended angle also changes our perception of distance, as shown in Figure 4.3n below. In the top view,
objects A and B lie in nearly the same plane. Since the subtending angle from B is wider than that of A, we can
tell it is larger than A. In the lower view, if we know in advance that C and D are really the same size, we know
that D is further away due to the smaller subtended angle.

Figure 4.3n: Perceiving Size Distance II


from Appendix C, fig O , page 593 of Dim. Metrol by Dotson


The closer one gets to an object, the larger the angle becomes, and the greater detail we detect. However, we
will eventually reach the near point of the eye. Any further shortening of distance will actually cause a loss of
definition. The near point is the spot where maximum size can be viewed by an unaided eye, which for most
adults is about 10 inches for distinct vision over an extended time period. A magnifier, as shown in Figure 4.3o
below, enables the eye to remain relaxed while the object is viewed from a shorter distance. Although the
object in the figure is only x distance from the lens, the lens spreads out its angle to the eye, and makes it appear
larger as if it were y distance from the lens.

Figure 4.3o: Perceiving Size Distance II


from Appendix C, fig P , page 594 of Dim. Metrol by Dotson

Using the same type of x and y distance relationships from Figure 4.3o, look at Figure 4.3p below. In this figure, the object is really at distance x (from 4.3o); it is closer than the near point. If it could be seen, it would have size A (from 4.3o). The lens spreads out the rays so they appear to be coming from B at distance y (still from 4.3o). If x were 3 and y were 15 in this case, the magnification is said to be 5X, or 5-power (15/3 = 5).
Technically speaking, the magnifying power is defined by the ratio between the angle subtended by the image at the eye and the angle subtended by the object if viewed directly at the nearest distance of distinct vision. In Figure 4.3p, as the size of the object decreases, it becomes necessary to use greater magnification for the same degree of clarity. At greater than 5X magnification, multiple-element lens systems are required, as shown, because one loses definition with a single lens above 5X.
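As a quick numerical illustration of the subtended-angle idea and the x/y ratio above, a minimal sketch is given below. The 0.5 in. feature size and the two viewing distances are assumptions for illustration and do not come from the figures.

    import math

    def subtended_angle_deg(size, distance):
        # Full angle, in degrees, that an object of the given size subtends at the eye.
        return math.degrees(2.0 * math.atan(size / (2.0 * distance)))

    # Perceived size follows the subtended angle (Figures 4.3m and 4.3n).
    # Assumed illustrative feature: 0.5 in. wide, viewed from 20 in. and then from 10 in.
    print(subtended_angle_deg(0.5, 20.0))   # about 1.4 degrees
    print(subtended_angle_deg(0.5, 10.0))   # about 2.9 degrees at the near point; it looks larger

    # Simple magnifier example from the text: x = 3, y = 15, so the magnification is y / x.
    x, y = 3.0, 15.0
    print(y / x)                            # 5.0, i.e. 5X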


Figure 4.3p: Multiple Element Lens Systems for Different Magnification Powers (from Appendix C, fig Q, page 594 of Dim. Metrol by Dotson)

Note in the above figure that the distances from eye to lens and from lens to object decrease as the magnification increases. Two practical problems are created because of this decrease. Sometimes a simple magnifier can't be used because the workpiece prevents the head from getting close enough. Also, the magnifier and user can cast such a shadow over the object that it is too dark to see (not enough light is available). In this case, the magnifier requires built-in illumination.
In order to really measure sizes with simple magnifiers, one requires a reticle. A reticle (sometimes called a graticule) is a transparent pane of glass that has scales, angles, contours or radii inscribed on it. Some examples are shown below in Figure 4.3q.


Figure 4.3q: Reticle Examples (from Appendix C, fig T, page 595 of Dim. Metrol by Dotson)

Microscope Principles
Based on the above optical principles, we can now look at a microscope in greater detail. A few schematics of
the average compound light microscope can be found in Figures 4.3r and 4.3s below.

Figure 4.3r: Typical Light Microscope (from http://science.howstuffworks.com/lightmicroscope5.htm)


Figure 4.3s: Light Microscope Diagram


From http://elchem.kaist.ac.kr/vt/chemed/imaging/lmicrosc.htm

As seen in the figure above, there are two stages of magnification: the eyepiece and the objective lens. Another diagram is shown in Figure 4.3t below.

Figure 4.3t: Microscope Cross-section


from fig 18-3, page 548 of Dim. Metrol by Dotson


In the above figure, a light source, either mirror or light bulb, would be located to the left of the work W. A
rheostat is often supplied for bulb systems to adjust the amount of light desired. The objective lens O is just to
the right of the work W. FO is the focal point of the objective lens. M is the eyepiece lens, or ocular lens, and
M is the stop. A real image is formed at I1 and the virtual image is formed at I2. As seen above in Figure 4.3r,
the microscope tube holds the eyepiece at the proper distance from the objective lens and blocks out stray light.
There are generally coarse and fine-focus adjustment knobs. The coarse-focus adjust brings the object into the
focal plane of the objective lens and the fine-focus makes final adjustments to focus the image. The work is
generally held on a stage.
The effective magnification of the system is the product of the eyepiece magnification (usually 10X) and
objective lens magnification. It is generally recommended to keep both eyes open when viewing. This often
takes some practice, but there are also binocular or stereoscopic styles now available. Recommended
magnifications for most work can be found below in Figure 4.3u.

Size of Detail (metric)      Size of Detail (inch)      Magnification

2.54 mm - 254 µm             0.10 - 0.010               6.6X - 30X
254 µm - 25.4 µm             0.010 - 0.001              20X - 60X
25.4 µm - 12.7 µm            0.001 - 0.0005             40X - 90X
12.7 µm - 254 nm             0.0005 - 0.00001           80X - 150X

Figure 4.3u: Magnification Recommendations
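The sketch below combines the two-stage magnification rule with the guidance of Figure 4.3u. The ranges are transcribed from the table above, and the lookup function is only one possible way to use them, not a prescribed procedure.

    def effective_magnification(objective_power, eyepiece_power=10):
        # Overall magnification is the product of the two stages.
        return objective_power * eyepiece_power

    # Ranges transcribed from Figure 4.3u, keyed by the smallest detail of interest (inches).
    RECOMMENDED = [
        (0.010,   (6.6, 30)),    # details from 0.10 down to 0.010 in.
        (0.001,   (20, 60)),     # details from 0.010 down to 0.001 in.
        (0.0005,  (40, 90)),     # details from 0.001 down to 0.0005 in.
        (0.00001, (80, 150)),    # details from 0.0005 down to 0.00001 in.
    ]

    def recommended_range(detail_in):
        # Return the (low, high) magnification range suggested for a given detail size.
        for lower_bound, mag_range in RECOMMENDED:
            if detail_in >= lower_bound:
                return mag_range
        return None   # finer than the table covers

    print(effective_magnification(2, 15))   # 30X, the setup used in the application exercise
    print(recommended_range(0.005))         # (20, 60): a 0.005 in. detail calls for 20X to 60X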

The Measuring Microscope


Because measuring with a microscope is direct, we can measure to a high degree of reliability. The reticle, as mentioned earlier, is used as the standard, but it also limits the measurement to the refinement of the reticle. Some reticles are simply crosshairs, as can be seen below in Figure 4.3v.

Figure 4.3v: Specialized Reticles (from fig 18-6, page 551 of Dim. Metrol by Dotson)

In the figure above, example A is the most common. The lines are etched on the glass and are generally between 0.0002 in. and 0.0003 in. thick. With this style, one can make settings to 50 µin. using low magnifications (objective 2.5X to 5X and eyepiece 10X, for a total of 25X to 50X). For greater reliability, it is recommended to use a reticle like B, with broken lines, so that the feature can be centered on the line. A bifilar reticle like C is useful for picking up scale lines; the parallel lines are spaced slightly wider than the scale lines so one can make precise settings. For the most precise work involving photographic plates, electronic workpieces, and other non-metal workpieces, a reticle like D is used. Its two lines are at an angle to each other (generally 30°). The eye easily establishes symmetry among the four spaces created when the reticle is positioned over the fine broken line.
Micrometer eyepieces are available with reticles in the principal image, and can read vernier scales. The scale image becomes superimposed over the workpiece. There are axis adjustors for traveling along a coordinate. Examples are found below in Figure 4.3w. These eyepieces allow one to move the scale from one part feature to another. The displacement between the features is read both by the micrometer and the reticle. The top example is for precise angular measurements and the bottom for precise linear measurements.

Figure 4.3w: Micrometer Eyepieces (from fig 18-7, page 552 of Dim. Metrol by Dotson)


Unlike biological applications, where one can slice thin specimens and shine light through the work, stage illumination can become a problem for the solid parts being viewed. As magnification is increased, the distance between the objective lens and workpiece decreases, which can make it difficult to light the part adequately to see the desired feature. The light reflected from a workpiece is reduced by the square of the increase in magnification, so a part feature clearly visible at 10X may be invisible at 50X. There are two actions one can take:
are two actions one can take:
1. Project focused beams of light onto the part of the workpiece being examined (this may be inadequate at the highest possible magnifications).
2. Break into the optical path and inject light toward the workpiece.

An example of a measuring microscope can be seen in Figure 4.3x below.


Figure 4.3x: A Measuring Microscope, the Mitutoyo TM-505 (from http://www.creativedevices.com/measurin.htm)

Microscopes used for surface topography have magnifications up to 400X, whereas those for metallurgical applications can reach 2000X in order to resolve crystalline structures. Most coordinate measuring machines (CMMs) have adaptors for microscopes to locate part features (center and edge finders).
Parts of the measuring microscope pictured above can be found in Figure 4.3y below.
Parts of the measuring microscope pictured above can be found in Figure 4.3y below.


Figure 4.3y: Mitutoyo Measuring Microscope

Using the Measuring Microscope


By placing a small workpiece on the center of the stage glass and bringing it into focus, one can make adjustments before doing work. A detailed manual is supplied with each unit and should be followed. After locating a workpiece edge, one must adjust the reticle by turning the angle dial so that it corresponds to movement of the X-Y axes. One must usually also zero the vernier scale in order to measure distances. There is generally also a procedure for adjusting the crosshairs of a reticle by rotating it 180° after finding an edge. General specifications require that the reticle center remain within 3 µm (about 0.0001 in.) of the center of the crosshairs during rotation. There is usually more than one reticle supplied, and a procedure for switching reticles. As discussed and shown earlier, different reticles have different uses. Special template reticles are available for purposes such as inspecting screw threads or involute gear teeth. A diopter adjustment is provided for sharp viewing (focusing) of the reticle itself. Height measurements are even possible through use of certain attachments and indicators.
Workpieces can be positioned and, optionally, clamped to the stage to prevent movement during use. There are also means to orient the workpiece for viewing by adjusting either the workpiece or the reticle; this allows for better measurements when moving the X or Y axis. Readings during movement are generally obtained from the micrometer head. There are means for taking angular measurements as well. Various illumination modes are supplied with this type of unit. Transmitted illumination is generally used for contour images; sometimes filters are provided. Reflected illumination is used for surface features, and its angle and orientation usually must be adjusted for optimum viewing. A switch is often provided so both can be used simultaneously.

Preparation for Application Exercise


For the upcoming application exercise, one may find that a Quadra-Chek QC200 unit is linked to the measuring microscope. This is a useful piece of equipment, as it enables one to actually read the displacement of the axes and provides a digital readout of the resulting measurement. The dials for the axes on the measuring microscope have no graduations, and therefore one cannot tell the distance traveled from them alone.


Make sure the Quadra-Chek cables are linked to both axes, the unit is plugged in, and the on/off (I/O) switch on
the rear of the unit (just above the power cord) is switched on. There is a red on/off switch on the lower right-hand front of the unit as well. To make things simpler, one can just use the Current Position indication on the
main screen. Each X and Y figure can be zeroed between measurements by pressing the corresponding button
just to the right of the screen.
Turn on the microscope, and make sure the transmitted illumination light is lit. There is also a reflected
illumination light, which is useful for some of the measurements, but will generally cause shadows for reading
outside edges. Once a part has been positioned on the stage, use the crosshair reticle (seen through the
eyepiece) and an object (like a pencil tip) to square up the part with respect to the X and Y axes as best as
possible. The reticle should be zeroed with respect to its vernier. One can determine squareness by following
an edge of the part while incrementing either of the axes.
Use the coarse adjustment to bring the part into focus, and then fine-tune with the fine adjust. The 2X objective
lens should be in place, as well as the 15X eyepiece lens. This will yield 30X overall.


4.3.3 Application: Using the Measuring Microscope


Materials: Mitutoyo TM-505 Measuring Microscope linked to Quadra-Chek QC200
Parts: Electronic part Dual In-Line Package LM741CN, plus part drawing showing features to inspect
List features to be measured on attached part drawing and record drawing specs for each. Using the measuring
microscope, measure each of the features and record actual measurements found. Answer questions below.

Part Feature                                              Specified Dimension (& Tolerance)    Actual Measurement

1. body short width                                       .250 +/- .005                        ______________
2. body long width                                        .373 - .400                          ______________
3. distance to center of last prong from end (4 plcs.)    .045 +/- .015                        ______________
4. prong ctr-ctr (6 plcs.)                                .100 +/- .010                        ______________
5. end prong main widths (4 plcs.)                        .039 ref                             ______________
6. center prong main widths (4 plcs.)                     .050 ref                             ______________
7. prong set ctr-ctr spread (meas. 4 plcs.)               .325 +.040/-.015                     ______________
8. prong small widths (8 plcs.)                           .018 +/- .010                        ______________
9. body thickness (1 meas.)                               .130 +/- .005                        ______________

For 1-6, place part on its back, with prongs up, use only transmitted illumination light
For 7-9, may find it easier to use reflected illumination light as well
For 8, place part on its side, re-align
For 9, lay part on an end

Questions
1.) Were you able to take readings with the supplied instruments? Describe any difficulty you had.
__________________________________________________________________________________________
__________________________________________________________________________________________
__________________________________________________________________________________________
__________________________________________________________________________________________
2.) According to your measurements, is this part within drawing specification?
__________________________________________________________________________________________
__________________________________________________________________________________________
__________________________________________________________________________________________

Placeholder for part drawing


4.3.4 Dialog
Optical Comparator Introduction
Optical comparators have five notable advantages over microscopes:
1. The field of view may be much larger, so larger areas may be examined in one setting.
2. Because the area being examined is larger, more than one person can be involved in measurement and analysis. Pointing out and explaining features is less complicated and less prone to misunderstanding.
3. Measurements can be made directly on the screen using ordinary drafting instruments.
4. The photographic adaptor, if used, is less complicated than that for a microscope. A screen can be photographed with an ordinary camera.
5. There is less eyestrain than even with a binocular microscope, which reduces fatigue.

One finds optical comparators used fairly widely in industry, where measuring microscopes are rarer. Unfortunately, the bulk of a comparator is an issue: to project an object of width A at a magnification power of B, one requires a screen A x B wide. Comparators also cost more, especially at the low end. One example of an optical comparator is shown below in Figure 4.3z.

Figure 4.3z: An Optical Comparator (from http://www.starrett.com/pages/743_hf750.cfm)


The basic optical system is shown below in Figure 4.3aa

Figure 4.3aa: Optical Comparator Schematic


From figure 18-14, p 556 of Dim Metr by Dotson

A condensing lens (C) images the light source (S) in the plane of the object being viewed (AB). The image of the light must be larger than the object. A projection lens (P) receives the light that passes around or through the object and forms a real but inverted image (A'B') on the projection screen (G). The ratio of A'B' to AB is the magnification, and it is determined by the focal length of the projection lens (P).
There are two types of projection screens: translucent and opaque. A translucent screen displays the magnified image on the back side of the screen. An opaque screen is more like a movie screen, where the image is projected from the front. Opaque screens have the highest accuracy, but are used in only the largest systems; translucent screens are used in all others. When using a translucent screen, one must not vary the viewing angle by any more than 20° from normal. Because the amount of light transmitted is greatest along the line straight back from the screen, the amount falls off rapidly as the eye moves from side to side, and beyond this angle one does not see portions of the screen in the same relative relationship. Most screens come with X and Y axis centerlines. Screens are also usually graduated in degrees of arc when not equipped with bezels for screen rotation.

Brief History
The comparator was invented in 1920, and was originally called a shadowgraph. James Hartness developed the
optical comparator out of a need to standardize screw thread sizes. Serving on the National Screw Thread
Commission, Hartness set out to create a machine that could measure the complex curves of a screw. This first
optical comparator projected the shadow of the object onto a screen a few feet away, and employees could then
measure the shadow image of the screw thread against draft designs. By the end of the 1920s, an optical
comparator was a single integrated unit able to fit on a table top.
In the 1940s, it became obvious that optical comparators were indispensable tools of the design and production
process. As the United States became involved in World War II, the defense industry used optical comparators
for weapons and special equipment. This and a booming automobile industry helped the optical comparator
become a staple in part measurement by the 1950s.
Automatic edge detection was added in the 1960s, and the 1970s brought about digital readout capabilities.
About 15 years ago, the issue of backlash was resolved when manufacturers started using linear scales. Before
then, measuring accuracy relied upon threads on a rotary encoder (using a hand crank to turn and move the
stage). This problem was eliminated when operators switched from rotary encoders to linear encoders.
In the last decade or so, changes to optical comparators have focused on additional functionality, improvement
of the quality of imaging, creating fully automated machines and integrating computer technology into the
system. Software increases the capabilities of the optical comparator and makes it easier for operators to use special features of the system, such as transferring selected points of measurement into a program that can directly compare them to CAD file data. Through the use of three-axis touch probing or a laser non-contact device, the optical comparator is now capable of measurement in the Z axis.

Profile Projection Limitations


The greatest limitation is that the feature being examined must lie in the plane of sharpness. This zone, normal to the optical axis, is generally only 0.005 in. to 0.010 in. deep, which can cause problems for many workpieces. Threads can be a problem because they are helixes; their profiles lie at an angle to the workpiece axis. This problem, though, is on the order of 0.0001 in. Reflection from a workpiece can also pose a problem in profile projection. Thin workpieces are not a problem, and parts with beveled edges often generate no problems. If a workpiece is long in the optical axis, however, a smooth surface can result in specular reflection. If that light falls within the dark area caused by the part itself, the edge may become difficult to determine precisely. If a face of an object, like the rectangular block below in Figure 4.3ab, is parallel to the optical axis, one gets a clear image of only the edge nearest the projection lens.

Figure 4.3ab: Potential Reduction in Clarity


From figure 18-17, p 558 of Dim Metr by Dotson

In the figure, P is the focal plane of the projection lens. In the top view, block A has a face in that plane. Its
top edge is defined by light ray ab from the light source. All lower rays are dark on the screen and all higher
ones are bright. Rays at a higher angle, like cd, will be reflected, and the reflected light will show in the bright
area of the screen. In B, the opposite face of the block is in the focal plane. Ray ab still defines the top edge,
but the reflected light from c now forms an image on the screen as if it were ray xd. The reflection will appear
to have originated within the block, adding light to the dark part of the image. This is also known as the wall
effect.


The most common limitation is field of view. The screen size limits the amount of the workpiece one can show at any one time: the greater the magnification, the less can be displayed within the screen's borders. The amount of a workpiece one can show is determined by the screen diameter divided by the magnification. This is illustrated in Figure 4.3ac below. The circles in the figure are the maximum areas that can be shown at the given magnifications.

Figure 4.3ac: Field of View Guide


From figure 18-19, p 559 of Dim Metr by Dotson

Applications
Use Direct Measurement whenever possible. It is more reliable, especially when the following are true:
1. The workpiece is suitable for profile comparison (see the limitations section above).
2. The longest feature of the workpiece can be displayed entirely on the screen.
3. Tolerances are within the range of the equipment.

A drafting scale could be used, based on the magnification, or devices such as a toolroom chart gage exist. A chart gage example is shown in Figure 4.3ad below. With a chart gage, one can compare arcs and measure angles. With direct measurement, one can check things like squareness and detect small burrs or dust particles. Direct measurement is generally the least prone to errors. These errors are limited to optical distortion, or the error associated with viewing one part of the workpiece at one end of the screen and another part of the same piece at the other end of the screen. The rest is usually operator error, or error in reading. Certain chart-gage
materials like vellum can be subject to error because of extreme humidity changes, but temperature changes
rarely affect results much.

Figure 4.3ad: Chart Gage Example From figure 18-23, p 561 of Dim Metr by Dotson

Measurement by Translation occurs when a workpiece is indexed linearly (no rotation involved), and is generally done when a workpiece is longer than the screen width. One watches as a feature just touches a crosshair and then measures the distance the work table has moved. This is typically along either the X or Y axis, but some machines may allow for Z measurements as well. One must remember to always take the measurement from the same side of the crosshair, as the crosshair has a width to it (although small). Errors with this method occur due to operator error, as mentioned previously, as well as the accuracy of the worktable motion and one's ability to align the part feature to a reference line on the screen.
One can perform Measurement by Comparison using chart gages or precise templates. Angles do not change with degree of magnification. Many comparators are equipped with a bezel, calibrated in degrees, which one can use to rotate the screen. Verniers on the bezels can make measurement down to 5 minutes of arc possible. One can also check contours and compare them to a master. Special probes are available to trace contours that cannot be seen, for example when a contour is hidden behind another surface. A master template may be used with a tolerance zone, and set up similarly to Figure 4.3ae below.


Figure 4.3ae: Tracer Application From figure 18-32 and 33, p 557 of Dim Metr by Dotson


Using the Optical Comparator


An example of a smaller, desk-top optical comparator is shown below in Figure 4.3af.

Figure 4.3af: Micro-Vu 500HP Optical Comparator (from http://www.microvu.com/500hp.htm)

This particular unit has a measuring volume of 6 in. x 2.5 in. x 1.25 in. and can magnify from 10X to 50X, or 100X telecentric. Various templates are available from the manufacturer, as are V-blocks/centers and vertical fixtures. Before operating, one must decide which magnification lens is to be used. As mentioned earlier, this is determined by dividing the screen size by the object size and then choosing the available lens just below the calculated answer. For instance, if the screen diameter were 12 in. and the object to be viewed were 0.5 in. wide, dividing would yield 24. If 10X, 20X, and 30X lenses were available, one could select the 20X and see the entire object on the screen. If using a higher-power magnification, one must then increment the table to view portions of the object at a time. The manual should provide instructions for changing lenses.
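A minimal sketch of the lens-selection rule just described is shown below. The available powers listed are those from the example and would differ for other comparators.

    def field_of_view(screen_diameter, magnification):
        # Largest workpiece dimension that fits on the screen at a given magnification.
        return screen_diameter / magnification

    def select_lens(screen_diameter, feature_size, available=(10, 20, 30, 50)):
        # Highest available power that still shows the whole feature on the screen.
        limit = screen_diameter / feature_size
        usable = [m for m in available if m <= limit]
        return max(usable) if usable else None

    # Worked example from the text: 12 in. screen, 0.5 in. feature, so the limit is 24X.
    print(select_lens(12.0, 0.5, available=(10, 20, 30)))   # 20
    print(field_of_view(12.0, 20))                          # 0.6 in. visible at 20X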
If the stage is to be incremented to view the object, one should set up the dial indicators provided to measure
incrementation (refer to manual). Select a screen pattern (template) or overlay chart that best fits the
measurement application (radii, angles, crosshairs, etc.). Select the proper fixturing that best supports the part
in order to view desired features. Once the part is in place, turn on the lamp and experiment with viewing on
the screen. Ensure that no undesired shading takes place. Review the manual to become familiar with features
of the machine including rotating the screen and reading the vernier to measure angles.


4.3.5 Application: Using the Optical Comparator


Materials: Micro-Vu 500HP Optical Comparator, fixturing for part
Parts: Metronics QC Quickie, plus part drawing showing features to inspect
List features to be measured on the attached part drawing and record drawing specs for each. Using the optical comparator, measure each of the features and record the actual measurements found. Answer the questions below.

Part Feature                            Specified Dimension (& Tolerance)      Actual Measurement

1. small slot lengths (do 2)            1.043 - .658 + 2(.015) = .415          ______________
2. small slot widths (do 2)             .030                                   ______________
3. large circle dia.                    R 0.25 x 2 = .50                       ______________
4. ctr. lge. circle to edge             3.500 - 3.000 = .500                   ______________
5. small circle dia.                    R .094 x 2 = .188                      ______________
6. 1 of dual-large circle dia.          R .188 x 2 = .376                      ______________

For #6, also use the glass screen reticle to match the tolerance guide for the part on the drawing.
Questions
1.) Were you able to take readings with the supplied instruments? Describe any difficulty you had.
__________________________________________________________________________________________
__________________________________________________________________________________________
__________________________________________________________________________________________
__________________________________________________________________________________________
__________________________________________________________________________________________
2.) According to your measurements, is this part within drawing specification? What power is this unit set at?
__________________________________________________________________________________________
__________________________________________________________________________________________
__________________________________________________________________________________________
__________________________________________________________________________________________
__________________________________________________________________________________________

Holder for QC Quickie part drawing


Bibliography
Dotson, Connie, Rodger Harlow, and Richard Thompson. Fundamentals of Dimensional Metrology, 4th Edition.
Thomson Delmar Learning, 2003.


4.4 Statistical Process Control (SPC)


4.4.1 Exploration
There is no exploration exercise for this module, but one should have completed module 1.4 Statistics and
Error Analysis, before proceeding with this module.

4.4.2 Dialog
Overview
The basic definition of Statistical Process Control (SPC) is the application of statistical methods to identify and control the special causes of variation in a process. SPC involves using statistical techniques to measure and
analyze the variation in processes. Most often used for manufacturing processes, the intent of SPC is to monitor
product quality and maintain processes to fixed targets. Statistical quality control refers to using statistical
techniques for measuring and improving the quality of processes and includes SPC in addition to other
techniques, such as sampling plans, experimental design, variation reduction, process capability analysis, and
process improvement plans.
SPC is used to monitor the consistency of processes used to manufacture a product as designed. It aims to get
and keep processes under control. No matter how good or bad the design, SPC can ensure that the product is
being manufactured as designed and intended. Thus, SPC will not improve a poorly designed product's
reliability, but can be used to maintain the consistency of how the product is made and, therefore, of the
manufactured product itself and its as-designed reliability. With SPC, the process is monitored through
sampling. Considering the results of the sample, adjustments are made to the process before the process is able
to produce defective parts.
A primary tool used for SPC is the control chart, a graphical representation of certain descriptive statistics for
specific quantitative measurements of the manufacturing process. These descriptive statistics are displayed in
the control chart in comparison to their "in-control" sampling distributions. The comparison detects any
unusual variation in the manufacturing process, which could indicate a problem with the process. Several
different descriptive statistics can be used in control charts and there are several different types of control charts
that can test for different causes, such as how quickly major vs. minor shifts in process means are detected.
Control charts are also used with product measurements to analyze process capability and for continuous
process improvement efforts.
The concept of process variability forms the heart of statistical process control. For example, if a basketball
player shot free throws in practice, and the player shot 100 free throws every day, the player would not get
exactly the same number of baskets each day. Some days the player would get 84 of 100, some days 67 of 100,
some days 77 of 100, and so on. All processes have this kind of variation or variability.
This process variation can be partitioned into two components. Natural process variation, frequently called
common cause or system variation, is the naturally occurring fluctuation or variation inherent in all processes
(chance, random or unknown). In the case of the basketball player, this variation would fluctuate around the
player's long-run percentage of free throws made. Special cause variation is typically caused by some problem
or extraordinary occurrence in the system (assignable). In the case of the basketball player, a hand injury might
cause the player to miss a larger than usual number of free throws on a particular day. In a production process,
it is important to know the difference between the two types because the remedies for the two are very different.
One wants to target quality improvement efforts properly and avoid wasted time, cost and effort.


SPC does not refer to a particular technique, algorithm or procedure. It is an optimization philosophy
concerned with continuous process improvements, using a collection of (statistical) tools for
data and process analysis, and then making inferences about process behavior. It is a decision making tool, and
is a key component of Total Quality initiatives. Ultimately, SPC seeks to maximize profit by:

improving product quality
improving productivity
streamlining processes
reducing waste
reducing emissions
improving customer service, etc.

A Brief History
Quality Control has been with us for a long time. It is safe to say that when manufacturing began and
competition accompanied manufacturing, consumers would compare and choose the most attractive product
(barring a monopoly of course). If manufacturer A discovered that manufacturer B's profits soared, the former
tried to improve his/her offerings, probably by improving the quality of the output, and/or lowering the price.
Improvement of quality did not necessarily stop with the product - but also included the process used for
making the product. The process was held in high esteem, as manifested by the guilds of the Middle Ages. These guilds mandated long periods of training for apprentices, and those who were aiming to become
master craftsmen had to demonstrate evidence of their ability. Such procedures were, in general, aimed at the
maintenance and improvement of the quality of the process.
It was not until the advent of the mass production of products that the reproducibility of the size or shape of a
product became a quality issue. Quality, particularly the dimensions of component parts, became a very serious
issue because no longer were the parts hand-built and individually fitted until the product worked. Now, the
mass-produced part had to function properly in every product built. Quality was obtained by inspecting each
part and passing only those that met specifications. In modern times we have professional societies,
governmental regulatory bodies such as the Food and Drug Administration, factory inspection, etc., aimed at
assuring the quality of products sold to consumers.
Statistical quality control is comparatively new. The science of statistics itself goes back only two to three
centuries. Its greatest developments have taken place during the 20th century. The earlier applications were
made in astronomy and physics, and in the biological and social sciences. It was not until the 1920s that
statistical theory began to be applied effectively to quality control as a result of the development of sampling
theory. The first to apply the newly discovered statistical methods to the problem of quality control was Dr.
Walter Shewhart of the Bell Telephone Laboratories. He issued a memorandum on May 16, 1924 that featured
a sketch of a modern control chart.
Shewhart kept improving and working on this scheme, and in 1931 he published a book on statistical quality
control titled "Economic Control of Quality of Manufactured Product". This book set the tone for subsequent
applications of statistical methods to process control. Two other Bell Labs statisticians spearheaded efforts in
applying statistical theory to sampling inspection. The work of these three pioneers constitutes much of what
nowadays comprises the theory of statistical quality and control.
In the 1950s, Dr. W. Edwards Deming introduced the concept to Japan after it had been largely ignored in the U.S. The Japanese adopted the concept and began incorporating SPC as a part of the Total Quality Management (TQM) philosophy. Toyota became a poster child for incorporating SPC into its overall manufacturing and quality system. By the early 1980s, Japan's dramatic improvements in quality and cost of product caught the attention of U.S. companies. Ford Motor Company adopted the philosophy and many others followed.


A Review of Statistical Concepts and Causes of Variation / Randomness


When we conduct SPC, we are using basic concepts from probability and statistics. The purpose of this module
is not to teach statistics concepts. We want the student to be able to use the basic techniques, but to fully
understand why and how they work, one should take formalized calculus, and probability and statistics courses.
In many cases, technicians can get by with information that is presented here, and should be able to use the
techniques. Trust that there is a sound and proven mathematical basis for the concepts used here.
If we were to look at the entire population of parts produced and test each and every one for attributes, we
would have extremely accurate data. This would, however, be extremely time consuming and expensive. We
must rely on certain statistical concepts to provide us with reliable answers based on data from a sampling (a
smaller portion) of the entire population. This will be less costly, and if our sample is truly random, we should
get some reliable data. Also, let's look at some possible issues that would introduce randomness into produced part sizes.
Let's say that part XYZ is a special connecting pin with a main shaft diameter. If we looked at the total population of part XYZ ever produced, we could find out a few things. Let's say we are only looking at the one critical attribute of part XYZ: the average diameter of the main shaft. If there were some variation in the process that produced the diameter, and we were going to measure to the nearest 0.001, we might see some variation from part to part. Perhaps the part is turned in a lathe. There are several variables which would cause the diameter to vary from part to part. The ball screws that control each axis of movement would have some degree of clearance between meshing parts. Also, the servo motors and positional scales that tell each axis where to be could have a small degree of variation, and natural voltage fluctuations could cause some variation in each electrical signal. The axes that move the cutting tool into position to turn the part could therefore arrive at a slightly different position each time, causing one part to have a slightly different diameter than the previous part, or the next. The cutting tool also wears over the course of time. As a slight bit of material wears off the tool, and if the same tool is kept in the tool holder, one would expect the parts to get larger very slowly over time, unless an operator measured parts periodically and made an adjustment to compensate for the tool wear. The quality and amount of cooling cutting fluid used, and how much is directed at the cutting zone, will affect the rate and degree of tool wear.
Machine parts expand and contract as the operating temperature changes. Metal expands when heated and
contracts when cooled. Different materials have different rates of change of expansion/contraction over
temperature ranges. If a machine was turned off over night, the machine parts would be at their smallest the
first thing in the morning. Clearances would then be at their greatest. If the temperature in the building were
not controlled, that would have an effect as well. They would be smaller in colder winter months than in the
hotter summer months. Temperatures vary each day. Once the machine was started in the morning, slowly but
surely, the machine and its parts would heat up, so clearances between parts would slowly reduce. The amount
and quality of oil used could add variation to the heat during operation. Old or low oil could cause the parts to
heat much higher than the machine manufacturer would recommend. A new machine might have all parts to
specified sizes, and there would be a degree of randomness to each part supplied on the machine. As the
machine aged, those parts would wear based on factors mentioned already. Whether or not maintenance was
performed, or worn parts replaced, would also have an effect on clearances. Your automobile acts in similar
ways. All of these factors, and more, could cause variation from part to part produced.
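To put a rough number on the temperature effect described above, the sketch below uses the standard linear-expansion relation (delta-L = alpha x L x delta-T). The coefficient used is an assumed typical value for steel, not a figure from the text.

    def thermal_growth(length_in, delta_t_f, alpha_per_f=6.5e-6):
        # Approximate linear expansion: delta_L = alpha * L * delta_T.
        # alpha_per_f is an assumed typical value for steel (about 6.5 x 10^-6 per deg F).
        return alpha_per_f * length_in * delta_t_f

    # Illustrative case: a 10 in. steel machine component warming 20 deg F during a shift.
    print(thermal_growth(10.0, 20.0))   # about 0.0013 in. of growth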
For part XYZ, the design engineer who designed the part, and the assembly it will be used in later, will have specified some acceptable range within which the diameter can fall and still be considered a good part. Let's say that, for cost and design reasons, we are only going to turn the part to size; there will be no subsequent operations like grinding or polishing, which could correct or refine the size but would add cost to producing the part. The engineer has assigned an acceptable tolerance of +/- 0.003 to the diameter of the part. Testing and experience say that the part will work acceptably in the final assembly over this size range. If the
ideal or nominal part diameter is 0.140, then acceptable parts could be anywhere from 0.137 to 0.143 in
diameter and be deemed a good part. The two previous numbers would be considered the lower and upper
specification (or spec) limits, respectively.
In accordance with past testing, the closer one gets to 0.140, the better the part will perform in the assembly. If
parts are over 0.143, they will not fit in the assembly. This will cause a shutdown of the machine that inserts
the part, or a person will have to spend extra time to reject the part and get a new one, plus the cost of the
defective part produced. A part under 0.137 will fit into the assembly, but a malfunction will occur down the
road that will cause the ultimate assembly to be repaired and parts replaced. This will add to overall cost to the
company that produces the machine, and make some customer very unhappy. The company wants to avoid
these situations at all costs.
One can now have an idea of randomness in a process, and gain a sense of what variation, or variance, means. If one were to look at all parts XYZ ever produced, one could determine the historical mean diameter, or arithmetical average: take all the parts' diameters, add them up, and divide by the total number produced. The mean is represented by the small Greek letter mu, or µ. Parts of this nature have been found to fit a distribution known as the normal distribution, seen below in Figure 4.4a.
Figure 4.4a: A Normal Distribution (from http://www.webenet.com/bellcurve.htm)

It is also known as a bell-shaped or bell curve. In theory, if the population is truly random, the mean value will lie at the center of the X-axis. There will be an equal number of parts with a diameter greater than the mean and less than the mean. There will also be more parts produced with a mean or near-mean diameter than with any other diameter. Note that in theory, the two tails of the curve never actually touch the X-axis. The number of parts or cases is represented by the Y-axis. Next, we must address what is meant by the term standard deviation from the mean. The standard deviation is represented by the small Greek letter sigma, or σ. Standard deviation tells how spread out numbers are from the average; it is calculated by taking the square root of the arithmetic average of the squares of the deviations from the mean in a frequency distribution, or the square root of the variance. The bell curve above could take various shapes: it could be taller and narrower, when the standard deviation is small, or shorter and wider, when the standard deviation is greater. To make the definition above a little clearer, we will look at a population below and do the calculations.
Let's say that a total of twenty XYZ parts were ever produced, and the average diameter of each had been measured. This makes n, the total population, equal to 20. Each diameter was measured to within 0.0001.


The standard deviation formula for a population, as stated above, looks like this in mathematical format. It can
be presented slightly differently in other texts.

=
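A minimal sketch of that calculation is shown below; the diameter values used are hypothetical and are not the Figure 4.4b data.

    import math

    def population_stats(values):
        # Mean (mu) and population standard deviation (sigma) of a complete population.
        n = len(values)
        mu = sum(values) / n
        sigma = math.sqrt(sum((x - mu) ** 2 for x in values) / n)
        return mu, sigma

    # Hypothetical diameters (not the Figure 4.4b data), in inches.
    diameters = [0.1395, 0.1402, 0.1408, 0.1411, 0.1399, 0.1405]
    mu, sigma = population_stats(diameters)
    print(round(mu, 4), round(sigma, 4))   # roughly 0.1403 and 0.0005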
The raw data and necessary calculations are presented below in Figure 4.4b.

Part # XYZ Diameter Data

Part #    Avg Dia     Deviation from µ       Variance
                      (Avg Dia - µ)          ((Dev from µ)²)
1         0.1372      -0.0028                0.00000784
2         0.1381      -0.0019                0.00000361
3         0.1378      -0.0022                0.00000484
4         0.1393      -0.0007                0.00000049
5         0.1395      -0.0005                0.00000025
6         0.1411       0.0011                0.00000121
7         0.1408       0.0008                0.00000064
8         0.1401       0.0001                0.00000001
9         0.1392      -0.0008                0.00000064
10        0.1405       0.0005                0.00000025
11        0.1397      -0.0003                0.00000009
12        0.1398      -0.0002                0.00000004
13        0.1461       0.0061                0.00003721
14        0.1415       0.0015                0.00000225
15        0.1427       0.0027                0.00000729
16        0.1417       0.0017                0.00000289
17        0.1434       0.0034                0.00001156
18        0.1425       0.0025                0.00000625
19        0.1430       0.0030                0.00000900
20        0.1438       0.0038                0.00001444

n = 20    Sum = 2.8178                       Sum = 0.00011080

Mean Dia (µ) = 2.8178 / 20 = 0.1409
Avg. Variance = 0.00011080 / 20 = 0.00000554
Std. Deviation = sq. root (0.00000554) = 0.0024

Figure 4.4b: Table of Data for Standard Deviation Calculation


From Excel File Prime Module 4.4


One can see that the mean, 0.1409, is just slightly higher than the nominal spec of 0.140. The standard deviation in this case has been found to be 0.0024. Under the theory of the normal distribution, and looking back at Figure 4.4a, a little over 34% of the population should fall between the mean and +1 standard deviation from the mean, with another 34%+ falling between the mean and -1 standard deviation from the mean. Thus about 68.26% of the population should be found within +/- 1σ of the mean. In our example, +/- 1σ covers any part from 0.1385 (0.1409 - 0.0024) to 0.1433 (0.1409 + 0.0024), and 14 of the 20 parts, or 70%, actually are in that range. Under normal circumstances, and again looking at Figure 4.4a, about 95.4% of the population should be within +/- 2σ of the mean, or in our case, from 0.1361 to 0.1457. In our example, 19 of 20, or 95%, are in that range. The amount of the population that should theoretically fall within +/- 3σ of the mean is 99.73%, or from 0.1338 to 0.1481. All parts in our example fall within that range. There happen to be three parts that exceed the upper spec limit of 0.143; we would eventually like to know why and what we could do to avoid it, but that isn't the purpose right now.
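The counts quoted above can be checked with a short sketch like the one below, which uses the twenty diameters from Figure 4.4b together with the rounded mean (0.1409) and standard deviation (0.0024) from the text.

    xyz_diameters = [0.1372, 0.1381, 0.1378, 0.1393, 0.1395, 0.1411, 0.1408, 0.1401,
                     0.1392, 0.1405, 0.1397, 0.1398, 0.1461, 0.1415, 0.1427, 0.1417,
                     0.1434, 0.1425, 0.1430, 0.1438]

    def fraction_within(values, mean, sigma, k):
        # Share of measurements lying within mean +/- k standard deviations.
        low, high = mean - k * sigma, mean + k * sigma
        return sum(1 for x in values if low <= x <= high) / len(values)

    print(fraction_within(xyz_diameters, 0.1409, 0.0024, 1))   # 0.70 (14 of 20)
    print(fraction_within(xyz_diameters, 0.1409, 0.0024, 2))   # 0.95 (19 of 20)
    print(fraction_within(xyz_diameters, 0.1409, 0.0024, 3))   # 1.0  (all 20)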
One other statistical term is the range, or the difference between high and low value. In our example, that range
would be the high of 0.1461 minus the low of 0.1372, or 0.0089. The range will be used later, and is
sometimes substituted for an unknown population standard deviation when all one has is a sample of data.

Six Sigma Quality


If we were producing 1 million parts, and wanted to achieve levels of quality that companies aim at achieving
under the Six Sigma quality system, we would want no more than 3.4 defects per million parts. The
traditional quality paradigm defined a process as capable if the process's natural spread, plus and minus three
sigma, was less than the engineering tolerance. Under the assumption of normality, this translates to a process
yield of 99.73 percent, as mentioned earlier. A later refinement considered the process location as well as its
spread (Cpk, which we will address later) and tightened the minimum acceptable so that the process was at least
four sigma from the nearest engineering requirement.
Motorola was one of the first major U.S. companies to adopt the TQM statistical techniques in its quality
programs. In 1988, Motorola Corp. became one of the first companies to receive the Malcolm Baldrige
National Quality Award. The award strives to identify those excellent firms that are worthy role models for
other businesses. One of Motorola's innovations that attracted a great deal of attention was its Six Sigma
program. Six Sigma is basically a process quality goal. As such, it falls into the category of a process
capability (Cp, which we will address later) technique.
Motorola's Six Sigma asks that processes operate such that the nearest engineering requirement is at least plus
or minus six sigma from the process mean. Their program also applies to attribute data. This is accomplished
by converting the Six Sigma requirement to equivalent conformance levels. One of Motorola's most significant
contributions was to change the discussion of quality from one where quality levels were measured in
percentages (parts per hundred) to a discussion of parts per million or even parts per billion. Motorola correctly
pointed out that modern technology was so complex that old ideas about acceptable quality levels were no
longer acceptable.
One puzzling aspect of the "official" Six Sigma literature is that it states that a process operating at Six Sigma
levels will produce 3.4 parts-per-million non-conformances. However, if a normal distribution table is
consulted (very few go out to six sigma), one finds that the expected non-conformances are 0.002 parts per
million (two parts per billion). The difference occurs because Motorola presumes that the process mean can
drift 1.5 sigma in either direction. The area of a normal distribution beyond 4.5 sigma from the mean is indeed
3.4 parts per million. Because control charts (which we will also get into later) will easily detect any process
shift of this magnitude in a single sample, the 3.4 parts per million represents a very conservative upper bound
on the nonconformance rate.
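The two figures quoted above can be reproduced from the normal distribution itself. The sketch below uses the complementary error function; the factor of two in the first line reflects counting both tails of a centered process.

    import math

    def upper_tail(z):
        # P(X > mean + z * sigma) for a normal distribution, via the complementary error function.
        return 0.5 * math.erfc(z / math.sqrt(2.0))

    print(upper_tail(6.0) * 2 * 1e9)   # about 2 per billion: a centered +/- 6 sigma process
    print(upper_tail(4.5) * 1e6)       # about 3.4 per million: after a 1.5 sigma mean shift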


Back to Basic Statistics - Sampling


The normal distribution, mean, and standard deviation also apply to samples. Samples are just a portion of the total population and, for cost reasons, are what companies use to infer information about the entire population of parts they produce. The mean of a sample has a different symbol, but is calculated just like a population mean, µ. The sample mean is designated by the symbol X̄, called "x-bar." The sample standard deviation is designated by the letter s (unlike the population σ), and the formula is slightly different:

s = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}(x_i - \bar{x})^2}

It uses the term n-1 in the denominator, as opposed to the n used for the entire population. Since n-1 < n, sample standard deviations s will be slightly larger than population standard deviations σ computed from the same data.
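A quick sketch of the difference is shown below, using Python's statistics module; the five diameters are hypothetical.

    import statistics

    sample = [0.1402, 0.1395, 0.1411, 0.1408, 0.1399]   # a hypothetical subgroup of five parts

    print(statistics.mean(sample))     # x-bar, the sample mean
    print(statistics.stdev(sample))    # s: divides by n - 1
    print(statistics.pstdev(sample))   # sigma: divides by n, so it is never larger than s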

Determining Sample Sizes for Statistical Study


When one begins to look at any process, there are a few important considerations. The first is that the process
must be allowed to operate naturally. That means that operators must not make any adjustments to the process
after the initial set-up is made. This allows data from the operation to contain the randomly occurring variations
from part to part that will occur without intervention.
Another important consideration is to determine the proper sample size. If the sample size is too small, it may not be statistically valid. Although we will not go into detail on statistical theory here, the size of the sample relative to the population determines the confidence level with which the sample represents that population. If the sample is too small, it could be misleading; if it is too large, it will certainly represent the population well, but may be too costly and time-consuming. There are some rather complex and accurate methods to determine the desired minimum sample size, but we will not go into that degree of detail here.
It is generally considered desirable to have at least 100 individual samples for an initial study, and those should
perhaps be from about 25 subgroups. As will be discussed in the next section, sample subgroups for given time
periods work best if 4-6 samples for each are taken.
There are Military Standards that deal specifically with proper sample sizes for inspection purposes. They are
quality-related, and the better known are MIL-Q-9858 and MIL-STD-105. ISO and ANSI have similar industry
standards.

Types of Measurements and Control Charts


We can now take a look at the types of Process Control measurements and Chart types used to analyze
processes. Control charts are a vital part of any process control system, and are the graphical comparisons of
measured characteristics against computed control limits. One looks for variation over time. The control limits
are calculated based on the laws of probability, and one looks to take action when trends develop or elements
fall outside the acceptable limits. When attempting to control a process, generally two types of errors are made:
Type I errors occur when one considers a process to be unstable when in fact it is stable, and Type II errors
occur when one considers a process to be stable when in fact it is unstable.


X̄ and R Charts (Variables Control Charts)

These are the most common types of control charts, are also known as variables control charts, and are used
primarily for machine-dominated processes. The two charts are typically produced at the same time, and are
used together for analysis. The X̄ chart is a continuous plot of subgroup averages, while the R chart is a continuous plot of subgroup ranges. A subgroup can consist of 2 to 20 samples (but as previously
mentioned, 4-6 is considered good). These charts are used to establish the operating level and dispersion of a
process without having to measure every part coming off production. These charts serve to avoid the Type I
and II errors mentioned above.
One of the first steps to be taken is to decide what characteristics are to be charted. One does not wish to chart
every dimension for a part for practical reasons. Some features of a part may be experiencing a high defect rate,
and this may well be a good candidate. Some dimensions or characteristics are critical to the function of the
part where others are not. Many part drawings will actually call out critical features. Others will not. If none
are called out, a committee of process people and design people should be formed to decide and highlight the
features that are critical to a part. ISO 9000 guidelines actually specify that critical features should in fact be
highlighted on all part drawings. Once the critical features are chosen, an initial run must be conducted in order
to establish chart parameters.
In order for the charts to be meaningful, rational sampling must be used. Rational samples are groups of
measurements where variation is attributable to only one system of causes. One does not want only common
cause variations in the sample, and at the same time, one does not want contamination by additional special
cause variations. Look at one machine separate from any others. For a machining sample run, it would be wise
to begin after a fresh cutting tool is installed, and end when the tool is known to be worn and normally changed
out. All normal operating parameters should be in the group. If the machine is usually started cold at the
beginning of a shift, that should be part of the sample. Condition of the machine and cutting fluid should be
considered in normal condition. No special operator interventions (adjustments) should be allowed during the
test run. Allow the process to run naturally and get the normal variations. Unusually cold or hot ambient
conditions should be avoided. In our example of part XYZ from earlier, remember that the diameter was
allowed to be 0.140 +/- 0.003 and be in spec. If one knew that typical tool wear was about 0.005, one might
choose to set the machine up to make the first part diameter 0.1375, knowing that as the tool wore, the part
diameter would tend to grow to about 0.1425 when a tool change was required. We would like to try to
produce good parts even during the study.
A randomly-selected sample size of about 100 parts would be considered quite reasonable to establish initial
chart parameters. If 320 parts might be made from beginning to end of a day consisting of 16 hours, and a tool
change is required due to normal tool wear, one might choose to gather a total of 96 parts (pretty close to the
100 recommended), which would be 6 per hour. One should group parts produced by operating hour, and then
pick 6 consecutive samples from the middle of each time grouping. Staying with a rule should ensure
randomness in the overall initial sample, as well as for future ongoing samples. For future ongoing sampling
and charting, one might select an hourly sample of 5 consecutive parts for charting.
The next step is to construct a basic chart. An example of an X̄ chart is shown below in Figure 4.4c.


Figure 4.4c: An X̄ Chart


From http://www.statlets.com/x-bar_and_r_charts1.htm
Our figure doesn't relate to part XYZ per se, but it is an example. The item being measured is on the Y-axis. X̿, or the grand average (discussed later), should be at the central point of the Y-axis. There should be evenly-spaced scale divisions above and below this point. The increments should be wide enough to show significant changes in the data, and the scale should extend above and below the central point to allow for any expected variation in size. As a rule of thumb, the scale should extend 20% beyond any element that would be put on the graph, such as the control limits that we will cover shortly. The X-axis should represent time: hours, days or weeks, for example. In the figure, a consecutively numbered subgroup # is shown, but it could easily be the hour of the day. The scale divisions would equate to the frequency of sampling, start at the left, and have enough room for a collection period, perhaps a work shift. If continued charting demonstrated good process stability, the collection frequency could change. In the beginning, let's say collection was hourly. In the future, we could collect data every 2 hours if we wished.
Used hand-in-hand with the X̄ chart is the R chart. An example is shown below in Figure 4.4d.

Figure 4.4d: An R Chart


From http://www.statlets.com/x-bar_and_r_charts1.htm

The R chart shows the ranges of data for each subgroup. R is just the symbol for the subgroup range, and is the difference between the highest and lowest value in the subgroup. The R chart is placed just below the X̄ chart and has exactly the same corresponding time intervals and spacing. For the Y-axis, scaling should also be similar in spacing, with zero at the intersection of the X-axis, the expected range at the midpoint up the Y-axis, and the scale extending about 40% beyond any expected element. The midpoint value on the Y-axis is known as R̄ (R bar).

One thing to keep in mind is that the X̿ and other statistics calculated from the initial sample will not necessarily remain the same forever. They will be used for some time, but could be recalculated at some point
in the future if another control study is deemed necessary. Some process variables could change which would
prompt a reevaluation.

Along with charts, a data table must be kept for doing calculations. Today, there are many software packages
available for doing this, and they will perform the calculations automatically. One could also do it with the
aforementioned table and a calculator. Figure 4.4e below shows an example for a data table.

Figure 4.4e: Control Chart Data Table Example


From http://www.carillontech.com/Control_Chart/XBar/Bar&RCalc.htm
From the initial sampling of data, we have a multi-step process to be followed. Step #1 is to calculate averages
every hour, or all of the X̄ values. In the figure above, samples were retrieved hourly, and 5 samples were gathered each time. Each group was summed, and then divided by the hourly sample size, n = 5. For each subgroup, a range was also calculated as step #2, where each subgroup range is simply the difference between the largest and smallest value in each subgroup. Step #3 is to calculate the grand average, or X̿. It will often appear on the data table, but is not in the figure shown. In the figure, it would simply be the sum of the 8 AVERAGES divided by 8 (8 being the number of subgroups, or N). This is the arithmetic mean of all the sample averages and is an estimate of the process or population mean μ. It becomes the centerline of the X̄ control chart.
Step #4 is to calculate the average of the subgroup ranges, or R̄. It is the arithmetic average of the subgroup ranges (in this case, the sum of the 8 ranges divided by the number of subgroups, N = 8) and becomes the centerline of the R control chart.
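A minimal sketch of steps #1 through #4, assuming hypothetical hourly subgroups of five measurements each (the numbers are invented for illustration and are not taken from the data table in Figure 4.4e):

```python
# Steps 1-4: subgroup averages, subgroup ranges, grand average, and R-bar.
subgroups = [
    [0.139, 0.140, 0.141, 0.138, 0.140],   # hour 1 (hypothetical values)
    [0.140, 0.141, 0.139, 0.140, 0.142],   # hour 2
    [0.141, 0.142, 0.140, 0.141, 0.139],   # hour 3
]

x_bars = [sum(g) / len(g) for g in subgroups]      # step 1: subgroup averages (X-bars)
ranges = [max(g) - min(g) for g in subgroups]      # step 2: subgroup ranges
grand_average = sum(x_bars) / len(x_bars)          # step 3: X-double-bar
r_bar = sum(ranges) / len(ranges)                  # step 4: R-bar

print("subgroup averages:", [round(x, 4) for x in x_bars])
print("subgroup ranges:  ", [round(r, 4) for r in ranges])
print(f"grand average = {grand_average:.4f}, R-bar = {r_bar:.4f}")
```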
Step #5 is to calculate the upper and lower control limits (UCL and LCL) for the X̄ chart. These control limits are statistically based, and are a measure of how capable a process actually is. They are based on the previously-mentioned +/- 3 standard deviations (+/- 3σ) from the mean. Basically, 99.7% of occurrences should lie within these limits. Anything outside these limits would be very rare, and should therefore be attributable to some special-cause event. The control limits differ from the specification limits found on most part drawings, and specification limits are never used in control charting. Design engineers create spec limits, which are what the designers want the process to be capable of. If for some reason the control limits would exceed the specification limits, we are told something very important. We had best find some other way of producing the parts (or make some other decision)!
First we must calculate the UCL for the X̄ chart. Without going into detailed statistical theory here, there are some factors found in the Figure 4.4f table below which can be used in the following formulas; they are equation shortcuts based on the theory of normal probability distributions. They also only apply when n is 10 or less. There are other formulas to be used when n > 10.

UCL(X̄) = X̿ + (A2 x R̄)
LCL(X̄) = X̿ - (A2 x R̄)
(A2 values are based on the subgroup size n selected)

Next we do the same for the R chart.

UCL(R) = D4 x R̄
LCL(R) = D3 x R̄
(D3 and D4 values are based on the subgroup size n selected)

  n      A2       D3       D4
  2     1.880      0      3.267
  3     1.023      0      2.575
  4     0.729      0      2.282
  5     0.577      0      2.115
  6     0.483      0      2.004
  7     0.419    0.076    1.924
  8     0.373    0.136    1.864
  9     0.337    0.184    1.816
 10     0.308    0.223    1.777

Figure 4.4f: Factors for Calculating Limits on X̄ and R Charts


From http://www.itl.nist.gov/div898/handbook/pmc/section3/pmc321.htm
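Continuing the earlier sketch, step #5 can be expressed directly from the table: look up A2, D3 and D4 for the subgroup size and apply the formulas. The values below simply repeat the Figure 4.4f constants; the grand average and R-bar used in the example are hypothetical.

```python
# Step 5: control limits for the X-bar and R charts using the Figure 4.4f factors.
FACTORS = {  # n: (A2, D3, D4)
    2: (1.880, 0.0, 3.267), 3: (1.023, 0.0, 2.575), 4: (0.729, 0.0, 2.282),
    5: (0.577, 0.0, 2.115), 6: (0.483, 0.0, 2.004), 7: (0.419, 0.076, 1.924),
    8: (0.373, 0.136, 1.864), 9: (0.337, 0.184, 1.816), 10: (0.308, 0.223, 1.777),
}

def xbar_r_limits(grand_average, r_bar, n):
    """Return (UCL_xbar, LCL_xbar, UCL_R, LCL_R) for subgroup size n (2-10)."""
    a2, d3, d4 = FACTORS[n]
    return (grand_average + a2 * r_bar, grand_average - a2 * r_bar,
            d4 * r_bar, d3 * r_bar)

# Hypothetical example: subgroups of 5, grand average 0.1400, R-bar 0.003.
print(xbar_r_limits(0.1400, 0.003, 5))
```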


Step #6 is to actually construct the initial charts. When constructing the charts, follow these guidelines:

- Denote the centerline of each chart (X̿ and R̄) with a heavy solid line.
- Plot the X̄ and R values as solid dots. Connecting the dots will assist in seeing patterns.
- Denote the UCL and LCL on each chart with a heavy dashed line.
- Write the specific numerical values for the UCL and LCL, as well as X̿ and R̄, on each chart.
- Circle any points that indicate the presence of special causes, such as points falling outside the control limits.

Step #7 is to identify any special causes acting on the initial data. If any points lie outside the control limits on
the original charts, the special causes must be researched, identified, eliminated, and documented as step #8.
More data can be added, but once these special causes have been eliminated, the offending data should be
removed, and all parameters should be recalculated and charted. Points outside the limits would cause the
process to be considered out of control. Only once all data is again within the normal statistical limits would
the process again be considered in control or stable. Continued examination of data patterns help to indicate
further opportunities for quality and productivity improvements. When examining charts, it is recommended to
begin with the R chart. Get it under statistical control first. As shown in the above UCL/LCL equations, the limits of the X̄ chart depend on the magnitude of the common cause variation of the process, as measured by R̄. Any points on the R chart that are initially out of control (showing special causes present) will inflate the limits on the X̄ chart.


4.4.3 Application: Creating Initial Data Table plus X̄ and R Control Charts
Materials: Calculator, pencil, straight edge, and attached table / rough chart
You will take some data from an initial SPC run for the main diameter of previously-mentioned part XYZ. The
run was conducted for two 8-hour shifts one day and the first shift the next day. Five (5) consecutive parts were
gathered on the 1/2-hour over a total run period of 24 hours. This gave you a total sample size of 120 parts.
The company has supplied you with their standard QA Dept. SPC data form. You must calculate the necessary
information on the data table, and then use the data with guidelines for chart construction in order to construct
the two charts. From the previous section's instructions, proceed through step #6 and think about step #7.

Questions
1.) Do you think the initial data is showing a process that is in control? Why or why not? Are there any
special causes showing in the data?
__________________________________________________________________________________________
__________________________________________________________________________________________
__________________________________________________________________________________________
__________________________________________________________________________________________
__________________________________________________________________________________________
The initial partially-filled data table and blank chart sheet can be found on the next 2 pages (instructor versions
follow).


Space saved for 4.4.3 Data Table worksheet from Excel file Prime Module 4.4


Space saved for Blank Ctrl Charts worksheet from Excel file Prime Module 4.4


4.4.4 Application: Creating Revised Data Table plus X̄ and R Control Charts
Materials: Calculator, pencil, straight edge, completed Application 4.4.3, and attached table / rough chart
After analysis of the initial charts, both the X̄ chart and the R chart were out of control. The R chart was analyzed first. The first data sample was beyond the UCL for range, and sample 17 was right at the UCL. After
investigation, 2 things were found. Sample one was begun with a fresh tool in the holder and the operator
targeted about 0.001 above the lower spec limit knowing the part would grow over time. The run was started
on a cold machine at the beginning of the shift after having been down all night. Production works shift 1 and
2, but maintenance works three shifts. The third, graveyard shift is used for doing preventive maintenance to
the machines and some special projects that must be done when no production is going on. Although 2 samples
were actually below the lower spec limit (which has nothing to do with this SPC study), many other parts
produced during the first operating hour were found to be bad. The range for the first hour was unduly high. It
was found that a cold machine produced rather erratically during the first operating hour. After close study, it
was determined that it took about 45 minutes for the machine to steady during decent weather (this was the end
of May). It may take longer in winter, as the building is temperature-controlled only to a degree.
A decision was made to have maintenance turn on the machines 1 hour before the start of first shift in
order to warm up before producing. They would try 1-1/2 hours early on days below 40 degrees F. It was
also found that the operator made a tool change after the 14th sample group was measured. A few from the
sample exceeded the upper spec limit. Tool changes and/or offset adjustments will be noted on future
charts. Based on this part's production rate, it has been found that one can expect about 13-14 run hours from a fresh tool. About 0.005 will wear from the surface in that time. It has been decided to limit wear in the future to 12 hours for this finishing operation. The tooling insert can then be used in another roughing operation for an additional number of hours where the dimension is not important. The operator is fully loaded with manual duties at this work station (part inspection, machine load/unload and deburring), so it has been decided to limit offset adjustments and to avoid any machine downtime beyond the load/unload and necessary
tool changes. The machine has built-in timing mechanisms and software for tracking total cut time for any
fresh tool, and an alarm goes off to indicate when the time is up and tool change is necessary. There were no
problems statistically with the next group after the tool change, as the machine was warm. Group #18 was right
at the UCL on the R chart. This was again just after the beginning of the day shift, but with a partially worn
cutting tool. The large range should be corrected by the procedure mentioned above.
The out-of-control data from the R chart has been eliminated from the initial study. You must calculate the
revised information on the data table, and then use the data with guidelines for revised chart construction. From
the previous section's instructions, these are steps #6 and #7.

Questions
1.) Is the revised data showing a process that is in control? Why or why not?
__________________________________________________________________________________________
__________________________________________________________________________________________
__________________________________________________________________________________________
2.) How do you explain the difference in the calculated UCL and LCL for the X̄ chart?
__________________________________________________________________________________________
__________________________________________________________________________________________
The revised partially-filled data table and blank chart sheet can be found on the next 2 pages (instructor versions
follow).


Space saved for Revised Data worksheet from Excel file Prime Module 4.4


Space saved for Blank Ctrl Charts worksheet from Excel file Prime Module 4.4


4.4.5 Dialog
Chart Interpretation and Trend Analysis
An example of a pair of computer-generated charts that show a process which is in control can be seen below
in Figure 4.4g. No points exceed the control limits, the points are approximately normally distributed about the
centerline on the upper X chart, there is no evidence of trends or recurring cycles, and the points are quite
random over time (i.e. no runs of consecutive plotted points above or below the line).

Figure 4.4g: In-Control Process Chart Example


From http://www.adeptscience.co.uk/products/qands/analyst/appnotes/labmeth.htm

If just one subgroup average or range is outside the control limits, the process is deemed out of control. One
must gain technical knowledge of the process in order to investigate and make changes to it.
The following are other trends of out of control situations to look for and beware of:

- Too many points near the limits can indicate process problems about to happen (an out-of-control situation in the near future). Keep in mind that each control limit is located 3 standard deviations from the mean. If one were to draw another line 1/3 of the way from the mean toward the limits and yet another 2/3 of the way toward the limits, these would represent 1 standard deviation and 2 standard deviations from the mean. From the earlier discussion of a normal distribution, one would expect that about 68% of the points should lie within +/- 1 standard deviation and about 95% within +/- 2 standard deviations. The trend of too many points near the limits could indicate changes in raw material, charting more than one process or machine on a single chart, improper sampling techniques, or data manipulation by the operator. Finding 8 or more points outside the zone from the mean to one standard deviation, on either side of the centerline, would indicate non-random behavior (generally for the X̄ chart only).
- Too many points near the centerline (from the mean out to +/- 1 standard deviation) can indicate over-control (too many time-consuming and shutdown-causing operator interventions) or improper sampling methods / manipulation of data (cheating). The probability of 15 or more points plotted in a row in this zone on the X̄ chart is almost zero. Operator manipulation or falsification of data, improper sampling techniques, or a decrease in process variability that wasn't accounted for when establishing the original control limits could be the problem.
- Trends of data up or down (especially 6 or more consecutive increases or decreases), or recurring cycles, could indicate a drift or cyclical change with respect to the mean or range. For the machining operation on an outside diameter mentioned earlier, one would actually expect a steady upward trend in the part diameter on the X̄ chart due to tool wear. Eventually, one would have to make a tool offset adjustment or tool change to correct it (but the adjustment should be noted). In other types of operations, it could mean deterioration of a solution used in the process. If the trend is on the R chart only, it could mean a change in the homogeneity of the raw material (up indicates worse, down indicates better).
- If cycles of data move up on some days and down on others on the X̄ chart, it could be due to ambient temperature or humidity, or rotation of operators. Cycles on the R chart could be due to maintenance schedules or operator fatigue.
- If a sample average falls outside the limits, it is likely that all pieces for the time period are suspect. It could indicate a change in incoming material, the process, or some other factor. Logs are good to keep to note materials or events.
- If a sample range falls outside the limits, it indicates that the uniformity of the process has changed. Again, it could be due to man, machine or materials.
- If one entry of a sample subgroup falls outside the limits, generally it will have a cause system different from the others in the subgroup. One must look for unique events which could have caused this occurrence.
- One must look for signs of non-random behavior, and these will take the form of "runs." They could be due to changes in the machine, operator, material or set-up. Some signs are: (a) 2 of 3 successive points at 2 standard deviations from the mean or beyond (generally applies to the X̄ chart only), (b) 4 of 5 successive points at 1 standard deviation from the mean or beyond (generally applies to the X̄ chart only), (c) 8 successive points on one side of the centerline (can apply to either chart type).
- Fourteen or more consecutive points oscillating up and down would indicate non-randomness. There should be a cause for this pattern.

If any of these trends or signs of non-randomness occur in the initial chart, one must analyze the causes and
rerun the study.
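A few of the simpler out-of-control signals above lend themselves to automation. The sketch below checks two of them (points beyond the control limits, and 8 successive points on one side of the centerline) against a hypothetical list of plotted values; it is only an illustration of the idea, not a complete rule set.

```python
def beyond_limits(points, ucl, lcl):
    """Indices of points falling outside the control limits."""
    return [i for i, x in enumerate(points) if x > ucl or x < lcl]

def run_of_eight(points, centerline):
    """True if any 8 successive points fall on the same side of the centerline."""
    run = 0
    last_side = 0
    for x in points:
        side = 1 if x > centerline else (-1 if x < centerline else 0)
        run = run + 1 if side == last_side and side != 0 else (1 if side != 0 else 0)
        last_side = side
        if run >= 8:
            return True
    return False

# Hypothetical subgroup averages with centerline 0.140 and limits 0.1375 / 0.1425.
data = [0.139, 0.141, 0.1408, 0.1412, 0.1418, 0.1421, 0.1423, 0.1419, 0.1426, 0.1411]
print(beyond_limits(data, ucl=0.1425, lcl=0.1375))   # -> [8]
print(run_of_eight(data, centerline=0.140))          # -> True (last 9 points sit above)
```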

Identifying Assignable Causes


It is recommended to analyze the R chart first. It is more sensitive to changes in uniformity. Bad parts produced by the process will greatly affect the R chart. Ideally, one wants to find means to get points on the R chart lower, which translates to process consistency and reducing or eliminating assignable causes of variation. If the R chart is unstable or out of control, the X̄ chart will be very misleading. When the R chart is stable and under control, focus on the X̄ chart. When actions taken cause both charts to exhibit randomness, lack of trends, and points consistently within the limits, the process can be deemed under control and stable. That is the primary goal. One wishes to continue monitoring to ensure that the process stays that way. One may also want to run a larger sample and recalculate the means and control limits for a process which exhibits control over a long period. This will reflect the present conditions as much as possible.
One final note is that one can also do a control chart for individuals (an X chart). It is based on individual observations
rather than subgroup averages, and is useful when one can only obtain one observation per lot or batch of
material, and there may be long time lapses between those observations. We will not detail it here as it is not
that common. Reference other sources to find out more.


Looking at the revised charts from Application 4.4.4, the UCL and LCL on the X̄ chart narrowed slightly because the eliminated special-cause data reduced the value of R̄ in the formula. That is a reduction in the sample standard deviation value, and hence it reduced the +/- 3σ spread slightly. In looking at the revised charts, and following the trend information from above, a few things are evident in this process.

The R chart shows no subgroup ranges approaching the UCL or LCL. The data is mostly within 1 standard deviation of the mean, which is a heavier concentration near the centerline than would be expected statistically. Also, as one might expect with tool wear, data on the X̄ chart trended upwards until a tool change or offset adjustment took place. One would not see this in many other types of operations, but would see it in a machining operation, unless frequent offset adjustments were being made. The X̄ chart is still out of control, primarily for the first 4-5 hours after a tool change and for the last 4 hours before a tool change.
In the actual case for part XYZ, the SPC study revealed some good information. Targeting only 0.001 above the lower spec limit of 0.137 was not such a good idea. The decision was made to initially target 0.1395 (0.0025 above the lower spec limit), and, as the data showed, make an offset adjustment of 0.001 every 3 hours to compensate for normal tool wear. The tool would still be changed out after 12 hours of run time, as decided
earlier. Machine 32 was quite new, and was well able to hold tight tolerances consistently, as shown by the R
chart. It was decided to run another study with the new policies in place. Although not shown here, the new
policies were effective.

Attribute Control Charts


When characteristics cannot be defined by engineering specs like dimensions, we get more into attributes. An
attribute might be Go or No-Go, good or bad, defective or not, pass vs. fail, or the like. This could apply to any
process, even the machining operation for which we wanted to do X̄ and R charts on critical features. One
only needs a count of observations on a characteristic rather than the measurement itself, which was required for

X̄ and R charts. Some examples where this might be used are: presence or not of surface flaws, cracks or not
in wire, paint defects or not on a surface, voids or not in material, flash or not on a molded plastic part, a
completed assembly that passes or fails a test station, etc. In many cases, one is making simply a visual
determination, as opposed to measuring an actual specified dimension.
In most cases, attribute data is easier and less costly to collect. One may use simpler Go / No-Go gages, and in
some cases, the data is already available from past inspection logs. Several types of defects can be grouped
together for 1 chart. The control charting is less complicated and more easily understood by others. Attribute
control charts can provide an overall view of the quality of a process. There are some disadvantages as well,
such as: not providing detailed data for individual characteristic analysis, there is no degree of defectiveness of
a part (very defective and slightly defective are treated the same), and the charts generally only say when a
change in process has occurred and will not usually indicate why. The 3 primary chart types are the p-chart, c-chart and u-chart. We will go over each in the following sections.

p-Chart
This is an attribute control chart measuring the output of a process in terms of proportion defective. It works
well for machine-dominant processes as well as operator-dominant processes. The equation for p is given by:

p=d/n


Where d is the number of defective units (having one or more defective characteristics), or nonconforming, and
n is the subgroup or sample size. The proportion defective, or p, is expressed as a decimal number and
generally not as a percent (which is the decimal fraction multiplied by 100), although it could. The sample size
for each subgroup can vary, and may not be the entire lot quantity. One could count a unit as defective regardless of which feature, or how many features, is nonconforming, or one could look at just one characteristic and calculate the fraction defective for it.
When looking at "defective / not defective" probability distributions, the binomial distribution is used rather than
the previously used normal distribution. We will not go into detail on this distribution here, but the student is
encouraged to research the subject on his own if he wishes to learn more. An example of a p-chart with single
control limits is shown below in Figure 4.4h. A corresponding data table should accompany the chart, like
before, to display the raw data and perform the necessary calculations.

Figure 4.4h: p-Chart Example Single Control Limits


From http://www.texasoft.com/manual55.htm

Another example of a p-chart is shown below in Figure 4.4i. Unlike the one above, the control limits vary.
This is explained below.


Figure 4.4i: p-Chart Example Variable Control Limits


From http://www.hanford.gov/safety/vpp/pchart.htm

The horizontal axis can be divided into hours of the day, days, weeks, lots, sample number, etc. The vertical
axis would represent the fraction or percent defective. Data for the initial chart, and establishment of upper and
lower control limits, can be from historical records. This type of chart is not as helpful in diagnosing causes
(like the X̄ and R charts), but can help reduce fraction defectives by pinpointing when out-of-control events
happened. Circumstances can be investigated as to the conditions present when the event(s) occurred, or trends
can be seen. Perhaps when quantities are either very high or low, one may see that events occur.
The centerline (or average) of the data is referred to as p̄ (p-bar), and is equal to the total number of defectives in all of the subgroups divided by the total number of units in all of the samples. The upper and lower control limits are calculated by:

UCLp = p̄ + 3 √[ p̄ (1 - p̄) / n ]
LCLp = p̄ - 3 √[ p̄ (1 - p̄) / n ]

Because binomial distributions are often not symmetrical, the equation for LCLp could yield a negative number. If it does, use zero instead. Because the control limits of the p-chart depend on the subgroup size n (from the above formulas), certain adjustments must be made to ensure that the proper interpretation of the chart is made. In Figure 4.4h above, there is one upper and one lower control limit value shown. This is easier to do and calculate, but should only be done if the subgroup sizes do not vary by more than about 20%. The value for n in the above two formulas would then be the average of the subgroup sizes in the sample data. One thing to note, though, is that if the actual subgroup size is less than the average subgroup size, a point showing above the upper control limit may in fact not be above its own true upper control limit if calculated independently. One should check it and note this when researching results. The example showing variable control limits, Figure 4.4i, has control limits calculated for each subgroup based on its actual subgroup size n. Although the control limits will be correct, the chart looks messier and takes longer to calculate. It is also more difficult to explain to workers or management.
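A brief sketch of the p-chart limit calculation, including the clamp of a negative LCL to zero; the defect counts and subgroup sizes below are invented for illustration.

```python
import math

# Hypothetical subgroups: (number of defectives d, subgroup size n).
subgroups = [(3, 120), (5, 118), (2, 125), (6, 122), (4, 119)]

total_defectives = sum(d for d, n in subgroups)
total_units = sum(n for d, n in subgroups)
p_bar = total_defectives / total_units                 # centerline, p-bar

n_avg = total_units / len(subgroups)                   # acceptable if sizes vary < ~20%
half_width = 3 * math.sqrt(p_bar * (1 - p_bar) / n_avg)
ucl_p = p_bar + half_width
lcl_p = max(0.0, p_bar - half_width)                   # a negative LCL is set to zero

print(f"p-bar = {p_bar:.4f}, UCL = {ucl_p:.4f}, LCL = {lcl_p:.4f}")
```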
It is recommended that n be greater than 50 if possible to be most useful. Another rule of thumb is that n x p̄
should be greater than or equal to 4. This latter rule supports the premise that the better the quality, the larger
the subgroup needed to detect lack of control. If subgroup sizes are small, the control limits calculated will tend
to be wider and therefore show to be in control more often. When each point on the chart represents many
characteristics that lead to a defective or not, a process that appears to be in control may be due to balancing
the good and bad characteristics. If possible, it is recommended to plot separate charts for each characteristic in
order to get a better picture.
Cyclic up and down movements on this type of chart could indicate regular changes in suppliers. A steadily
increasing trend could indicate tool wear or tightening of requirements. A decreasing trend could indicate
improved skill or experience, or a relaxation of requirements. Plot points found below the lower control limit
could indicate a human error in accepting defectives as good, but there could be an assignable cause for the
unusually superior quality. When there is a high concentration of points near the mean, and away from the
control limits, it could indicate non-random sampling, samples coming from mixed sources, or sample
screening prior to inspection. A sudden shift in level could indicate a new operator or significant machine
change. Runs can also apply as discussed for the X̄ and R charts.

As with the X̄ and R charts, after plotting the chart, analyze the data points for evidence of non-control, find
and correct the special causes, and then recalculate the control limits for future charts. It is an ongoing process
of improvement.
A slight variation of the p-chart is the np chart, which is a control chart for number of defectives (rather than
proportion). It is effective when one wishes to know the number of defects in a lot, for example. The subgroup
sizes should remain constant. In almost all ways, it follows the guidelines set forth for the p-chart above.
The mean (centerline) for the chart is np̄ (n p-bar), which is the average number of defectives per subgroup for the study period. Once all the subgroups are listed in a table [one column for the lot number, another column for n for each subgroup (which should be the same), and a third column for np, the actual number of defectives found], total all of the np values and divide by the number of subgroups to get np̄. The control limits are calculated as follows:

UCLnp = np̄ + 3 √[ np̄ (1 - p̄) ]
LCLnp = np̄ - 3 √[ np̄ (1 - p̄) ]

The p̄ figure in the above two equations is arrived at by dividing np̄ by the value of n.
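As a small follow-on, the np-chart limits can be computed the same way; the counts below are invented and the subgroup size is held constant, as the np chart requires.

```python
import math

n = 100                                     # constant subgroup size (required for np charts)
defectives = [4, 6, 3, 7, 5, 2, 6, 4]       # hypothetical np values per subgroup

np_bar = sum(defectives) / len(defectives)  # centerline, np-bar
p_bar = np_bar / n
half_width = 3 * math.sqrt(np_bar * (1 - p_bar))
ucl_np = np_bar + half_width
lcl_np = max(0.0, np_bar - half_width)      # clamp a negative LCL to zero

print(f"np-bar = {np_bar:.2f}, UCL = {ucl_np:.2f}, LCL = {lcl_np:.2f}")
```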

c-Chart
When a single unit of production can and does have multiple defects scattered throughout, or is capable of having numerous defects, it may be useful to institute c-charts, or counts of defects. This could be flaws in a roll of paper, bubbles in glass, or missing rivets in an aircraft assembly. Subgroups may be square yards of cloth, a given area of sheet metal, a number of glass containers, etc. One could analyze individual automobiles or other complex assemblies, where the numerous defects may come from many sources, and no one source produces the majority of the defects or one type of defect is not prevalent. One could also chart bookkeeping errors or accidents for non-industrial settings. The interest is not only in whether there are defects, but in how many defects the unit has.
The subgroup size can be one or more units, but must be constant. Although we will not get into details, this type of charting is based on the Poisson distribution. For the data table, one column would represent the sample number (a sample can be 1 or more units, but all samples must be the same size) and the other would be the total defects found. The mean on the control chart, or c̄ (c-bar), would be the overall average number of defects for all samples. It is the total of the second column divided by the number of sample groups in the first column. The upper and lower control limits are calculated by:

UCLc = c̄ + 3 √c̄
LCLc = c̄ - 3 √c̄

As with the X̄ and R charts, after plotting the chart, analyze the data points for evidence of non-control, find
and correct the special causes, and then recalculate the control limits for future charts. It is an ongoing process
of improvement. An example of data and chart are shown below in Figure 4.4j.
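Using the plate data from Figure 4.4j below (25 plates, 59 total nonconformities), the c-chart centerline and limits work out as in this brief sketch; only the counts listed in the figure are used here.

```python
import math

# Nonconformity counts per plate, as listed in Figure 4.4j.
counts = [1, 0, 4, 3, 1, 2, 5, 0, 2, 1, 1, 0, 8, 0, 2,
          1, 3, 5, 4, 6, 3, 1, 0, 2, 4]

c_bar = sum(counts) / len(counts)               # 59 / 25 = 2.36
ucl_c = c_bar + 3 * math.sqrt(c_bar)            # about 6.97
lcl_c = max(0.0, c_bar - 3 * math.sqrt(c_bar))  # negative, so clamped to zero

out_of_control = [i + 1 for i, c in enumerate(counts) if c > ucl_c or c < lcl_c]
print(f"c-bar = {c_bar:.2f}, UCL = {ucl_c:.2f}, LCL = {lcl_c:.2f}")
print("plates beyond the limits:", out_of_control)   # plate 13 (8 nonconformities)
```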


Plate #    Number of Nonconformities
   1            1
   2            0
   3            4
   4            3
   5            1
   6            2
   7            5
   8            0
   9            2
  10            1
  11            1
  12            0
  13            8
  14            0
  15            2
  16            1
  17            3
  18            5
  19            4
  20            6
  21            3
  22            1
  23            0
  24            2
  25            4
Total          59
Average = 2.36

Figure 4.4j: c-Chart Data and Control Chart

From http://deming.eng.clemson.edu/pub/tutorials/qctools/egt1.htm


This particular process is out of control, as one of the points lies outside the upper control limit. Once the special cause for the out-of-control sample is identified, analyzed, and eliminated, the data should be recalculated and re-charted. The trends presented under p-charts also apply to c-charts. One could see a chart like Figure 4.4i if two changes in process were charted on the same chart, but only for the mean; the control limits would be straight lines as well, connected where the significant change occurred.

u-Chart
This type of chart is applicable in the same situations as the c-chart, and the same criteria apply (Poisson
distribution, and sample size of 1 or more but the same amount in each sample). It focuses on the number of
non-conformities per unit. A data chart like in Figure 4.4j would exist, but a third column would be added
called "Defects per Unit." Assume that the example had 20 additional defects in each sample and that the
number of units in each sample was 5. One would now have an idea what the table would look like.
For this type of chart, the centerline of the chart is ū (u-bar), or the total of the defects per unit for each subgroup divided by the total number of subgroups. The upper and lower control limits are calculated as follows:

UCLu = ū + 3 √( ū / n )
LCLu = ū - 3 √( ū / n )

One could have variable control limits for each sample as in the Figure 4.4i p-chart, and then the sample sizes
could vary, but the chart is messier and harder to explain again.
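A short sketch of the u-chart limits, assuming a constant sample size of 5 units per subgroup and invented defect counts:

```python
import math

n = 5                                             # units inspected per subgroup (held constant)
defects_per_subgroup = [7, 4, 9, 6, 5, 8, 3, 6]   # hypothetical total defects found

defects_per_unit = [d / n for d in defects_per_subgroup]
u_bar = sum(defects_per_unit) / len(defects_per_unit)   # centerline, u-bar

half_width = 3 * math.sqrt(u_bar / n)
ucl_u = u_bar + half_width
lcl_u = max(0.0, u_bar - half_width)              # clamp a negative LCL to zero

print(f"u-bar = {u_bar:.2f}, UCL = {ucl_u:.2f}, LCL = {lcl_u:.2f}")
```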

Others
Although we will not go into any detail here, there are other types of charts as well, like cumulative sum
(CUSUM) control charts. There are other types of special control charts as well, such as median, moving
average, moving range, geometric moving average, exponentially smoothed average, demerits, adaptive, and T2.
One should have an advanced understanding of probability and statistics to proceed to advanced charting
concepts.

Process Capability
A natural follow-up to instituting statistical process control (SPC) is to perform process capability studies.
Once assignable causes are eliminated from the process, common causes reduced, and the process is deemed to
be in statistical control, a capability study may be initiated. By definition, a capability study measures the
performance potential of process when no assignable causes are present. Keep in mind that there will always be
a degree of variability in any process, as discussed earlier. The capability is expressed as the proportion of the
process output that remains within product specifications. This is where we finally begin comparing to the
specification limits. A process is not capable when its capability limits fall outside the specification limits.
Some examples where capability studies are useful include:

- Evaluating new equipment purchases
- Predicting whether design tolerances can be met
- Assigning equipment to production based on the capability of the various pieces of equipment
- Planning process control checks
- Analyzing the interrelationships of sequential processes
- Making adjustments during the manufacturing process
- Setting specifications
- Costing of contracts
- Solving problems in a wide range of situations in administration, engineering and inspection

A process could very well be in a good state of statistical control and still not meet the specification limit
standards. The process may not be centered properly, and a simple setting adjustment is all that is needed. In
some cases, either the adjustment method for the process itself or the inspection instruments are not precise
enough to ensure the process can be capable. Use of simple Go / No-Go gauges may not be adequate. A
statistically stable process could still be producing parts with a relatively high degree of variability. If the
specification limits are relatively narrow, product may not meet the spec. There are instances where
specification limits have been set unduly tight, and it would cost more to move the process to a more capable
processing system or add operations. These studies could prompt a decision to be made to loosen the
specifications and suffer no reduction in product performance. Where the specifications are critical and
unchangeable, these studies will prompt process equipment, parameter or technique changes. Processes that
experience drifts (trends upward or downward), like in the case of natural tool wear, can have routines in place
for determining and making adjustments.
In many ways, the traditional approach of having production make product as quickly as possible, and then
having quality control people inspect and eliminate the product that does not meet specification, can be costly
and wasteful in terms of machine time, operator time, inspection time and material. Even 100% inspection
cannot guarantee catching all defective product.
When one is not in a position to determine whether a process is in statistical control, such as a supplier's process for
incoming material to your operations, performance studies can be conducted on your end. These can even be
useful for examining incoming lots of material, short-time production runs, or even one-time production runs.
For incoming material, the performance study cannot tell us if the supplier's process is in control, but it
may well indicate by the shape of the distribution what percent of parts are likely to be out-of-spec or whether
the distribution was truncated by the supplier by sorting out the obvious non-conforming parts.

Setting up the Capability Study


When beginning a capability study, one of the first steps is to select the critical dimension or variable to be
examined. It may be a critical dimension that must meet product specifications in order to make the product
function properly. It could just as well be a suspect product spec that is not being held historically. Next, data
must be collected. It is best to collect as much data as possible over an extended period of time. Data may
already be available from SPC exercises. The third step is to construct a histogram of the data in order to see
the distribution pattern. This can be a time-consuming process, but there are systems out there like DataMyte
which will automatically collect data from a measurement gauge and even create histograms. When collecting
data, be sure the measuring device is very precise, use best care, and measure at least one order of magnitude
finer than the specification (e.g. if the spec is +/- 0.00X, measure to within 0.000X).
Figure 4.4k below is an example of a histogram, and this one even shows the SPC control limits and spec limits.
By making a graphical representation of the data, one can see if the data appears to fit a normal distribution and
see where the process mean is with respect to the nominal specification. Some software will superimpose a bell
curve on the data.


Figure 4.4k: Histogram


From http://www.cimsys.co.nz/Articles/SPCcanpay.html
If constructing a manual histogram, one must determine the measurement range, divide the X-axis range into
logical groupings (up to 20 if need be) of dimensions or other characteristic, total the number for each range
from the raw data, make sure the Y-axis on the histogram can handle the largest number of samples for the
maximum group size (spacing equally), and then label the histogram as desired.
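For the manual grouping described above, a tool such as numpy can do the binning; the sketch below is a generic illustration using made-up diameter data, not the part XYZ study.

```python
import numpy as np

# Hypothetical diameter measurements (illustrative only).
rng = np.random.default_rng(1)
data = rng.normal(loc=0.1400, scale=0.0008, size=110)

# Divide the measurement range into logical groupings (here, 10 bins) and count.
counts, bin_edges = np.histogram(data, bins=10)
for count, left, right in zip(counts, bin_edges[:-1], bin_edges[1:]):
    bar = "#" * count                      # crude text histogram
    print(f"{left:.4f} - {right:.4f} : {bar}")
```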

Analyzing Results
Data from processes that are in statistical control will tend to form a normal distribution bell curve. If done
manually, one can approximate a curve over the histogram data, and add control limits / spec limits, and process
mean / nominal dimension. In many cases, one will readily see how capable the process is. The figure above
shows a process that is very much in statistical control, is well within the upper and lower control limits, and is
also well within the spec limits. The process mean is skewed just slightly to the right of the nominal dimension
(although not listed, note that the lower / upper control limits are skewed a little closer to the upper spec limit
than to the lower spec limit). If the process is not in statistical control, there is not much point in analyzing
capability further. Get the process under control first and then return to capability.
A process could exhibit signs of being non-normally distributed and still be in control statistically. There may
be a physical stop on a machine which will not permit parts below a certain size. In this case, there may be a sharp
end to the data on the lower side and a normal tail to the right-hand high side. Non-normal distributions could
be due to too low a sample size, a poor probability that the sample was actually representative of the process, or
other factors. Figure 4.4l below shows a few of these. A leptokurtic curve (A) has an extremely high center peak and tails that are spread widely apart; a platykurtic curve (B) has a low peak and tails spread far apart; a skewed distribution has one of the two tails extending much further than the other (C and D).
An exponential distribution is shown below in Figure 4.4m, and is often found in electronic parts testing. More
observations are found below the mean than above it. A process can exhibit multiple peaks, or a bimodal
distribution, as shown below in Figure 4.4n. This normally occurs when data is from two mixed distributions
with separate means. Two different machines may have produced the lot of goods, two operators may have
been involved, or two different suppliers / materials were involved. If the data could be separated, one may find
each to be normally distributed, but each with a different mean.


Figure 4.4l: Non-normal Curves


From http://www.upe.ac.za/psychology/psychons/data3.htm#2.)

Figure 4.4m: Exponential Distribution


From
http://www.itl.nist.gov/div898/handbook
/eda/section3/eda3667.htm

Figure 4.4n: Bimodal Distribution


From
http://www.cimms.ou.edu/~doswell/Nor
mals/bimodal.JPG


Calculated Capability Indices


Beyond looking at histograms to graphically see if processes are capable, various capability indices exist and
can be calculated as long as there are existing specification limits (engineering tolerances). Each index that
will be covered takes the graphical histogram information and expresses it as a number that describes an aspect
of capability. While a number is easier to compare, it is somewhat limited in scope. Combining the two will
provide a complete picture.
In the following sections, (a) USL stands for the upper spec limit, (b) LSL for the lower spec limit, (c) MP for the midpoint of the spec limits, or nominal dimension ( MP = (USL + LSL) / 2 ), (d) TOL is the tolerance, or the distance between the spec limits ( TOL = USL - LSL ), (e) Mean is the process mean of the sample, and (f) σ is one standard deviation of the sample (multiplied by 3 or 6 as required).

CP
CP (often shown as Cp) is the inherent capability of the process, or the ability of a process to produce consistent results. It is the ratio of the tolerance to the 6σ spread of the process (the +/- 3σ control limits), or the ratio of required to actual variability. The formula is:

CP = TOL / 6σ

Values could range from near zero to very large positive numbers. As long as the process mean is equal to the midpoint of the tolerance, or nominal spec, the following is true:

CP > 1.33 :          Process is capable
1.0 < CP < 1.33 :    Process is capable, but should be monitored as CP nears 1.0
CP < 1.0 :           Process is not capable

If the process mean is not equal to the nominal spec, CP values become less precise. Moving the mean through an adjustment will make the process more capable (see K below).

CR
CR is the capability ratio and is the inverse of CP, or:

CR = 6σ / TOL

Values of less than 0.75 indicate capability, except again in instances where the process mean does not equal the nominal spec.

K
K is the comparison of process mean and nominal spec, and thus tells how centered the actual data is within the
specification limits. The formula is:

K = ( Mean - MP ) / ( TOL / 2 )

If K is a positive number, the process mean is above the nominal, and if negative, it is below the nominal. The best value is zero, where the mean and nominal are the same. A value of either 1.0 or -1.0 for K means that the process mean is at one or the other of the spec limits, so that 50% of the parts will not meet spec (and must be scrapped, reworked, or accepted on a written engineering deviation report). At values greater than 1.0 or less than -1.0, over 50% of the parts will not meet spec. The value of K does not relate directly to capability, since K may equal zero and yet the process may not be capable (if CP < 1.0).
CPK
CPK (often shown as Cpk) shows the capability of a process based on the worst-case view of the data, and takes into account its off-centeredness. The formula is:

CPK = the lesser of ( USL - Mean ) / 3σ  or  ( Mean - LSL ) / 3σ

The first part is often known as CPU (CP upper) and the second part CPL (CP lower). One must calculate both figures before comparing the two. If CPU were less than CPL, we would know we were off-centered toward the upper spec limit. A negative value indicates the mean is outside one of the spec limits. Zero indicates the process mean is equal to one of the spec limits. A value between zero and 1.0 indicates that part of the 6σ process range falls outside one of the spec limits. A value of 1.0 indicates that one end of the 6σ process range falls exactly on one of the spec limits. If the lesser of the 2 values is greater than 1.0, then the 6σ process range falls entirely within the spec limits. The ideal lesser value is 1.33 or better. CPK is probably the most useful indicator, as it formulates capability in a manner that compensates for shifts in the process mean from the nominal spec value. If the two values for CPK are equal, or the lesser value for CPK is equal to the value for CP, the process is centered on the nominal spec.
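The four indices above can be computed together from the spec limits, the process mean, and the sample standard deviation. The sketch below uses hypothetical numbers loosely in the spirit of part XYZ (spec 0.140 +/- 0.003); the mean and sigma values are invented for illustration.

```python
def capability_indices(usl, lsl, mean, sigma):
    """Return CP, CR, K and CPK from the spec limits, process mean and sigma."""
    tol = usl - lsl
    mp = (usl + lsl) / 2                       # midpoint of the spec limits (nominal)
    cp = tol / (6 * sigma)
    cr = (6 * sigma) / tol                     # inverse of CP
    k = (mean - mp) / (tol / 2)                # how far off center, as a fraction
    cpu = (usl - mean) / (3 * sigma)
    cpl = (mean - lsl) / (3 * sigma)
    cpk = min(cpu, cpl)                        # worst-case view of the data
    return cp, cr, k, cpk

# Hypothetical example: spec 0.140 +/- 0.003, observed mean 0.1405, sigma 0.0006.
cp, cr, k, cpk = capability_indices(0.143, 0.137, 0.1405, 0.0006)
print(f"CP = {cp:.2f}, CR = {cr:.2f}, K = {k:.2f}, CPK = {cpk:.2f}")
```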

A Final Note
The use of control charts, whether variables control charts or control charts for attributes, if followed
procedurally, will go a long way to help one bring almost any operation under a state of statistical control. By
an ongoing process of finding and eliminating unusual circumstances that lead to special cause variations, and
looking for ways to reduce the extremities of common cause variations, one will increase quality, eliminate
waste, and reduce costs. When followed by process capability studies, one will learn much about how the
actual process relates to engineering requirements, and assist in making changes that could further reduce costs
or assist in the decision making process.
A further study of probability and statistics will allow one to even determine the expected reject rates and
associated costs extending the techniques shown in this module. We did not address that subject in this module,
as one must more thoroughly understand Z values for a normal distribution and the associated theories. There
are numerous sources out there that will explain how to do this type of determination. It is hoped that the
student will take away a practical understanding of the concepts, know the basic techniques and formulas that
are used, have an understanding of how to analyze the data, know a few examples of possible corrections to
look for, and be able to put them into practical use with the material provided.


4.4.6 Application: Using Process Capability


Materials: Calculator, pencil, straight edge, completed Application 4.4.4, paper and attached rough chart
The Acme Company performed a process capability study on part XYZ at the time the revised SPC data was processed. Recall that the main diameter spec was 0.140 +/- 0.003. Using the revised SPC data from Application 4.4.4 (110 measurements), (a) create a histogram (include the LSL, LCL, process mean, spec mean, UCL and USL); sorted data and desired groupings are provided for your convenience; and (b) calculate CP, K and CPK for the part on machine 32.

Questions
1.) What did you conclude about the basic process capability based on the histogram and calculated indices?
How centered is the current process?
__________________________________________________________________________________________
__________________________________________________________________________________________
__________________________________________________________________________________________
__________________________________________________________________________________________
__________________________________________________________________________________________
2.) According to the histogram, if you drew a rough bell curve following the histogram data, was the shape
approximately normally distributed?
__________________________________________________________________________________________
__________________________________________________________________________________________
__________________________________________________________________________________________
__________________________________________________________________________________________
__________________________________________________________________________________________
