
Attribute Measurement System Analysis

Timesheet Processing
Cost Data Integrity Project

Purpose
We need to make sure that our operating definitions are solid and that no variation is introduced by how we measure. For example, if two different people measure the same thing and come up with two different answers, then some degree of variation has been introduced through bias, and we want to measure only real variation. It is therefore important to evaluate, or gage, two things:

1. Repeatability - If the same person measures the same thing, do they still get the same answer?
2. Reproducibility - If someone else measures the same thing, do they get the same answer the first person got?

Note: You may see this referred to as "Gage R & R"
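
These two checks translate directly into code. Below is a minimal sketch in Python (the function names are illustrative, not part of the worksheet) of both tests for a single measured unit:

    # Repeatability: one appraiser's repeated trials on a unit all match.
    def is_repeatable(trials):
        return len(set(trials)) == 1

    # Reproducibility: two appraisers' trials on the same unit all match.
    def is_reproducible(trials_a, trials_b):
        return len(set(trials_a) | set(trials_b)) == 1

    # Unit 9 in Step 3 below: Sherry codes 7 both times and Robert codes 6
    # both times, so each repeats individually, yet they do not reproduce.
    print(is_repeatable([7, 7]))            # True
    print(is_reproducible([7, 7], [6, 6]))  # False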

Step 1 - Select at least two observers or appraisers to participate in the Measurement System Analysis:

Name of Observers / Appraisers:
1.1 Sherry Minzer
1.2 Robert Bowers
1.3 Jamin Musului

1.6 In order to measure repeatability, each person should make at least two trial runs. Indicate the number of trial runs each person will take: 2

Step 2 - Establish acceptable criteria regarding whether the measurement passes or fails:

2.1 Percent that must be in Agreement: 95%
2.2 Confidence Level in Results: 95%
2.3 Number of measurements to be taken: 20
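
The 20-unit sample size and the two 95% criteria can be sanity-checked together: at 95% agreement an appraiser may miss at most 1 of 20 units, and an exact binomial calculation shows how likely a genuinely weaker appraiser is to slip through that screen. A minimal sketch in Python, under the illustrative assumption of an appraiser whose true agreement rate is only 80% (this calculation is a supplement, not part of the worksheet):

    from math import comb

    def binom_tail(n, k, p):
        """P(X >= k) for X ~ Binomial(n, p)."""
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

    n = 20         # units to be measured (criterion 2.3)
    k = 19         # agreements needed to reach 95% (criterion 2.1)
    p_weak = 0.80  # hypothetical true agreement rate of a weak appraiser

    # Chance that the weak appraiser still passes the 19-of-20 screen:
    print(f"{binom_tail(n, k, p_weak):.3f}")  # ~0.069, so ~93% confidence of flagging them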

Step 3 - Select the units to be measured, have each observer measure the units in random order, repeat the process according to the number of trial runs, and tabulate the results in the table below:

No.    Sherry Minzer        Robert Bowers        Jamin Musului
       Trial 1   Trial 2    Trial 1   Trial 2    Trial 1   Trial 2
 1        6         6          6         6          6         6
 2        8         8          8         8          8         8
 3       11        11         11        11         11        11
 4        7         7          7         7          7         7
 5       10        10         10        10         10        10
 6        4         4          4         4          4         4
 7        6         6          6         6          6         6
 8        8         8          8         8          8         8
 9        7         7          6         6          7         7
10        9         9          9         9          9         9
11       11        11         11        11         11        11
12        8         8          8         8          8         8
13        7         7          7         7          7         7
14       10        10         10        10         10        10
15        6         6          6         6          5         6
16        9         9          9         9          9         9
17        7         7          7         7          7         7
18        8         8          8         8          8         8
19        6         6          6         6          6         6
20       10        10         10        10         10        10

Step 4 - Determine the Degree of Agreement in the Data

Observations in Agreement by Observer (Repeatability)
(1 = the observer's two trials on the unit match; 0 = they do not)

No.    1.1 Sherry Minzer   1.2 Robert Bowers   1.3 Jamin Musului
 1             1                   1                   1
 2             1                   1                   1
 3             1                   1                   1
 4             1                   1                   1
 5             1                   1                   1
 6             1                   1                   1
 7             1                   1                   1
 8             1                   1                   1
 9             1                   1                   1
10             1                   1                   1
11             1                   1                   1
12             1                   1                   1
13             1                   1                   1
14             1                   1                   1
15             1                   1                   0
16             1                   1                   1
17             1                   1                   1
18             1                   1                   1
19             1                   1                   1
20             1                   1                   1
Total         20                  20                  19

Observations in Agreement Between Observers (Reproducibility)
(1 = every trial by both observers on the unit matches; 0 = they do not)

No.    1.1 vs 1.2   1.1 vs 1.3   1.2 vs 1.3
 1          1            1            1
 2          1            1            1
 3          1            1            1
 4          1            1            1
 5          1            1            1
 6          1            1            1
 7          1            1            1
 8          1            1            1
 9          0            1            0
10          1            1            1
11          1            1            1
12          1            1            1
13          1            1            1
14          1            1            1
15          1            0            0
16          1            1            1
17          1            1            1
18          1            1            1
19          1            1            1
20          1            1            1
Total      19           19           18
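
The Step 4 tabulation follows mechanically from the Step 3 data. A minimal sketch in Python that reproduces all of the totals above (variable names are illustrative):

    from itertools import combinations

    # Each row is one unit from Step 3:
    # (Sherry trial 1, trial 2, Robert trial 1, trial 2, Jamin trial 1, trial 2)
    data = [
        (6, 6, 6, 6, 6, 6), (8, 8, 8, 8, 8, 8), (11, 11, 11, 11, 11, 11),
        (7, 7, 7, 7, 7, 7), (10, 10, 10, 10, 10, 10), (4, 4, 4, 4, 4, 4),
        (6, 6, 6, 6, 6, 6), (8, 8, 8, 8, 8, 8), (7, 7, 6, 6, 7, 7),
        (9, 9, 9, 9, 9, 9), (11, 11, 11, 11, 11, 11), (8, 8, 8, 8, 8, 8),
        (7, 7, 7, 7, 7, 7), (10, 10, 10, 10, 10, 10), (6, 6, 6, 6, 5, 6),
        (9, 9, 9, 9, 9, 9), (7, 7, 7, 7, 7, 7), (8, 8, 8, 8, 8, 8),
        (6, 6, 6, 6, 6, 6), (10, 10, 10, 10, 10, 10),
    ]
    observers = ["Sherry Minzer", "Robert Bowers", "Jamin Musului"]

    # Repeatability: a unit scores 1 when the observer's two trials match.
    for i, name in enumerate(observers):
        agree = sum(1 for row in data if row[2 * i] == row[2 * i + 1])
        print(f"{name}: {agree} of {len(data)}")  # 20, 20, 19

    # Reproducibility: a unit scores 1 only when every trial by both
    # observers on that unit is identical.
    for a, b in combinations(range(len(observers)), 2):
        agree = sum(1 for row in data
                    if len({row[2*a], row[2*a + 1], row[2*b], row[2*b + 1]}) == 1)
        print(f"{observers[a]} vs {observers[b]}: {agree} of {len(data)}")  # 19, 19, 18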

Repeatability Results
Was each observer able to repeat within our target of 95%?

Observer / Appraiser   No. of Obs   Number in Agreement   Percent   Meets Target?
Sherry Minzer              20               20              100%        Yes
Robert Bowers              20               20              100%        Yes
Jamin Musului              20               19               95%        Yes

Reproducibility Results
How well did each observer do in reproducing the same results as the other observers?

Observer / Appraiser                       No. of Obs   Number in Agreement   Percent   Meets Target?
1.1 Sherry Minzer vs 1.2 Robert Bowers         20               19               95%        Yes
1.1 Sherry Minzer vs 1.3 Jamin Musului         20               19               95%        Yes
1.2 Robert Bowers vs 1.3 Jamin Musului         20               18               90%        No
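
The pass/fail verdicts follow directly from the agreement totals. A minimal sketch in Python of that final step (the 95% target is treated as inclusive, which matches how the worksheet scores Jamin Musului's 19 of 20):

    TARGET = 0.95  # criterion 2.1
    N = 20         # criterion 2.3

    totals = {  # agreement counts carried over from Step 4
        "Sherry Minzer (repeatability)":       20,
        "Robert Bowers (repeatability)":       20,
        "Jamin Musului (repeatability)":       19,
        "Minzer vs Bowers (reproducibility)":  19,
        "Minzer vs Musului (reproducibility)": 19,
        "Bowers vs Musului (reproducibility)": 18,
    }

    for name, agreements in totals.items():
        pct = agreements / N
        print(f"{name}: {agreements}/{N} = {pct:.0%}  "
              f"Meets target? {'Yes' if pct >= TARGET else 'No'}")

The one failing comparison, Robert Bowers vs Jamin Musului at 90%, is the same pair flagged in the reproducibility table above.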
