
TECHNICAL MANUAL

for the



POSITION ANALYSIS
QUESTIONNAIRE
(PAQ)



Ernest J. McCormick, Ph.D.

Robert C. Mecham, Ph.D.

P.R. Jeanneret, Ph.D.


































3rd Edition

© 1989, 1998, 2001 PAQ Services, Inc.

All rights reserved.
Published by PAQ Services, Inc.,
1911 C Street
Bellingham, WA 98225.

The Position Analysis Questionnaire (PAQ) is copyrighted by the Purdue Research Foundation,
West Lafayette, IN 47907.

PAQ® is a registered trademark of the Purdue Research Foundation.


CONTENTS

FOREWORD 1

THE POSITION ANALYSIS QUESTIONNAIRE (PAQ) 2
The Job Elements of the PAQ 2
The Organization of the PAQ 2
Response Scales for Use with PAQ Job Elements 2

SCORING OF THE PAQ 3

DEVELOPMENT OF THE POSITION ANALYSIS QUESTIONNAIRE (PAQ) 4

RELIABILITY OF JOB ANALYSES WITH THE PAQ 5
Reliability of PAQ Analyses for Individual Jobs 5
Influences on Reliability 10

PAQ JOB DIMENSIONS 14
Sample of 2200 Jobs 14
Factor Analyses 14
Job Dimension Scores 16
Reliability of Job Dimension Scores 17

DERIVING PERSONNEL REQUIREMENTS OF JOBS WITH THE PAQ 20
Discussion of Test Validity 20
Methods of Test Validation in Personnel Selection 20
Traditional Methods of Determining Test Validity 20
Job Component Validity 20
Validity Generalization 21
Job Component Validity Based on the PAQ 21
Job Component Validity Analyses with GATB Tests 22
Job Component Validity with Commercially Available Tests 25
Identification of Commercial Tests that Measure GATB Constructs 25
Identification of Commercial Tests That Measure Personality Variables 28
Comparative Effectiveness of Situationally Specific, Job Component, and Generalized Validation Methods 29
PAQ Computer Outputs of Job Component Validity Data 31
Attribute Ratings of PAQ Job Elements 33

USE OF THE PAQ IN JOB EVALUATION AND IN SETTING COMPENSATION RATES 38
Criterion for Determining Job Values 38
The Initial Studies of PAQ-based Job Evaluation 38
Organization Specific PAQ-based Job Evaluation Studies 39
Insurance Company 39
Utility Companies 39
Public Sector 40
The Comparable Worth Issue 40



PREDICTION OF EXEMPT STATUS UNDER THE U.S. FAIR LABOR STANDARDS ACT 45

DEVELOPMENT OF JOB FAMILIES WITH THE PAQ 46

THE PAQ IN PERFORMANCE APPRAISAL 47

THE PAQ AND JOB PRESTIGE 48

COMPARATIVE EVALUATION OF VARIOUS JOB ANALYSIS METHODS 49

SUMMARY 51

REFERENCES 52

APPENDIX 57

INDEX 60

LIST OF TABLES


TABLE 1 - Frequency Distributions of Reliability Coefficients for "Pairs" of PAQ Analyses 6

TABLE 2 - Job Dimensions Based on Principal Components Analyses of PAQ Data For 2200 Jobs 15

TABLE 3 - Illustration of the Basis for Deriving Job Dimension Scores from PAQ Ratings on Four Job
Elements for Three Hypothetical Jobs 17

TABLE 4 - High, Mid-range, and Low Dimension Reliability Coefficients and Corresponding Standard
Errors of Measurement from 43 Studies 18

TABLE 5 - Multiple Correlations of PAQ Overall Job Dimension Scores with GATB Test Criteria and
Percentage of Agreement with Test Inclusion in a Specific Aptitude Test Battery (SATB) 24

TABLE 6 - Correlations Between Predicted and Actual Criteria for Five Constructs as Measured by
Various Commercial Tests 27

TABLE 7 - Multiple Correlations Between PAQ Overall Dimensions and Median Occupational Scores
on the Wonderlic Personnel Test 28

TABLE 8 - Multiple Correlations Between PAQ Overall Dimension Scores and the Percentage of
Persons in Occupations by Personality Indices 29

TABLE 9 - Correlation of Estimated Values with Values in Holdout Samples by Validation Method
Used to Make Estimates 30

TABLE 10 - Correlations Between Percentiles of Ratings of Selected Attribute Requirements of
Occupations and of Test Data on Certain Attributes of Incumbents on Similar Occupations 35

TABLE 11 - Correlations Between Percentiles of Selected Attribute Requirements and of Percentages of
Incumbents in Similar Occupations with High Scores on Specified Indices of the Myers-Briggs Type
Indicator (MBTI) 37

TABLE 12 - Categorization of PAQ Job Dimensions by Skill, Effort, Responsibility, and Working
Conditions 42

TABLE 13 - Evaluation of Seven Job Analysis Methods by Experienced Job Analysts 50

TABLE 14 - Conversion of Scores of Tests Used in Study to Standard Scores 58

LIST OF FIGURES

FIGURE 1 - Inter-Analyst Reliability Coefficients 8

FIGURE 2 - Standard Error of Measurement 10

FIGURE 3 - Reliability and Zero Responses 11

FOREWORD


This manual presents an overview of the Position Analysis Questionnaire (PAQ), some background
information regarding its development, a summary of some of the research carried out with it, and a
discussion of certain of its potential applications. Other available materials include the Position Analysis
Questionnaire and Answer Sheet, the PAQ Job Analysis Manual, the PAQ Users Manual, the Pre-
Interview Job Description Form, the PAQ Interview Guide, the PAQ Workbook, and manuals for PAQ
related software, including the On-line Users Manual, and the Enter-Act Users Manual.

THE POSITION ANALYSIS QUESTIONNAIRE (PAQ)


The Position Analysis Questionnaire (PAQ) is a structured job analysis instrument that consists of 187 job
elements (items) of a generic nature that provide for analyzing jobs in terms of work activities and work-
situation variables. (There are also eight additional items that deal with compensation.)

The Job Elements of the PAQ

The job elements are of a worker-oriented nature in that they characterize, or strongly imply, the
generic human behaviors that are involved in jobs, as contrasted with elements of a job-oriented or
task-oriented nature that deal more with the technological processes of jobs or with the specific
objectives or results of work (McCormick, 1959). The nature of the job elements of the PAQ makes it
possible for virtually any type of position or job to be analyzed with it. At the same time, the PAQ is not
intended to serve as a substitute for all other job analysis methods nor to meet all of the purposes they
serve. (For example, it cannot replace a job description in characterizing the tasks, technical processes, or
operations that are performed by job incumbents, nor can it specify the role or operational objective of
the job in the organization.) Consequently, the PAQ is often used in conjunction with other techniques
when a job analysis study is performed.

The Organization of the PAQ

The basic organization of the job elements in the PAQ is predicated on a worker-job interaction frame of
reference, with elements being separated into divisions representing various types of such interaction.
Elements in the first division (Information Input) are concerned with where and how workers obtain the
information to perform their jobs; elements in the second division (Mental Processes) describe the mental
activities required to perform jobs; and elements in the third division (Work Output) document the
various types of responses or actions involved in jobs. The other divisions are: Relationships with Other
Persons, Job Context, and Other Job Characteristics. An example of a job element in each of the six
divisions is given below:

Division of PAQ                        Example of Job Element

1. Information Input                     1. Use of Written Materials
2. Mental Processes                     42. Coding/Decoding
3. Work Output                          65. Use of Keyboard Devices
4. Relationships with Other Persons    103. Interviewing
5. Job Context                         136. Working in High Temperature
6. Other Job Characteristics           165. Irregular Hours


Response Scales for Use with PAQ Job Elements

When analyzing a job with the PAQ, the analyst rates the relevance of each element to the job using one
of six different types of response scales, such as the Importance of the element to the job. Scales typically
involve eleven scale points (consisting of six whole-number points from 0 to 5 and five intermediate or
mid-points). Several elements use a 0-1 (Does Not Apply/Does Apply) scale.

SCORING OF THE PAQ


When the analysis of a job with the PAQ is complete, a response will have been made for each of the job
elements discussed above. These responses usually are entered directly into the Enter-Act program on the
Internet. Although some computer outputs of PAQ data are expressed in terms of these individual element
responses, for most purposes the computer outputs are based on scores on various job dimensions.
These job dimensions are actually statistically derived factors (combinations of several elements into a
single score), and can be thought of as reflecting the basic structure of human work as measured with the
PAQ job elements. These job dimensions are presented and discussed in a later section of this manual.

DEVELOPMENT OF THE POSITION ANALYSIS QUESTIONNAIRE (PAQ)


The current form of the Position Analysis Questionnaire (PAQ), Form C (1989), is the result of an
evolutionary process covering several decades, in which the primary intent was to develop a structured
job analysis questionnaire that would generally apply across the spectrum of jobs. The ancestors of the
present PAQ (Form C) include the following: (1) the Checklist of Work Activities by McCormick and
Palmer (Palmer, 1958); (2) the Worker Activity Profile (McCormick, Gordon, Cunningham, & Peters,
1962); and (3) the Position Analysis Questionnaire (Forms A and B) developed by McCormick,
Jeanneret, and Mecham (1967, 1969). A summary of the research based on Form A of the PAQ is
reported by McCormick, Jeanneret, and Mecham (1972). Forms B and C of the PAQ are substantially the
same in their basic nature, content and format as Form A. Further information on the development of the
PAQ may be found in McCormick (1979) and McCormick & Jeanneret (1988).
RELIABILITY OF JOB ANALYSES WITH THE PAQ


Reliability generally refers to the stability or consistency of some measurement. The basic concept of
reliability as related to the actual analyses of jobs with the PAQ is concerned with the consistency of the
responses to the PAQ job elements as made by those individuals analyzing the jobs. (The reliability of job
dimension scores is discussed later.)

The measurement of reliability of PAQ job analyses typically is based on two or more sets of analyses.
(Although it is possible to derive measures of reliability for three or more sets of responses, the discussion
here applies to pairs of responses, usually measured with a correlation.) These sets can be of either of two
types. One of these consists of a comparison of the responses on the job elements made by two or more
individuals, herein referred to as analysts; this is "inter-analyst" reliability. The other consists of a
comparison of responses by the same analyst at two or more different times; this is called "rate-rerate"
reliability.

In the case of either of these types of reliability, however, it is possible to compute the reliability on either
of two bases, namely for individual job elements (across a sample of jobs), or for individual jobs (across
all job elements). The following discussion deals with the reliability for individual jobs across all job
elements.


Reliability of PAQ Analyses for Individual Jobs

The basis for the reliability calculations is the two sets of responses independently made by two analysts
for the same job across all job elements. In such an approach there usually would be a correlation for each
job; the average of the correlations for several or many jobs usually would be accepted as an index of the
reliability of the analyses.
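A minimal sketch of this computation in Python, assuming each analyst's responses are stored as a simple
list of numeric element ratings (the job keys and rating values below are hypothetical):

    # Sketch: per-job inter-analyst reliability for pairs of PAQ analyses.
    # Each job maps to the two analysts' ratings over the same job elements.
    from statistics import correlation, mean  # correlation() needs Python 3.10+

    analyses = {
        "job_101": ([5, 3, 0, 2, 4], [4, 3, 0, 2, 5]),  # (analyst 1, analyst 2)
        "job_102": ([1, 0, 2, 5, 3], [2, 0, 2, 4, 3]),
    }

    # One correlation per job, computed across all job elements.
    per_job_r = {job: correlation(a1, a2) for job, (a1, a2) in analyses.items()}

    # The average of the per-job correlations serves as the reliability index.
    print(per_job_r, round(mean(per_job_r.values()), 2))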

An illustration of such an approach was reported by Taylor and Colbert (1978) who arranged for the
independent PAQ analyses by different individuals (typically incumbents) of 325 jobs in the
administrative offices of an insurance company in various parts of the country. The frequency distribution
of the responses for pairs of analysts is given in Table 1. The average reliability coefficient was .68.

In addition, some jobs were analyzed by the same individuals 90 days after the initial analysis. A total of
427 pairs of responses resulted from this procedure, each pair consisting of the rate-rerate analysis by
the same individuals. The frequency distribution of these reliability coefficients is also given in Table 1.
The average of these reliability coefficients is .78.
TABLE 1


Frequency Distributions of Reliability Coefficients
for Pairs of PAQ Analyses


Class Intervals of           Pairs of Analyses          Pairs of Analyses
Reliability Coefficients     by Different Analysts      by the Same Analyst
                             Number    % of Total       Number    % of Total

.90 to 1.00                      7         .6                8        1.9
.80 to .89                     103        8.7              159       37.2
.70 to .79                     411       34.5              176       41.2
.60 to .69                     400       34.4               66       15.5
.50 to .59                     198       16.6               16        3.7
.40 to .49                      52        4.4                2         .5
.30 to .39                      10         .8

TOTAL                         1190      100.0              427      100.0

Average Reliability Coefficient   .68                      .78

Source: Taylor and Colbert (1978a)






Reliability studies have been conducted on a routine basis with the PAQ and are recommended as part of
the PAQ job analysis procedure. Reliability studies (Mecham, 1989a) from 35 organizations were
combined into a sample of 1116 jobs involving 3156 pairs of analysts (some jobs had been analyzed by
more than two analysts). The raw (solid line) and cumulative (dotted line) frequencies of reliability
coefficients at different levels are shown in Figure 1.



FIGURE 1

Inter-Analyst Reliability Coefficients
(N = 1116 Jobs, 3156 Analyst Pairs)

[Figure: raw (solid line) and cumulative (dotted line) percentage distributions of inter-analyst reliability
coefficients, plotted over reliability coefficients ranging from 0.0 to 1.0.]
As part of this same study, the standard error of measurement (Nunnally, 1978, p. 241) for each of the
3156 analyst pairs was also computed. (In general, this statistic shows how much the responses to the
items differ between the analysts in each pair. For example, a standard error of measurement of .5 means
that 68% of the responses were within one-half of a scale point of each other.) The results of this analysis
are presented in Figure 2. (The solid line is the raw percentage; the dotted line is the cumulative percentage.)
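The manual does not reproduce the computation itself; one common formulation (SEM = SD × √(1 − r),
per Nunnally) can be sketched as follows, with hypothetical ratings and a pooled standard deviation used
as a simplifying assumption:

    # Sketch: standard error of measurement (SEM) for one analyst pair,
    # using the common formulation SEM = SD * sqrt(1 - r).
    from math import sqrt
    from statistics import correlation, stdev

    analyst_1 = [5, 3, 0, 2, 4, 1, 0, 3]   # hypothetical element ratings
    analyst_2 = [4, 3, 0, 2, 5, 1, 1, 3]

    r = correlation(analyst_1, analyst_2)  # inter-analyst reliability
    sd = stdev(analyst_1 + analyst_2)      # pooled spread of the responses
    sem = sd * sqrt(1 - r)                 # expected disagreement, in scale points

    # Roughly 68% of paired responses should fall within +/- SEM of each other.
    print(round(r, 2), round(sem, 2))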



FIGURE 2

Standard Error of Measurement
(N = 1116 Jobs, 3156 Analyst Pairs)

[Figure: raw (solid line) and cumulative (dotted line) percentage distributions of standard errors of
measurement, plotted over standard errors ranging from 0.0 to 1.6.]

Influences on Reliability

In interpreting PAQ reliability coefficients, one should be aware of certain statistical properties of such
indices, such as the effect of the number of 0 (Does Not Apply) responses to job elements. In this regard,
Harvey and Hayes (1986) point out that analyses of jobs with many 0 responses can yield inflated
coefficients. To determine the relationship between the size of reliability coefficients and the number of
elements which received 0 responses in a large sample, the number of elements which received average
responses of 0 for each job/position in the Mecham (1989a) study was correlated with the reliability
coefficients for paired analysts for that job, across all jobs in the sample. A moderate correlation (r = .45)
was found, as reported in Figure 3, indicating that in general, reliability coefficients are higher with
increasing numbers of 0 responses. [The standard error of measurement was also found to be related
(r = -.59) to the number of zero responses. Additionally, the reliability coefficients and standard errors of
measurement were found to correlate -.91.] While the inclusion of 0 responses in reliability analyses
apparently tends to inflate estimates of reliability (perhaps because (1) the presence or absence of a work
behavior can be more reliably rated than its level when present, (2) the most-used PAQ items are better
constructed, or (3) the response pattern alters the item variability present), it should be noted that analyst
agreement that an element does not apply to a job also is an important and valid measure.

FIGURE 3

Reliability and Zero Responses
(N = 1116 Jobs)

[Figure: scatter of inter-analyst reliability coefficients (0.0 to 1.0) against the number of items given 0
responses (0 to 150).]
From a more general perspective, the size of reliability coefficients is influenced by response variability.
Such variability can result not only from errors of measurement, but also from actual (true) differences
in the nature of the jobs analyzed. Such differences in response variability are very likely to occur when
jobs of different types (e.g., clerical vs. shop jobs) are analyzed with the PAQ. For example, in the study
done by Mecham (1989a), the number of PAQ items given a response of 0 ranged from about 15 to 155
(of 194 possible responses), with 72 being about average. This variance occurs primarily because the
PAQ is a standardized questionnaire designed for use on jobs of all types. Because many jobs focus on a
few activities, it is not surprising that many items will be given responses of 0 (Does Not Apply) while
other jobs, being more general in nature, receive non-zero responses on many items. Differences in
reliability may therefore be expected because of real differences in the nature of the jobs in question as
well as errors of measurement. This can be seen by noting that the reliability coefficient is the ratio of
true score variance to observed score (true + error) variance as shown below:




r_xx = σ²_t / (σ²_t + σ²_e)

Where:
  r_xx = reliability coefficient
  σ²_t = true score variance
  σ²_e = error score variance




As can be seen, the obtained reliability coefficient can be altered by changes in the true score variance
as well as in the error score variance.
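To illustrate with hypothetical values: if the true score variance for a set of analyses is 9.0 and the error
variance is 1.0, then r_xx = 9.0 / (9.0 + 1.0) = .90; doubling the error variance to 2.0 lowers r_xx to
9.0 / 11.0 ≈ .82, even though the jobs themselves are unchanged.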

For this reason, the Standard Error of Measurement (SEM), which considers both the reliability and the
variability of the responses in determining the degree of agreement (Nunnally, 1978, p. 241), has been
made part of the standard reliability report as described in the PAQ Users Manual. The SEM may be a
better indicator of analyst agreement, especially when jobs having about the same number of zero
responses are compared.

PAQ JOB DIMENSIONS


During the development and experimental use of the PAQ, several factor analyses were carried out with
different samples of jobs that had been analyzed with the PAQ. The statistically derived factors from
these analyses are usually called job dimensions. The analysis reported here was carried out with a sample
of 2200 jobs. The resulting job dimensions are those that are used with the current computer analysis
package.

Sample of 2200 Jobs

The 2200 jobs used in the analysis represent a reasonably representative sample of jobs in the U.S. labor
force in terms of the proportions of employed persons in various major occupational groups of the 1970
U.S. Census. [A comparison of the item responses weighted by the numbers of persons employed in the
1970 and 1980 censuses indicated little change in the average level of work behaviors between the two
decades. Additionally, in a confirmatory factor analysis (Jeanneret, 1987) of PAQ data organized by the
1980 detailed census codes, very similar factors emerged. These two studies led to the conclusion that the
little that was to be gained by making the minor changes identified would not offset the confusion that
would result from changing the dimensions. Hence, the factor analyses based on the 2200 jobs as described
below were retained as the basis for the dimensions used in the current system.]

Factor Analyses

Eight factor analyses (technically principal components analyses) were carried out with the data from the
2200 jobs. First, a separate principal components analysis was carried out with the data on the job
elements (items) within each of the first five divisions of the PAQ. Two analyses were also performed on
division six items, one on the dichotomous (Does Not Apply/Does Apply) items, and one on the
remaining items. The resulting factors were called Divisional job dimensions. Finally, data for most of
the PAQ items were pooled together for another principal components analysis. The resulting factors are
called Overall job dimensions.

The title of each job dimension was based on what was considered to be the predominant construct or
type of behavior represented by the job elements that dominate the dimension (specifically those that had
the highest statistical loadings on the dimension). It should be recognized, however, that some such
titles cannot be completely descriptive of the predominant construct. The titles of the job dimensions are
given in Table 2. This table includes the original title for each dimension. These are listed as Technical
Titles. The table also includes an Operational Title for each dimension. In the case of dimensions for
which these two titles are different, the Operational Title is intended to be somewhat simplified. A brief
description of each dimension is given in the PAQ Users Manual, along with examples of several jobs
with different scores on the dimension to illustrate the various levels of each dimension.

TABLE 2

Job Dimensions Based on Principal Components
Analyses of PAQ Data for 2200 Jobs

#   Technical Title                                   Operational Title

DIVISION DIMENSIONS

Division 1: Information Input

 1. Perceptual interpretation                         Interpreting what is sensed
 2. Input from representational sources               Using various sources of information
 3. Visual input from devices/materials               Watching devices/materials for information
 4. Evaluating/judging sensory input                  Evaluating/judging what is sensed
 5. Environmental awareness                           Being aware of environmental conditions
 6. Use of various senses                             Using various senses

Division 2: Mental Processes

 7. Decision making                                   Making decisions
 8. Information processing                            Processing information

Division 3: Work Output

 9. Using machines/tools/equipment                    Using machines/tools/equipment
10. General body vs. sedentary activities             Performing activities requiring general body movements
11. Control and related physical coordination         Controlling machines/processes
12. Skilled/technical activities                      Performing skilled/technical activities
13. Controlled manual/related activities              Performing controlled manual/related activities
14. Use of miscellaneous equipment/devices            Using miscellaneous equipment/devices
15. Handling/manipulating/related activities          Performing handling/related manual activities
16. Physical coordination                             General physical coordination

Division 4: Relationships With Other Persons

17. Interchange of judgmental/related information     Communicating judgments/related information
18. General personal contact                          Engaging in general personal contacts
19. Supervisory/coordination/related activities       Performing supervisory/coordination/related activities
20. Job-related communications                        Exchanging job-related information
21. Public/related personal contacts                  Public/related personal contacts

Division 5: Job Context

22. Potentially stressful/unpleasant environment      Being in a stressful/unpleasant environment
23. Personally demanding situations                   Engaging in personally demanding situations
24. Potentially hazardous job situations              Being in hazardous job situations

Division 6: Other Job Characteristics

25. Non-typical vs. typical day work schedule         Working non-typical vs. day schedule
26. Businesslike situations                           Working in businesslike situations
27. Optional vs. specified apparel                    Wearing optional vs. specified apparel
28. Variable vs. salary compensation                  Being paid on a variable vs. salary basis
29. Regular vs. irregular work schedule               Working on a regular vs. irregular schedule
30. Job demanding responsibilities                    Working under job-demanding circumstances
31. Structured vs. unstructured job activities        Performing structured vs. unstructured work
32. Vigilant/discriminating work activities           Being alert to changing conditions

OVERALL DIMENSIONS

33. Decision/communication/general responsibilities   Having decision, communicating, and general responsibilities
34. Machine/equipment operation                       Operating machines/equipment
35. Clerical/related activities                       Performing clerical/related activities
36. Technical/related activities                      Performing technical/related activities
37. Service/related activities                        Performing service/related activities
38. Regular day schedule vs. other work schedules     Working regular day vs. other work schedules
39. Routine/repetitive work activities                Performing routine/repetitive activities
40. Environmental awareness                           Being aware of work environment
41. General physical activities                       Engaging in physical activities
42. Supervising/coordinating other personnel          Supervising/coordinating other personnel
43. Public/customer/related contact activities        Public/customer/related contacts
44. Unpleasant/hazardous/demanding environment        Working in an unpleasant/hazardous/demanding environment
45. Non-typical schedule/optional apparel style       Having a non-typical schedule/optional apparel style


Source: Mecham (February 1977)

Job Dimension Scores

The score for a job on a particular job dimension is derived with an equation based on a sum of
cross-products: each standardized response of the job on an individual job element is multiplied by the
weight for that job element. The weight for any given job element is a value that reflects the
statistically-derived importance of the element (i.e., its statistical contribution) to the dimension. The
generic form of the equation to calculate a job dimension score follows:

Job Dimension Score = (w_e-1 × r_e-1) + (w_e-2 × r_e-2) + . . . + (w_e-n × r_e-n)

In this equation: w = weight of a job element; r = response to a job element; and e-1, e-2, . . ., e-n refer to
elements 1, 2, . . ., and n.



Simplified, hypothetical examples of the method of deriving job dimension scores are presented in Table
3 for three jobs (A, B, and C) using four job elements.


TABLE 3

Illustration of the Basis for Deriving Job Dimension Scores from PAQ Ratings on Four Job
Elements for Three Hypothetical Jobs (A, B, and C).


        Element 1        Element 2        Element 3        Element 4      JOB DIMENSION
JOB     w   r   wxr      w   r   wxr      w   r   wxr      w   r   wxr    SCORE

A       7   5   35       2   1    2       9   4   36       1   2    2       75
B       7   3   21       2   2    4       9   2   18       1   3    3       46
C       7   1    7       2   5   10       9   0    0       1   2    2       19

Legend: w = weight
        r = rating on job element
        wxr = cross-product of w and r

Note: The job dimension score is the sum of the cross-products (wxr) of the four job elements.
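A minimal sketch of this weighted-sum computation in Python, using the weights and ratings from Table 3
(the function name is illustrative, not part of the PAQ software):

    # Sketch: a job dimension score as the sum of weight-by-rating cross-products.
    # Weights and ratings are the hypothetical values from Table 3.
    def job_dimension_score(weights, ratings):
        """Return the sum of w x r over the elements loading on the dimension."""
        return sum(w * r for w, r in zip(weights, ratings))

    weights = [7, 2, 9, 1]  # element weights for this dimension
    jobs = {"A": [5, 1, 4, 2], "B": [3, 2, 2, 3], "C": [1, 5, 0, 2]}

    for job, ratings in jobs.items():
        print(job, job_dimension_score(weights, ratings))  # A 75, B 46, C 19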


Reliability of Job Dimension Scores

The reliability of job dimension scores is a measure of the consistency of the scores on a given job
dimension resulting from two or more PAQ analyses of the jobs in a sample of jobs. There are two
primary ways for deriving estimates of such reliability. The first, inter-analyst reliability, would apply
when there are responses for the same jobs as analyzed by different analysts. The second type of
reliability, rate-rerate reliability, would apply when the same analyst has analyzed the same sample of
jobs on two or more separate occasions. Different statistical indices of reliability can be calculated, but
the most common index is the correlation between scores on a dimension derived from pairs of analyses
performed across a sample of jobs.
When interpreting such reliability data, certain considerations should be kept in mind. In the first place,
inter-analyst reliability would be expected to be somewhat lower than rate-rerate reliability (analyses by
the same analyst at different times). Further, the possible restriction of range of the job dimension scores
for a given sample of jobs results in a lower coefficient than would be the case if the range of scores is
greater. A coefficient based on a restricted range, however, can be adjusted to provide an estimate of what
the coefficient would have been had the range been unrestricted. The formula for this adjustment is as
follows (adapted from Nunnally, 1978, pp. 241-242):

r_xx (adjusted) = 1 − (σ²_meas / σ²_xu)

Where:
  σ²_meas = standard error of measurement, squared
  σ²_xu = variance of cases in an unrestricted sample
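To illustrate the adjustment with hypothetical values: if the squared standard error of measurement is .09
and the variance of dimension scores in an unrestricted sample is 1.00, then r_xx (adjusted) =
1 − (.09 / 1.00) = .91.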

The reliability of job dimension scores, as summarized from studies from forty-three organizations is
given in Table 4 (Mecham, 1988a). Although it was not clearly evident from the original data that the
separate analyses of the same job were by different analysts (i.e., inter-analyst reliability), this was
undoubtedly the situation in most cases.

TABLE 4

High, Mid-range, and Low Dimension Reliability Coefficients
and Corresponding Standard Errors of Measurement from 43 Studies
(19,961 Analyst Pairs)¹



            RELIABILITY COEFFICIENTS²                        STANDARD ERROR OF MEASUREMENT
Dimension   High   High     Median   Low      Low      Low    Low      Median   High      High
                   Quartile          Quartile                 Quartile          Quartile

 1          1.00   .98      .94      .83      .00      .050   .168     .255     .413      1.113
 2           .99   .94      .87      .77      .41      .109   .259     .374     .484       .771
 3          1.00   .96      .90      .82      .31      .067   .223     .320     .434       .832
 4           .99   .94      .90      .82      .00      .117   .246     .322     .426      1.048
 5          1.00   .98      .93      .88      .00      .084   .159     .275     .355      1.004
 6          1.00   .97      .94      .80      .38      .076   .185     .247     .450       .791
 7          1.00   .96      .91      .87      .69      .086   .221     .303     .374       .559
 8          1.00   .93      .88      .78      .34      .019   .277     .352     .474       .814
 9          1.00   .95      .92      .75      .21      .061   .230     .293     .504       .889
10           .99   .94      .87      .80      .40      .119   .264     .365     .457       .778

TABLE 4 (continued)

            RELIABILITY COEFFICIENTS²                        STANDARD ERROR OF MEASUREMENT
Dimension   High   High     Median   Low      Low      Low    Low      Median   High      High
                   Quartile          Quartile                 Quartile          Quartile

11          1.00   .98      .94      .84      .51      .040   .160     .264     .402       .703
12          1.00   .97      .90      .78      .22      .059   .200     .320     .472       .885
13          1.00   .97      .94      .85      .07      .080   .191     .254     .389       .968
14          1.00   .99      .97      .87      .06      .047   .110     .198     .362       .971
15           .99   .94      .88      .78      .00      .123   .252     .360     .474      1.015
16           .99   .93      .86      .76      .27      .132   .278     .383     .497       .859
17          1.00   .97      .95      .91      .60      .068   .184     .237     .312       .635
18          1.00   .98      .96      .88      .27      .077   .162     .213     .352       .857
19          1.00   .98      .95      .88      .00      .057   .159     .241     .355      1.295
20           .99   .92      .84      .66      .11      .137   .289     .406     .584       .946
21          1.00   .96      .92      .78      .00      .064   .205     .297     .478      1.012
22          1.00   .99      .93      .79      .00      .033   .103     .277     .469      1.032
23          1.00   .98      .94      .89      .39      .062   .168     .256     .337       .786
24          1.00   .96      .90      .61      .00      .028   .207     .328     .632      1.384
25          1.01   .99      .88      .60      .00      .000   .128     .360     .634      1.140
26          1.00   .98      .96      .90      .69      .023   .159     .222     .317       .563
27          1.00   .98      .86      .71      .33      .018   .159     .380     .542       .823
28          1.00   .99      .99      .83      .00      .025   .101     .126     .414      1.179
29          1.00   .99      .95      .76      .00      .020   .117     .225     .497      1.296
30          1.00   .95      .90      .83      .54      .098   .234     .318     .423       .684
31          1.00   .95      .86      .79      .28      .072   .233     .377     .465       .851
32          1.00   .98      .93      .86      .02      .063   .168     .265     .380       .991
33          1.00   .99      .97      .95      .76      .049   .120     .184     .229       .497
34          1.00   .98      .96      .89      .00      .045   .148     .202     .335      1.132
35          1.00   .94      .88      .82      .52      .064   .253     .354     .428       .696
36          1.00   .96      .89      .81      .00      .078   .210     .336     .446      1.132
37          1.00   .98      .94      .89      .46      .066   .169     .262     .343       .740
38          1.00   .98      .91      .77      .00      .035   .164     .304     .483      1.050
39          1.00   .96      .92      .84      .52      .069   .222     .292     .405       .699
40          1.00   .99      .96      .88      .54      .043   .141     .220     .358       .679
41          1.00   .94      .90      .82      .43      .097   .252     .328     .431       .758
42           .99   .96      .91      .84      .19      .102   .213     .307     .404       .904
43          1.00   .95      .89      .82      .42      .079   .241     .337     .428       .767
44          1.00   .97      .91      .74      .01      .060   .178     .306     .510       .998
45          1.00   .97      .89      .79      .43      .049   .183     .334     .467       .755

¹ Studies taken from the PAQ Master Data Base representing the following industry sectors:
Chemicals and Petroleum, Communications, Financial and Real Estate Services, Government,
Health Care, Manufacturing, Processing, Public Utilities, Social Services, Trade and
Distribution, and Transportation.

² Corrected for restriction of range.

Source: Mecham, 1988a

DERIVING PERSONNEL REQUIREMENTS OF JOBS WITH THE PAQ


The nature of the job-related data that can be obtained with the PAQ suggests certain potential
applications. One application is for deriving estimates of the personnel requirements of jobs, i.e., the
nature of the human attributes required for jobs. Personnel requirements for any job certainly need to be
valid, in that they should make it possible to select job candidates who, in general, would be expected to
be successful on the job in question. The subsequent discussion of methods of test validation applies to
virtually all types of human attributes covered by personnel requirements, including aptitudes,
achievements, personality characteristics, biographical data (bio-data), etc.

Discussion of Test Validity

The concept of validity is complicated and cannot be defined in a single statement; there are many facets
of this concept. The Society for Industrial and Organizational Psychology (1987) sets forth three types of
validity, as follows:

1. Content Validity (the extent to which a test measures some domain of knowledge or
behavior).
2. Construct Validity (the extent to which a test measures some basic, underlying human
attribute or quality, i.e., a construct).
3. Criterion-Related Validity (the extent to which test scores are related to a separate criterion
of interest, such as job performance).


Methods of Test Validation in Personnel Selection

Test validation in personnel selection and placement often involves some form of criterion-related
validity. Three general approaches are discussed below.

Traditional Methods of Determining Test Validity. The traditional methods of determining the
criterion-related validity of tests are as follows: (1) concurrent validity procedures (in which tests are
administered to present employees on a job with the test scores then being correlated with an appropriate
criterion); or (2) predictive validity procedures (in which tests are administered to candidates for a job
with the test scores being correlated with a criterion subsequently obtained).

Job Component Validity. The traditional methods discussed above have been criticized for two basic
reasons: (1) such procedures frequently are not practical because of such factors as limited sample sizes,
the time required and the costs involved; (2) it seems that there should be some general basis for
determining the types of human attributes (as measured by tests) required for each of various types of
work activities. These arguments served as a primary objective in the development of the PAQ, and led to
the crystallization of the concept of job component validity. This is essentially a variant of the concept of
synthetic validity proposed by Lawshe (Lawshe & Balma, 1966).

The job component validity model is based on the hypothesis that, when different jobs have in common a
given job component, the human requirements for fulfilling that component would be the same in all jobs
in which that component exists.

The development of a procedure for establishing the job component validity of tests or other measures of
human attributes would consist of the following: (1) some method of identifying the constituent
components of jobs; (2) a method for determining, for an experimental sample of jobs, the human
attribute(s) required for successful job performance when a given job component is common to several
jobs; and (3) some method of combining the estimates of human attributes required for individual job
components into an over-all estimate of the human attribute requirements for an entire job. The
background research dealing with job component validity as based on the PAQ is discussed in a later
section of this manual.

Validity Generalization. The concept of validity generalization is predicated, as is job component
validity, on the idea that jobs having similar characteristics will likely have similar personnel
requirements. Accordingly, if a test can be shown to be valid for one job, it should, by inference, be valid
for another job with similar characteristics. This reasoning can be extended to families of similar jobs, so
that if a test is valid for one or several jobs within the family, it should be valid, to the degree that jobs
within the family are similar, for other jobs in the family. Furthermore, it is expected that increasing the
sample size by combining results across studies within a family will stabilize the validity estimates
and increase their accuracy.

Schmidt and Hunter (1977) developed a theoretical and statistical framework for validity generalization
which has led them to conclude that test validities can be generalized across broad groups of jobs. In one
analysis, Hunter (U.S. Department of Labor, 1983) summarized test validity data published by the U.S.
Employment Service for a set of 515 jobs. The results of his study implied that validity generalization is
so broad that almost all jobs can be grouped into just five categories based on job complexity (as
determined from job analyst ratings on the degree to which the worker must function with data, people
and things), with tests of only three types of abilities being necessary to tap the human aptitudes required
for jobs in all five categories (these being cognitive, perceptual, and psychomotor abilities). This
formulation provides for the use of a cognitive test for jobs in all five categories along with some
combination of perceptual and/or psychomotor tests for jobs in each of the five categories.

The subsequent use of Hunter's formulation by the U.S. Employment Service in many locations within
the United States has led to a critique of its appropriateness by the National Academy of Sciences
(Hartigan & Wigdor, 1989). The use of this approach by an agency of the Federal Government and the
support given it by the National Academy report (Hartigan & Wigdor, p. 281) also appears to have done
much to legitimize it in the minds of personnel selection specialists. Continuing research and legal
findings might be expected to clarify the appropriate ways in which to apply the generalized validity
methodology.


Job Component Validity Based on the PAQ

As indicated earlier, the PAQ consists of job elements of a worker-oriented type which reflect the types
of human behaviors required in the performance of jobs. Because of the human behavior orientation of
the PAQ job elements, it is possible that these elements might be used as the basis for a job component
validity approach (McCormick, 1959). Thus, it is believed that the PAQ meets the first requirement of the
job component validity concept discussed earlier, namely the need for identifying relevant constituent
components of jobs.

The second requirement involves the identification of the human attributes that presumably are required
for the successful performance of jobs that have any given job component in common. In the simplest
case, such aptitude requirements would be expressed in terms of test scores. There are different possible
procedures for determining the attribute requirements of job components that are in common to a number
of different jobs. Perhaps the most direct approaches involve the collection and analysis of test data for
individuals on each of many different jobs whose job components can be quantified. The use of such data
generally is predicated upon the assumption that people tend to gravitate (drift or selectively
migrate) into those jobs that are commensurate with their own levels of some given attribute. Abundant
evidence of the differences between distributions of test scores by occupation is found in Yerkes (1921),
Strong (1943, Chap. 7 & 8), Harrell and Harrell (1945), Stewart (1947), Schaie (1958), Thorndike and
Hagen (1959, pp. 27-34), Tyler (1965, pp. 335-340), U.S. Department of Labor (1970, Table 92), E.F.
Wonderlic and Associates, Inc. (1983), and Myers and McCaulley (1985, Appendix D). Furthermore, an
analysis of the sample means and standard deviations from 742 validity studies performed using the nine
aptitudes of the General Aptitude Test Battery (GATB) revealed that the distributions of reported mean
scores for jobs had a low probability (p < .001) of resulting from a random selection of scores found in
the general working population. Additionally, the reported standard deviations were smaller for each
aptitude (except manual dexterity) than those expected using random selection from within the general
working population (p < .001) (Mecham, McCormick, & Jeanneret, 1984).

Taken together, this evidence has led the authors to conclude that each job has associated attribute
"bands" indicated by characteristic means and standard deviations. Such bands are related to the nature
of the job itself and the processes by which individuals get into such jobs. In this regard, however, the
presence of employees who possess a high degree of a given attribute on a given job does not necessarily
mean that the attribute is required for satisfactory job performance. A given attribute might not itself be
required for a given job, but might be correlated with some other attribute that is required by the job. The
converse, that persons might have lower than minimally required levels of an attribute, is less likely, as
they would not be expected to remain on the job for any length of time. Although such possibilities exist,
their effects have not been found to materially limit the potentialities of the job component validity
approach.

When some estimate of the attribute(s) required for any given job component is known, it is then possible
to build up the total requirements for the job, by knowing what components exist in any given job. This
would fulfill the third requirement of the job component validity strategy.

The use of the PAQ as the basis for testing the job component validity approach is primarily rooted in the
use of job dimension scores for jobs. The basic approach has consisted of determining the relationship
between the job dimension scores for a sample of jobs and certain sets of test data for incumbents on such
jobs. The test data for the job incumbents are used as a criterion of the importance to the respective
jobs of the attributes measured by the tests. The PAQ job dimension scores may then be used as
predictors of the related test data for the jobs for which PAQ data have been collected.
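A minimal sketch of that prediction step in Python, with hypothetical data and an ordinary least-squares
fit standing in for whatever regression software was actually used in the PAQ research:

    # Sketch: job component validity. PAQ job dimension scores for a sample of
    # jobs predict a test-based criterion (e.g., incumbents' mean test score per
    # job); the fitted weights then estimate that criterion for new jobs.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(60, 5))                  # 60 jobs x 5 dimension scores
    w = np.array([8.0, 3.0, 0.0, -2.0, 5.0])      # hypothetical "true" weights
    y = 100 + X @ w + rng.normal(scale=4.0, size=60)  # mean test score per job

    # Fit regression weights by least squares, with an intercept column.
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)

    # Predict the criterion for a newly analyzed job's dimension scores.
    new_job = np.array([1.0, 0.2, -0.5, 0.3, 1.1])
    print(round(coef[0] + new_job @ coef[1:], 1))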

Job Component Validity Analyses with GATB Tests. An important analysis of the job component
validity concept has involved the use of the nine tests of the General Aptitude Test Battery (GATB) of the
United States Employment Service (USES), these data being available for samples of job trainees or
incumbents in hundreds of different jobs. The test data available for each job includes the mean test score
of the incumbents, the standard deviations of the incumbent scores, and validity coefficients. In addition,
for many of the USES studies, specific tests were recommended for use in selection because they met
some combination of the following criteria: the aptitude was judged to be important by job analysts; the
test scores were relatively high in comparison with scores on the other tests for the sample studied; the
standard deviation was low, a significant validity coefficient was found; and the test, in combination with
other tests evaluated, aided in distinguishing the more from the less productive workers (United States
Department of Labor, 1970, pp. 4794). Tests so identified are part of the Specific Aptitude Test Battery
(SATB) recommended for the job studied. (In recent years, since the adoption of Hunter's validity
generalization methodology by the USES, SATBs are no longer determined nor are job specific validity
studies conducted. Instead jobs are assigned on the basis of complexity to one of the five categories
mentioned before, and the category regression equations are used to estimate the proficiency of job
applicants in all job categories. Applicants, classified by ethnic group membership, who have the highest
percentile scores within ethnic group and job category are then referred for available openings.)

With the continued growth in both the GATB and PAQ databases, it has been possible to conduct
successive studies using samples of increasing sizes. Three major studies have now been conducted
(McCormick et al., 1972; McCormick, Mecham & Jeanneret, 1977, pp. 11-14; and the study reported in
this manual). In these different analyses various criteria have been used as indices of the relative
importance of the test construct to individual jobs. These have included, for each of the nine GATB
constructs (tests) for each job studied: (1) the mean test score of the incumbents on the job; (2) the test
score one standard deviation (σ) below the mean (referred to as "1 SD below the mean," "Mean − 1 SD,"
or as a possible "cutoff" score); (3) the validity coefficient; (4) the standard deviation of the test
score; and (5) whether the test was selected to be part of the SATB for the job.

The rationale for the use of mean test scores of job incumbents as the criterion of the relative importance
of one or more tests for personnel selection for various jobs is predicated on the assumption, mentioned
earlier, that people tend to gravitate into those jobs that are commensurate with their own aptitudes or
other attributes. (This is sometimes called a natural selection theory.) Thus for a given test, high mean
test scores of people on various jobs could imply that those jobs require high levels of the attribute
measured, and low scores would imply the opposite.

The rationale for the use of the 1 Standard Deviation below the mean criterion is essentially the same as
with the use of the mean test score criterion, except that the 1 Standard Deviation below the mean
criterion would generally more nearly approximate typical test cutoff scores for personnel selection by the
U.S. Employment Security Office. (It is, of course, recognized that test cutoff scores vary considerably,
depending upon labor market conditions, but at the same time these values are more typical of cutoff
scores that have been used by the U.S. Employment Security Office. The use of such cutoff scores for the
combination of tests found in SATBs would have typically resulted in not hiring about one-third of those
presently employed in the jobs studied.) The possible rationale for the use of coefficients of validity as the
criterion of the relative importance of tests for personnel selection is predicated on the assumption that the
magnitude of validity coefficients indicates the relative importance to job success of the attribute
measured by the test. The fact that a test was included as part of the SATB is likewise an indication of the
usefulness of the test both on judgmental and empirical grounds. Additionally, in some studies, the
standard deviation was included as a criterion as it indicates the range of scores typical of those
individuals holding the job.

In the most recent job component validity analyses with the GATB (Mecham, 1987), PAQ job analyses
that matched a total of 460 jobs for which test data were available from the USES were found in the
PAQ data bank on the basis of common 9-digit Dictionary of Occupational Titles (DOT) (U.S.
Department of Labor, 1977) numbers. For many of these jobs several independent PAQ analyses were
available, and in such instances a single composite PAQ analysis was derived from all of the PAQ
analyses having the same 9-digit DOT code.

These composite (or averaged) PAQ analyses, as well as individual analyses (where only a single PAQ
record was available for a DOT code), were then used as predictors of the test-related criteria given in
Table 5. The table presents shrunken multiple correlation coefficients for all nine of the GATB tests.
(Shrunken correlation coefficients are adjusted downward from the original correlation coefficients to
correct for possible chance inter-relationships that might produce spuriously high correlations; they
estimate the level of prediction that one would expect within the population.)
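The manual does not state which shrinkage formula was applied; the Wherry adjustment is one standard
choice and illustrates the idea:

    # Sketch: "shrinking" a multiple correlation (Wherry adjustment). Whether
    # this exact formula was used in the PAQ analyses is an assumption.
    # n = number of jobs in the sample; k = number of predictors (dimensions).
    from math import sqrt

    def shrunken_r(r, n, k):
        r2_adj = 1 - (1 - r**2) * (n - 1) / (n - k - 1)
        return sqrt(max(r2_adj, 0.0))

    print(round(shrunken_r(r=0.80, n=460, k=13), 3))  # slightly below .80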


TABLE 5

Multiple Correlations of PAQ Overall Job Dimension Scores with GATB Test Criteria
and Percentage of Agreement with Test Inclusion in a
Specific Aptitude Test Battery (SATB)

(N = 460 Validity Studies)


GATB Test                    Correlation¹      Correlation       Percentage Agreement
                             with Mean         with Validity     with Inclusion
                             Test Scores       Coefficients      in SATB²

G: Intelligence                 .77               .41                  80
V: Verbal Aptitude              .78               .31                  82
N: Numerical Aptitude           .75               .29                  69
S: Spatial Aptitude             .72               .38                  78
P: Form Perception              .61               .22                  69
Q: Clerical Perception          .70               .19                  72
K: Motor Coordination           .67               .20                  74
F: Finger Dexterity             .45               .22                  79
M: Manual Dexterity             .24               .33                  73


Source: Mecham, 1987




¹ The results reported use data remaining after an initial set of regression equations was
calculated and cases were removed which yielded large differences between actual and predicted results
(i.e., extreme residuals of 3 or more Z-score units outside of the predicted value on one or more
of the test predictions). This procedure was followed to reduce the chances that the operational regression
weights would be unduly influenced by extreme cases (which might have occurred because of
misclassification of PAQ data, data entry errors, etc.). The results from the first study of unrestricted data
averaged .02 lower than those reported here, but in no case was the difference in excess of .04. The sample
of validity studies represents 298 different SATBs; that is, some of the studies were replications of earlier
studies, or several different jobs were classified under the same SATB study for which more than one
PAQ profile was available. Combining test validation data and PAQ data within the same SATB
designations and computing a new set of equations resulted in very similar results. The non-combined
data, however, involved more cases and were used because more predictors were typically brought into the
equation, which was thought to add some stability to predictions from PAQ data submitted by the typical
user.

² Multiple discriminant analysis was used to predict the probability that a test would be chosen as
part of the Specific Aptitude Test Battery (SATB) for the job. SATB and PAQ data were available for
236 validity studies. Reported are the percentage of cases from those studied that were correctly
classified.

Note: A study (Mecham, 1988b) was done to determine the equivalence of the predictions from the
1977 equations and those derived in 1987. Predictions of mean test scores from the two sets of equations
were correlated across 2485 average PAQ profiles taken from the master database. The correlations were
highest for the cognitive tests (.90 to .94), high for the perceptual tests (.80 to .91) and moderate for the
dexterity tests (.27 to .85). Somewhat lower correlations between the 1977 and 1987 data sets were found
when the validity coefficients were the criteria.



It is clear from Table 5 that the prediction of mean GATB test scores is much better than the prediction
of the size of the validity coefficients. Further, the prediction of the mean scores of the cognitive tests
(G, V, and N) is best, followed by the perceptual tests (S, P, and Q). The prediction of the dexterity
tests (F and M) was lowest. (It should be added that the correlations with scores 1 Standard Deviation
below the mean were very similar to those for the mean test scores. Thus, results for 1 Standard Deviation
below the mean are not given.)

The use of combinations of job dimension scores to identify the tests used by the USES in their Specific
Aptitude Test Battery (SATB) is also given in Table 5. The percentages of agreement are generally quite
high, most being in the 70s and 80s.

Job Component Validity with Commercially Available Tests. With the viability of the job component
strategy demonstrated with the GATB data, the field application of such information has much potential
value to both employers and job applicants. However, because of a restricted-use policy of the U.S.
Employment Security Office (USES), private employers in the U.S. are not permitted to administer the
GATB or to receive the necessary test scores from job applicants to apply GATB predictions directly.
(The GATB is, however, available for purchase by qualified users in Canada.) Consequently, two
strategies were followed to make the results useful to employers who do not use the services of the USES.
First, commercially available tests that measured some GATB constructs were identified; and, second,
data on commercially available tests which could be used directly in a job component validity study were
obtained.

Identification of Commercial Tests that Measure GATB Constructs. McCormick, DeNisi & Shaw
(1979) obtained test data for incumbents on 202 jobs from various sources, including data furnished
directly by a number of private and public organizations, and some data from published test validation
studies. Wherever PAQ analyses could not be obtained for the jobs from the collaborating organizations,
PAQ analyses for the same jobs in other organizations were drawn from the PAQ data bank.
There was admittedly a problem in matching certain of the commercially available tests with the GATB
tests, but in general terms this was accomplished by the researchers on the basis of the judged similarity
of test content. Given the test(s) that were so matched with any given GATB test, it was then necessary
to convert the scores on each such test to a common metric. The metric used was a standard score system
with a mean of 100 and a standard deviation of 20 (the same as the metric used with the GATB tests). A
major problem in this conversion was that of obtaining an appropriate set of norms (the type of sample on
which the GATB test norms are based). Normative data for such populations were not available for most
of the commercially available tests, so it was necessary in some cases to build up such norms from two
or more different samples.
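A minimal sketch of the conversion to the GATB metric (mean 100, standard deviation 20), assuming a
raw-score mean and standard deviation taken from a suitable normative sample (the values shown are
hypothetical):

    # Sketch: converting a raw commercial-test score to the GATB standard-score
    # metric (mean 100, SD 20) via a z-transformation against normative data.
    def to_gatb_metric(raw, norm_mean, norm_sd):
        z = (raw - norm_mean) / norm_sd    # standardize against the norms
        return 100 + 20 * z                # rescale to mean 100, SD 20

    # e.g., raw score 34 on a test normed at mean 28, SD 6 -> standard score 120
    print(to_gatb_metric(34, norm_mean=28, norm_sd=6))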

It must be recognized that the matching of commercially available tests to the GATB tests involved a
certain amount of judgment, and that the subsequent step of converting scores on such tests to a common
metric also involved judgment. [Recognizing these limitations, however, the results of these processes are
given in Table 14 in the Appendix. That table shows, for each test construct, the common metric that
was used (a mean of 100 and a standard deviation of 20), and the scores on the tests which were
considered as matching that construct. Further, it gives the raw scores on each such test which were
equated to the scores of the common metric.]

The conversion of scores on the various commercially available tests to the standard score system was
necessary in order to be able to derive the criteria of mean test scores and 1 Standard Deviation below
the mean for the incumbents on the various jobs in the sample. Once the standardized criterion values
were derived for the individual jobs in the sample, the regression equations developed for each of the
GATB tests (McCormick, et al., 1977) for the three criteria (mean test score, the value 1 Standard
Deviation below the mean, and validity coefficient) were applied to the jobs in the sample to obtain
predicted criterion values. Those predicted criterion values were then correlated with the actual criterion
values. Such analyses were carried out for five of the constructs represented by the GATB tests, since
adequate data were not available for the other constructs. The resulting correlations are given in Table 6.

TABLE 6

Correlations Between Predicted and Actual Criteria for Five
Constructs as Measured by Various Commercial Tests


                                   CRITERION

Construct             Mean Test      1 SD Below      Validity         N+
                      Scores         The Mean        Coefficients

G: Intelligence        .74***          .66***                         33/
V: Verbal              .71***          .71***           .30           50/36
N: Numerical           .67***          .63***           .29**         64/76
S: Spatial             .74***          .76***           .27           26/43
Q: Clerical            .53*            .60**           -.02           15/29
Average                .66***          .68***


* Significant, p < .05
** Significant, p < .01
*** Significant, p < .001

+ The N at the left applies to the first two criteria; the N at the right applies to the third. (There were
not enough jobs with validity coefficients for the G construct to report.)

In an additional study conducted by Mecham (1988c), tests from the Differential Aptitude Tests (DAT)
battery which correlated well with certain of the GATB tests were identified. Combining correlation
coefficients from between 5 and 11 studies (N = 801 to 1772 persons) yielded the following results: GATB
Verbal with DAT Verbal Reasoning (average r = .71), Language Usage - Spelling (.65), and Language Usage -
Sentences (.71); GATB Numerical with DAT Numerical Ability (.63); GATB Spatial with DAT Space
Relations (.65); GATB Clerical with DAT Clerical Speed and Accuracy (.56).

In the case of cognitive tests used to predict GATB constructs, some confirmatory data can be found in a
job component validity study involving the Wonderlic Personnel Test (WPT). The WPT is a test of
general intelligence. The study (Mecham, 1988d) involved the correlation of PAQ and WPT test data
from over 100 jobs and over 200,000 people. The WPT data were those reported for 1970 and 1983
normative samples (Wonderlic Personnel Test Manual, 1983, Table 4, p. 13). The multiple correlations of
relevant PAQ overall job dimension scores with the median WPT test data for these 100+ occupations
are given in Table 7.
TABLE 7

Multiple Correlations Between PAQ Overall Dimensions
and Median Occupational Scores on the Wonderlic Personnel Test

Date of Normative Study        Correlation
1970                               .81
1983                               .79
Combined                           .82


Source: Mecham, 1988d


Identification of Commercial Tests That Measure Personality Variables. Although the PAQ has been
used primarily to derive estimates of the aptitude requirements of jobs, some data reflect the possibility
of deriving estimates of other human attributes, such as personality variables.

One such study (Mecham, 1988e) involved the use of the Myers-Briggs Type Indicator (MBTI). The
MBTI is a self-report personality instrument that measures basic preferences regarding perception and
judgment. It yields scores on four bi-polar indices: EI - Extraversion vs. Introversion; SN - Sensing vs.
Intuitive perception; TF - Thinking vs. Feeling judgment; JP - Judgment vs. Perception. People are
usually classified by indicating the poles on each index for which they received the highest scores. For
example, the designation ESTP indicates preferences for extraversion, sensing, thinking, and perception.

These scores tend to be associated with occupational choice, and the percentages of persons within
occupations who report a preference for each of the dichotomous poles of each index, as well as for the
various possible combinations of the indices, have been published (Myers & McCaulley, 1985, Appendix
D).

The procedures used in this study will not be described in detail, but involved the use of MBTI test data
for 26,298 persons on 95 occupations. The multiple correlation coefficients of relevant overall PAQ job
dimension scores with test data from the MBTI are given in Table 8. (Specifically, these are correlations
of the PAQ data with the percentages of persons in specific occupations who obtained their highest scores
on specified poles of the MBTI indices.)

TABLE 8

Multiple Correlations Between PAQ Overall Dimension Scores and
The Percentage of Persons in Occupations by Personality Indices

MBTI Index                        Correlation
Extraversion vs. Introversion         .51
Sensing vs. Intuitive                 .55
Thinking vs. Feeling                  .57
Judgment vs. Perception               .57


Source: Mecham, 1988e


Although the MBTI multiple correlation coefficients are not as high as those found with most aptitude
tests, they do support the contention that the PAQ job dimension scores have considerable potential for
deriving estimates of personality requirements for jobs.

Comparative Effectiveness of Situationally Specific, Job Component, and Generalized Validation
Methods

How effective is the job component validation method for identifying tests to be used in personnel
selection and the test values associated with high levels of employee job performance? One practical way
to answer this question is to identify existing acceptable validation methods, obtain results using them,
and then compare the results obtained with job component validity results. This approach was used in a
study (Mecham, 1985) which drew data from among 742 criterion-related validity studies conducted by
the U.S. Employment Service (USES) using the GATB.

In the study, normative test data (averages and standard deviations of scores around the averages) and validity
data were identified which permitted the effectiveness of the three validation methods to be estimated and
compared. The first method to be researched was the criterion-related and situationally specific validation
(SV) method. As this method has a history of professional and legal acceptance (when certain
measurement assumptions are met), it was considered to be the standard against which the other methods
could be compared.

To determine the effectiveness of SV, studies from the USES validity study database were identified for
which normative and validity data were available from two or more samples of employees/trainees on the
same or a very similar job. This search resulted in finding 175 jobs for which there were two or more such
studies.

The rationale behind this part of the study was that when a SV study is conducted in the practical
situation, the results are gathered on one sample of persons and then applied to another sample (those who
are then selected, based on the earlier results). It could be expected, therefore, that by comparing the
results from one sample with those of another sample one could determine the approximate effectiveness
of this approach in the practical setting.
To determine the effectiveness of the first validity studies for jobs in predicting the results of the second
studies, the respective values from the first and second studies for each job were correlated across all 175
jobs. (For example, if the average score on test G was 100 in the first study, and in the second study it was
97, these values, along with those obtained for all 175 jobs, would be correlated together.) This procedure
was used for the averages of each of the nine tests of the GATB, for the unadjusted validity coefficients,
and for the standard deviations of scores about the averages. The results are reported under the SV
columns in Table 9.

Next, multiple regression equations were developed using the job component validity (JCV) method on a
sample (A) of 194 jobs and then applied to a holdout sample (B) of 193 jobs. This process was also
reversed and the equations developed on sample B were used to predict values in sample A. Averaging
the correlation coefficients obtained between predicted and actual values resulted in the coefficients
reported in the JCV columns of Table 9.
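The double cross-validation step can be sketched as follows, with fabricated data; numpy's least-squares
routine stands in here for whatever regression software was actually used, and the sample sizes (194 and
193) are the only figures taken from the text:

    # A rough sketch of double cross-validation: fit on sample A, predict the
    # holdout sample B (and vice versa), then average the two predicted/actual
    # correlations. PAQ dimension scores and criteria are simulated.
    import numpy as np

    rng = np.random.default_rng(0)

    def fit_predict(X_fit, y_fit, X_holdout):
        """Fit ordinary least squares (with intercept) on one sample and
        predict criterion values for the holdout sample."""
        A = np.column_stack([np.ones(len(X_fit)), X_fit])
        beta, *_ = np.linalg.lstsq(A, y_fit, rcond=None)
        return np.column_stack([np.ones(len(X_holdout)), X_holdout]) @ beta

    # Invented dimension scores (predictors) and test-score criteria.
    X_a, X_b = rng.normal(size=(194, 5)), rng.normal(size=(193, 5))
    true_w = np.array([3.0, 1.0, 0.5, -1.0, 2.0])
    y_a = X_a @ true_w + rng.normal(scale=2, size=194) + 100
    y_b = X_b @ true_w + rng.normal(scale=2, size=193) + 100

    r_ab = np.corrcoef(fit_predict(X_a, y_a, X_b), y_b)[0, 1]
    r_ba = np.corrcoef(fit_predict(X_b, y_b, X_a), y_a)[0, 1]
    print((r_ab + r_ba) / 2)  # the averaged cross-validity coefficient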

Finally, the validity generalization (VG) approach developed by Hunter (U.S. Department of Labor,
1983) and applied by the U.S. Employment Service in a number of states was used to predict the values
on 226 validity studies added to the GATB database after Hunter had formed his original job families and
developed estimates of average validity coefficients for those families. Additionally, mean test averages
and standard deviations were calculated within the original families and used as estimates of values in the
holdout sample. The results are found in the VG columns of Table 9.


TABLE 9

Correlation of Estimated Values with Values in Holdout Samples by
Validation Method Used to Make Estimates

                            Averages         Validity Coef.       Std. Dev.'s
Tests                     SV   JCV   VG     SV   JCV   VG       SV   JCV   VG
G-Intelligence           .77   .77   .65    .15   .24   .17     .16   .09   .08
V-Verbal                 .80   .75   .60    .14   .17   .19     .10   .09   .08
N-Numerical              .77   .75   .61    .08   .19   .15     .23   .26   .38
S-Spatial                .74   .75   .65    .06   .19   .03     .12   .26  -.01
P-Form Perception        .63   .62   .45    .11  -.08  -.02     .15   .07   .32
Q-Clerical Perception    .66   .66   .39    .07  -.05  -.06     .16   .16   .23
K-Motor Coordination     .73   .70   .29    .12   .14  -.03     .07   .19   .06
F-Finger Dexterity       .54   .49   .02    .16   .08   .11     .06  -.05  -.03
M-Manual Dexterity       .40   .30   .09    .26   .21   .24     .15   .01   .09


Note: SV = Situationally specific validity; JCV = Job Component Validity;
VG = Hunter's Validity Generalization method. This table is from data found in Tables 1, 4, and 6
of Mecham, 1985.

While somewhat different samples were used by necessity in order to test each of the validation methods,
thereby precluding a definitive statement about which methods are most effective, the data do seem to
indicate that the SV and JCV methods were basically comparable in effectiveness. None of the methods,
however, was very effective at predicting variations between jobs in validity coefficients or standard
deviations for specific tests. It should be noted, however, that the means of the validity coefficients for all
tests were positive (rather than zero), indicating that the tests were generally useful predictors of
employee job performance. Additionally, while there was little to be gained by predicting standard
deviations, the variation of the standard deviations was small across jobs for particular tests (although the
variations were relatively large between tests).

From a practical standpoint, these results suggest that JCV can be used about as effectively as the
traditional situationally specific validation method and with as good or better effect than the VG method
in use in a number of states by the Employment Security Office. Given this comparability and the fact
that it can be used on jobs with small numbers of employees (for which SV studies cannot be performed
because of sample size limitations), the JCV method holds the promise of effective job-related and
scientific personnel selection. The same can be said, but with somewhat less confidence, about VG as it
was implemented by Hunter and the USES. The use of more sensitive measures of job content (such as
the PAQ) to yield more homogeneous groupings of jobs into families may well improve VG predictions,
however.

Research by Gutenberg, Arvey, Osborn and Jeanneret (1983) has offered some support, as well as raised
some questions, regarding the Hunter recommendations. These researchers found that the information-
processing/decision-making PAQ dimensions moderated the validities of various ability tests. In
particular, information-processing/decision-making (Hunter's complexity dimension) moderated
validities for mental ability tests (with positive correlations) and for finger and manual dexterity tests
(with negative correlations). However, physically and manually oriented PAQ dimensions had no
moderating effects for cognitive, perceptual, or psychomotor test validities. Thus, validity studies of
jobs that have a considerable amount of information-processing and decision-making will likely have
higher validity coefficients for tests of mental ability (i.e., verbal and numerical reasoning) than for tests
of psychomotor abilities (i.e., dexterity). Further, jobs that require little in the way of information-
processing and decision making would be expected to have lower validity for mental ability tests. If such
jobs are more manual or physical in nature then tests of psychomotor abilities should be expected to have
stronger validity relationships. This latter finding by Gutenberg, et al., supports the situational specificity
hypothesis regarding test validities and is somewhat contrary to arguments by Hunter that a cognitive test
is valid for jobs with all levels of complexity.


PAQ Computer Outputs of Job Component Validity Data

Computer generated estimates of predicted test data are provided by using the regression equations
developed in the studies previously described. When PAQ data are scored, the resulting predictions
may be in hardcopy form, recorded on a computer readable medium, or transmitted over a
communication carrier (such as telephone lines) from one computer to another. While the amount of
information varies to some extent as a function of the method of transmittal, for each test of the GATB an
estimate is given in terms of a low (1 standard deviation below the average), average (mean), and high (1
standard deviation above the average) score. This "attribute band" (going from the low to the high score)
is where approximately 68% of job incumbent GATB scores are expected to be found. (The accuracy of
such estimates is, of course, a function of several factors, including the reliability and validity of the
PAQ ratings and the accuracy with which the regression equations capture and weight the appropriate
PAQ dimensions.)
Also provided are estimates of the raw (uncorrected) validity coefficients (which are almost always lower
than the true correlations between test scores and job performance). Additionally, the probability that
each test would have been useful in selecting persons for jobs with content of the type reported in the
scored PAQ is given. The three tests having the highest probability of use are marked (with a "<"
character), as are any other tests for which the estimated probability of use is 50% or greater. (Table 5
indicates the accuracy with which these different types of data were predicted in the research study from
which the regression equations were derived, and the previous section deals with the accuracy of such
predictions in comparison with those resulting from other validation methods.)

In using the attribute band information, it may be useful to note that the low score, being about one
standard deviation below the average, has historically been used by the U.S. Employment Security Office
as a minimum or cutoff score for tests found to be useful in the selection of persons for a job. The use
of one such score can be expected to eliminate about 16% of those who would normally be job
incumbents. The use of such cutoff scores for three tests will normally eliminate about 33% of persons
who otherwise would be job incumbents (because the scores from the GATB are somewhat inter-
correlated). Hiring persons who have scored substantially higher than the high scores within the band may
well lead to unwanted turnover, as over-qualified incumbents frequently leave jobs that to them become
tedious and boring (Wonderlic, Long, & Shaeffer, 1984). While such movement between jobs may be
acceptable or even desirable if it occurs within the organization, it may also involve substantial
unrecovered costs if opportunities for upward mobility are not adequate and the individual leaves the
organization.
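The elimination rates quoted above can be checked with a back-of-the-envelope computation. The sketch
below is not part of the PAQ outputs; it derives the roughly 16% figure from the normal distribution and
then simulates cutoffs on three correlated tests, assuming an invented inter-test correlation of .5:

    # Share of a normal distribution falling below a cutoff one SD under the
    # mean, and a simulation of three correlated tests each with such a cutoff.
    import math, random

    def below(z):
        """Proportion of a standard normal distribution below z."""
        return 0.5 * (1 + math.erf(z / math.sqrt(2)))

    print(round(below(-1.0), 3))  # about 0.16 for a single cutoff

    random.seed(1)
    trials, eliminated, r = 100_000, 0, 0.5  # r = assumed inter-test correlation
    for _ in range(trials):
        shared = random.gauss(0, 1)          # component common to all tests
        scores = [math.sqrt(r) * shared + math.sqrt(1 - r) * random.gauss(0, 1)
                  for _ in range(3)]
        if any(s < -1.0 for s in scores):    # fails at least one cutoff
            eliminated += 1
    print(eliminated / trials)  # roughly a third, as the text suggests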

Because the expected relationship between test scores and job success is indicated in terms of the
predicted validity coefficient, the higher the validity coefficient, the more likely it is that high scoring
individuals (keeping in mind the caveat about unwanted turnover indicated earlier) will be more effective
on the job. An additional point to watch concerns cases in which individuals score substantially higher
than the expected aptitude band on tests which were not identified as being particularly useful for
selecting persons for the job. In these cases, they may well discover that they can perform better on a job
which utilizes such abilities and may well selectively migrate to such a job. This is most likely to be the
case for aptitudes which are cognitive in nature, as such aptitudes tend to be strongly correlated with job
prestige and income. Notwithstanding this fact, the most careful attention to matching persons with jobs
should be directed to the use of those tests which have the highest probability of being useful (as such
tests are expected to be most predictive of the person's ability to successfully perform the job). In the
final analysis one must also recognize that labor market conditions will frequently influence the decisions
made, and that many factors other than aptitudes influence job performance.

If the hardcopy report is obtained, similar tests identified earlier for many of the GATB constructs are
listed. In most cases, attribute bands for these tests are found adjacent to the GATB construct they are
believed to substantially measure. If only GATB predictions are available, the appropriate conversion to a
similar test score can be made using the data in Table 14 in the Appendix.

A predicted aptitude band is also estimated for the Wonderlic Personnel Test. This band differs from the
GATB bands in that it is derived from data obtained from job applicants (rather than trainees or
incumbents). The low score is one predicted standard deviation below the predicted average (actually the
median) score of applicants. See the Wonderlic Manual for recommendations on setting cutoff scores
(E.F. Wonderlic and Associates, 1983).
Predicted MBTI data take the form of predictions of the percentage of workers on a job who will score
highest on each of the four bi-polar indices. By multiplying together the percentages that are found for the
pole on each index for which the applicant has the highest score, an estimate of the percentage of persons
on the job with that MBTI profile can be obtained.
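As a one-line illustration of this multiplication rule, suppose (invented figures) that 60% of workers on a
job are predicted to score highest on E, 55% on S, 70% on T, and 45% on P; the estimated share of
incumbents with the full ESTP profile is then the product of the four percentages:

    # Estimated share of incumbents with one full MBTI profile (invented data).
    predicted = {"E": 0.60, "S": 0.55, "T": 0.70, "P": 0.45}

    share_estp = 1.0
    for p in predicted.values():
        share_estp *= p
    print(f"Estimated ESTP share: {share_estp:.1%}")  # about 10.4%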

Attribute Ratings of PAQ Job Elements

In addition to the use of the job component validity approach for deriving personnel requirements for
jobs, PAQ-based data can be used in another manner for this purpose. This procedure is based on the use
of attribute ratings of the PAQ job elements. After the PAQ had been developed, arrangements were
made for obtaining ratings of 76 human attributes that were judged to be relevant for the performance of
the job activities or for tolerance of the conditions indicated by the individual PAQ job elements. Twenty-
seven of the attributes were of a personality or temperament nature, and the remaining 49 were of an
aptitude nature (Marquardt & McCormick, 1972). The following rating scale was used for the purpose of
linking the attributes with the PAQ job elements:


Rating Degree of Relevance
0 No relevance
1 Very limited relevance
2 Limited relevance
3 Moderate relevance
4 Substantial relevance
5 Very substantial relevance


Ratings were obtained from persons who were considered to be experts in personnel psychology,
primarily industrial psychologists. Eight or more experts rated each attribute in terms of its relevance to
each PAQ job element. The median rating of each attribute for each job element was derived. These
medians served as the basis for a matrix of 76 attributes and 187 job elements.

While there are many possible ways in which attribute relevance ratings can be combined with PAQ item
responses to arrive at a composite attribute rating for a job (Shaw & McCormick, 1976), the following
method (suggested by Jeanneret) has both logical appeal and empirical support. First, only PAQ items
having a response of 3 or more are considered (on the premise that these are items which define the core
nature of the job). Secondly, a weighted average of the relevance ratings for such items is calculated on
each attribute using the following formula:

Attribute Score = Σ(A × I) / Σ I

Where: A = Attribute relevance rating
       I = PAQ item response (if 3 or above)

(the summations being taken over all such qualifying items)
This formula, when applied across all 76 attributes, yields a weighted average relevance score for each.
By comparing the score on each attribute with the distribution of such scores found in the normative
sample of 2200 jobs used for the factor analysis, a percentile score is also determined for each attribute as
it applies to each job.
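The formula and the percentile step translate directly into code. In the sketch below the relevance ratings
and item responses are invented; in practice the A values would come from the 76 x 187 matrix of median
expert ratings and the I values from a scored PAQ:

    # Weighted-average attribute score, sum(A*I)/sum(I) over items with I >= 3,
    # and a simple percentile against a normative distribution of such scores.
    def attribute_score(relevance, responses, threshold=3):
        pairs = [(a, i) for a, i in zip(relevance, responses) if i >= threshold]
        if not pairs:
            return 0.0
        return sum(a * i for a, i in pairs) / sum(i for _, i in pairs)

    def percentile(score, normative_scores):
        """Percent of the normative distribution at or below this score."""
        return 100 * sum(s <= score for s in normative_scores) / len(normative_scores)

    # Hypothetical relevance ratings (0-5) and item responses for one attribute.
    A = [4, 2, 5, 3, 1]
    I = [5, 1, 3, 4, 2]           # items rated below 3 are ignored
    print(attribute_score(A, I))  # (4*5 + 5*3 + 3*4) / (5 + 3 + 4) = 3.92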

To check the value of such attribute estimates, they have been correlated on several occasions with other
measures of the degree to which the attributes were present in incumbents on various jobs (Mecham
& McCormick, 1969b; Shaw & McCormick, 1976; Carter & Biersner, 1987).

Recent research has examined the predictive power of scores and percentiles derived in the manner
described above. Carter & Biersner (1987), in a study of 25 U.S. Navy jobs, found that the attribute ratings
of mental abilities correlated significantly with mean test scores on the Armed Services Vocational
Aptitude Battery (ASVAB) in 11 of the expected 15 cases (mean r = .36). They also found that the
physical strength attribute ratings correlated well with strength requirements determined using a different
methodology (biserial rs = .69 to .87).

In the largest study of this type to date (Mecham, 1989b), attribute ratings and percentiles from the master
PAQ database were matched with test data from incumbents on each of the occupations in the databases
used for the recent job component validity studies (described earlier) on the basis of common 9-digit DOT
numbers. Correlations between attribute scores and percentiles were computed with the test data judged to
measure the same or similar attributes. Specifically, attribute data for each occupation were correlated
with General Aptitude Test Battery (GATB) mean test scores, validity coefficients, and whether the test
was included as part of a Specific Aptitude Test Battery (SATB) for the job. Correlations were also
computed between attribute data and occupational median applicant scores on the Wonderlic Personnel
Test (WPT) and the percentages of persons in each occupation with high scores on each of four indices of
the Myers-Briggs Type Indicator (MBTI).

Because the correlation coefficients involving both the attribute scores and the percentiles were very
similar, only the correlations involving the percentiles are presented. Table 10 shows the results obtained
for attributes of an aptitude type. Table 11 presents the results for attributes of an interest or temperament
nature.

TABLE 10

Correlations Between Percentiles of Ratings of Selected Attribute Requirements
of Occupations and of Test Data on Certain Attributes
of Incumbents on Similar Occupations

                                            Mean Aptitude Test Scores (Tests)
Attributes                         G     V     N     S     P     Q     K     F     M    WPT
28. Verbal comprehension          .63   .73   .64   .44   .51   .63   .59   .23   .07   .73
31. Numerical computation         .69   .68   .70   .59   .55   .57   .49   .27   .14   .75
32. Arithmetic reasoning          .73   .73   .72   .61   .56   .59   .52   .25   .14   .84
33. Convergent thinking           .66   .73   .67   .48   .52   .62   .58   .23   .09   .77
34. Divergent thinking            .67   .74   .67   .50   .52   .61   .57   .23   .09   .76
35. Intelligence                  .70   .75   .69   .53   .54   .61   .57   .24   .12   .79
39. Visual form perception       -.16  -.32  -.20   .06  -.12  -.32  -.33  -.03   .10  -.27
42. Perceptual speed             -.01  -.16  -.03   .16   .00  -.16  -.20   .02   .12  -.15
45. Spatial visualization        -.34  -.50  -.37  -.11  -.29  -.48  -.47  -.13   .03  -.49
57. Finger dexterity             -.39  -.46  -.39  -.21  -.17  -.31  -.28   .08   .10  -.41
61. Manual dexterity             -.48  -.57  -.48  -.27  -.31  -.45  -.43  -.05   .04  -.55
65. Rate of arm movement         -.60  -.67  -.60  -.40  -.43  -.55  -.51  -.14  -.03  -.68
66. Eye-hand coordination        -.54  -.62  -.54  -.34  -.37  -.51  -.47  -.08   .01  -.63
69. Simple reaction time         -.52  -.60  -.51  -.38  -.47  -.53  -.53  -.28  -.12  -.65

                                                Validity Coefficients
28. Verbal comprehension          .24   .27   .20  -.10  -.10   .07  -.10  -.20  -.26
31. Numerical computation         .35   .29   .27   .12  -.05   .08  -.15  -.19  -.30
32. Arithmetic reasoning          .33   .29   .25   .05  -.08   .06  -.14  -.20  -.29
33. Convergent thinking           .25   .26   .19  -.06  -.10   .06  -.10  -.20  -.24
34. Divergent thinking            .26   .27   .20  -.06  -.10   .05  -.11  -.21  -.26
35. Intelligence                  .28   .28   .22  -.04  -.11   .04  -.13  -.21  -.28
39. Visual form perception        .02  -.07  -.04   .25   .05  -.09  -.06   .07   .05
42. Perceptual speed              .11  -.00   .04   .27   .05  -.04  -.08   .02  -.02
45. Spatial visualization        -.06  -.15  -.08   .22   .08  -.07  -.01   .13   .13
57. Finger dexterity             -.11  -.17  -.12   .17   .09  -.09  -.01   .12   .13
61. Manual dexterity             -.13  -.20  -.13   .17   .09  -.09   .02   .15   .17
65. Rate of arm movement         -.20  -.24  -.17   .11   .10  -.07   .07   .18   .21
66. Eye-hand coordination        -.18  -.23  -.16   .14   .09  -.09   .05   .17   .20
69. Simple reaction time         -.19  -.22  -.14   .04   .04  -.07   .05   .14   .19


TABLE 10 (continued)

                           Inclusion in a Specific Aptitude Test Battery (SATB)
Attributes                         G     V     N     S     P     Q     K     F     M
28. Verbal comprehension          .53   .41   .24  -.13  -.23   .30  -.10  -.36  -.49
31. Numerical computation         .33   .22   .32   .14  -.14   .23  -.18  -.33  -.46
32. Arithmetic reasoning          .40   .28   .31   .08  -.14   .23  -.18  -.36  -.47
33. Convergent thinking           .50   .36   .26  -.06  -.19   .26  -.12  -.36  -.48
34. Divergent thinking            .51   .38   .27  -.05  -.19   .25  -.13  -.38  -.49
35. Intelligence                  .50   .37   .29  -.01  -.18   .24  -.16  -.38  -.49
39. Visual form perception       -.41  -.36  -.04   .38   .24  -.22  -.07   .12   .22
42. Perceptual speed             -.29  -.28   .02   .34   .16  -.09  -.10   .03   .10
45. Spatial visualization        -.48  -.40  -.09   .32   .24  -.26  -.04   .20   .35
57. Finger dexterity             -.43  -.38  -.16   .18   .27  -.19   .09   .28   .27
61. Manual dexterity             -.49  -.41  -.18   .18   .25  -.24   .09   .31   .38
65. Rate of arm movement         -.50  -.40  -.22   .13   .21  -.27   .09   .36   .45
66. Eye-hand coordination        -.50  -.41  -.20   .16   .22  -.26   .08   .33   .41
69. Simple reaction time         -.37  -.24  -.16   .05   .09  -.18   .03   .26   .39


Source: Mecham, 1989b.


Note: Mean scores and validity coefficients were from 459 studies, SATB data from 236 studies. WPT
refers to the Wonderlic Personnel Test, a general measure of intelligence, with median applicant scores
reported for 108 studies. The underlined correlations are those of corresponding constructs represented by
the attributes and the tests. Tests from the General Aptitude Test Battery (GATB) are identified as
follows: G - Intelligence; V - Verbal; N - Numerical; S - Spatial; P - Form Perception; Q - Clerical
Perception; K - Motor Coordination; F - Finger Dexterity; M - Manual Dexterity



As can be seen, the correlations between the attribute percentiles and mean cognitive test scores (G, V, N,
WPT) were generally moderate to high, while those with the perceptual and psychomotor tests were near
zero or negative. The correlations with validity coefficients, while low, were generally in the expected
direction and compare favorably to those found earlier when validity coefficients from an initial study
were used to predict those in a follow-up study (see Table 9, column 4). The correlations with whether a
test was included in a SATB were generally moderate and in the expected direction. Of special interest
was the fact that the attribute data were indicative of whether the finger and manual dexterity tests were
found to be useful in a selection battery (even though the mean scores correlated at a near zero level with
attribute data).

TABLE 11

Correlations Between Percentiles of Selected Attribute Requirements and of
Percentages of Incumbents in Similar Occupations with High Scores
on Specified Indices of the Myers-Briggs Type Indicator (MBTI)

                                                          Indices
                                             Extraversion  Sensing    Thinking  Judgment
                                             vs.           vs.        vs.       vs.
Attributes                                   Introversion  Intuition  Feeling   Perception
6. Dealing with people.                          .21         -.29        .09       .40
7. Social welfare.                               .17         -.28        .11       .38
8. Influencing people.                           .19         -.30        .11       .42
9. Directing/controlling/planning.               .12         -.35        .20       .40
10. Empathy.                                     .18         -.31        .11       .41
12. Conflicting/ambiguous information.           .08         -.37        .24       .42
14. Sensory alertness.                          -.35          .16        .03       .05
15. Attainment of set standards.                -.41          .11        .04       .04
18. Separation from family/home.                 .24         -.32        .17       .34
19. Stage presence.                              .20         -.27        .07       .42
21. Tangible/physical/end products.             -.17          .28       -.15      -.36
22. Sensory/judgemental criteria.               -.18         -.40        .34       .44
23. Measurable/verifiable criteria.             -.16         -.45        .36       .37
24. Interpretation from personal viewpoint.      .17         -.34        .15       .39
26. Dealing with concepts/information.           .08         -.42        .21       .40


Note: Sample consisted of 96 occupations (r = .24 is significant at the .01 level using a one-tailed test).


Source: Mecham, 1989b


From Table 11 it can be seen that while most correlations with personality variables are modest, a number
are significant and tend to be in the expected direction.

Taken as a whole, the attribute scores seem to be indicative of many of the human attributes associated
with various occupations. While they are usually not as highly correlated with specific scores of job
incumbents on tests presumed to measure the same attributes as are most estimates based on job
component validity studies (compare Tables 10 and 11 with Table 5), they broaden the number of
constructs addressed and generally have some empirical support. They have also been applied in a
number of practical situations [see, for example, Jeanneret's (1988a) development of requirements for
production operators and space mission specialist functions (Jeanneret, 1988b)].
USE OF THE PAQ IN JOB EVALUATION AND IN SETTING COMPENSATION RATES


One of the important practical applications of the PAQ is in job evaluation and in the setting of
compensation rates for jobs. Support for such an application came from the results of a study with the
original form of the PAQ (Form A) (McCormick, et al., 1972). This study is discussed later, but the basis
for such a use is presented here, specifically the criterion for determining job values.

Criterion for Determining Job Values

A central issue in establishing pay rates for jobs relates to the standard or criterion that should be used in
determining the value of jobs. This is of special concern in the frame of reference of comparable worth or
pay equity as it deals with earnings of men and women.

Although various concepts have been proposed for such criteria, there are serious problems in
crystallizing such criteria and measuring them. In addition, it is argued that there is no conceptually
appropriate, economically viable, or practical basis for determining the value of jobs without considering
what can be viewed as a hierarchy of job values that underlies the wages and salaries paid to all jobs
throughout the entire occupational structure of the economy. This value system is essentially a function of
the supply of and demand for individuals who possess the relevant job skills, who have the ability to
apply the relevant effort, who are capable of assuming the relevant job responsibilities, and who are able
and willing to work under the conditions in question. Such considerations argue for the use of a criterion
of job values that is rooted in this hierarchy. Granting that the going rates of jobs reflected by this
hierarchy do not comprise a perfect criterion, there seems to be no other reasonable and practical
alternative. The going rates of jobs in the labor market, then, have been the criterion of job values used
with the PAQ in what is called job component job evaluation (although other possible criteria, if
available, could be used).

The Initial Studies of PAQ-based Job Evaluation

In one of the initial studies of the PAQ in the job evaluation context, Mecham and McCormick (1969a)
used data for a sample of 340 jobs from 45 varied organizations to identify the relationship between PAQ
dimension scores and going rates of compensation. The sample of jobs included those in most major
occupational categories, although those in the professional, technical, and managerial occupation group,
and in the clerical and sales occupation group tended to be predominant (these including 90 and 78 jobs
respectively). The range of monthly compensation rates for the jobs at that time was from $275 to $1773,
with a standard deviation of $285 per month. The wage and salary data originally reported for these jobs
had been given in various terms, such as earnings per hour, and salaries per week, per month, or per year.
These various types of earning data were converted to the common metric of earnings per month, based
on the assumption of a 40-hour week in the case of hourly-paid jobs.
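The conversion itself is simple arithmetic. A minimal sketch, assuming the 40-hour week stated above and
a 52-week year (the latter an assumption, since the exact annualization convention is not reported):

    # Convert reported pay figures to the common metric of earnings per month.
    def to_monthly(amount, period):
        factors = {
            "hour": 40 * 52 / 12,   # 40-hour week, 52 weeks, per month
            "week": 52 / 12,
            "month": 1.0,
            "year": 1 / 12,
        }
        return amount * factors[period]

    print(round(to_monthly(4.25, "hour"), 2))    # $4.25/hr  -> $736.67/mo
    print(round(to_monthly(21_000, "year"), 2))  # $21,000/yr -> $1,750.00/mo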

This total sample was subdivided into sub-samples (sample A with 165 jobs and sample B with 175 jobs).
A double-cross-validation procedure was carried out, utilizing job dimension scores as predictors of the
compensation rates for the jobs. This double-cross-validation procedure involved the use of build-up
stepwise regression analysis. Scores on the nine job dimensions which contributed most to the multiple
correlation with the criterion in each of the two samples (sub-samples A and B) were then used as the
basis for the derivation of predicted compensation point values for jobs in the other sample (B and A,
respectively); these predicted compensation point values were derived on the basis of the statistically-
obtained weights for the job dimensions in question. The multiple correlation coefficients and
cross-validation coefficients of the regression equations are given below, along with the multiple correlation
coefficient for the combined sample.

Initial Sample        Cross-Validation Sample        Combined Sample
A         B           B         A
.87       .90         .85       .83                       .87


A subsequent analysis in 1977 of data for a sample of 850 jobs for which wage and salary rates were
available resulted in a multiple correlation coefficient of about .85 as based on a regression analysis of
this total pool of jobs.

It should be noted that these two analyses were based on jobs from a wide variety of industries in various
parts of the country. Thus, some of the variance in the criterion values of wage and salary rates may be
associated with the industries and geographical locations in question; to the extent that this may be the
case, the correlations are under-estimates of the relationship between the PAQ dimensions and the
compensation data.

Organization Specific PAQ-based Job Evaluation Studies

The initial studies discussed above indicated that PAQ data could be used as the basis for a job evaluation
system. A number of subsequent studies have added further support to this conclusion. It should be noted,
however, that in the later studies Form B of the PAQ was used (Form B varied only slightly from Form
A), and that, in certain studies, different sets of job dimensions were used, as based on data from different
samples of jobs.

Insurance Company. One study involved 79 jobs in a major insurance company which had a formal job
classification and evaluation plan in existence for some time (Taylor, personal communication, 1972). A
regression equation using PAQ job dimension scores was used to derive predicted job evaluation point
values for the jobs in question. The predicted values were then correlated with the actual salary rates, the
resulting correlation coefficient being .93.

Utility Companies. Two studies have been conducted in Southwestern electric utility companies,
whereby data derived from the administration of the PAQ were used to develop job evaluation ranges
(Jeanneret, 1972, 1980). It should be noted that formal job evaluation plans did not exist in either of these
organizations prior to the time of the studies. In one company a study was made of 112 different non-
supervisory jobs which were primarily of a clerical, accounting, or customer service nature.
Approximately 1400 individuals filled these jobs. First, major job classifications were identified on the
basis of similarities among jobs in terms of their job dimension profiles. Then predicted job evaluation
points were developed for each job, and these points were used to develop salary grades. These final
grades were found to be quite consistent with local market salaries and closely matched the salary
classifications of other companies that were using other job evaluation procedures.

A similar study was conducted in the second utility company, although the parameters were somewhat
smaller; only 52 different non-supervisory jobs of a clerical, accounting, or customer service nature were
studied, these jobs being held by 525 incumbents. The results, however, were quite similar and salary
classifications and ranges were developed that proved to be quite consistent with local market conditions
and other formal evaluation plans existing in other companies.

Public Sector. The PAQ also has been utilized in the study and development of job evaluation and
compensation programs in the public sector, especially in state and local government units. Probably the
first such study was conducted in a medium-size city, in which 19 municipal benchmark jobs were
analyzed with the PAQ (Robinson, Wahlstrom, & Mecham, 1974). In turn, PAQ job evaluation points for
each of the benchmark jobs were compared with the results of four other methods of deriving
compensation rates. The intercorrelation coefficients between the various methods ranged from .82 to .95.
Additionally, 21 cities of similar size supplied salary information on the benchmark jobs for which PAQ
predictions were made. The correlation coefficient between the median salaries of the key jobs and the
predicted PAQ points was .945. Thus, the "policy-capturing" approach utilized by the application of
weighted PAQ job dimensions was found to have evaluated municipal jobs in the same manner and at
the same salary levels as other methods of job evaluation.

The Comparable Worth Issue

In the United States and certain other countries the comparable worth issue (sometimes called pay equity)
has focused attention on the pay for men versus that for women. It is well recognized that in the United
States during the late 1970s and early 1980s women in general received about 60 to 65 percent of
the earnings that men received. It has been alleged that this difference is in part the consequence of some
type of discrimination. It is not appropriate here to resolve this issue, but it is appropriate to discuss
certain aspects of it that are relevant to the use of the PAQ for job evaluation purposes.

As an aside, it should be noted that many variables other than discrimination can account for much of the
35% to 40% difference in the overall earnings of men and women. A major part of this difference is
attributable to the nature of the jobs held by men and by women. In addition, some part of it can be
attributed to such variables as differences in experience and job tenure, hours worked, overtime, shift
work, and night work (Mecham, 1984).

The potential relevance of the PAQ to the comparable worth issue lies in the fact that the PAQ provides
for the quantitative measurement of generic job characteristics of a worker-oriented, behavioral nature.
These are reflected in the job dimension scores. Such scores, in turn, can be used to compare
quantitatively the similarities (i.e., the comparability) between and among jobs. Further, there is evidence
that the analysis of jobs with the PAQ is not influenced by the sex of the job incumbent (Arvey, Passino,
& Lounsbury, 1977). Thus the PAQ presumably does not lend itself to discrimination in the actual job
analysis process.

The ability to measure job characteristics, however, does not insure the relevance of that which is
measured (i.e., the human behaviors underlying the job dimensions). The relevance of the PAQ for this
purpose must lie in its use as a predictor of the criterion of going rates of pay as discussed before. This
prediction would be rooted in the hypothesis that various job components (such as the basic human
behaviors measured by the PAQ job dimensions) underlie the hierarchy of job values of going rates of
pay.

This hypothesis is generally confirmed by the results of a study by Mecham (1986) in which PAQ job
dimension scores were correlated with average earnings of people in 397 jobs as reported for the 1980
census of the United States (Bureau of the Census, 1984). Correlations between the PAQ job dimension
scores for the jobs were computed separately with the earnings of men and of women in the jobs. Separate
regression equations were derived for men and for women consisting of the statistically-weighted job
dimensions that best predicted average earnings. Male and female equations included essentially the same
job dimensions, but with somewhat different statistical weights. The correlation of the predicted earnings
across 397 jobs as based on male and female equations were of the order of .96 and .97 (depending on
differences in the weighting methods used with the models and the sex-ratio of males and females on the
jobs that were used). In turn, the correlations of predicted earnings (based on these equations) with actual
earnings ranged from .72 to .87.

Thus, it seems that job characteristics as measured by the PAQ job dimensions are indeed strongly
correlated with going rates of pay in the labor market, and additionally it appears that there are no
substantial differences in the combinations of job dimensions that best predict earnings of men versus
women. (Across the spectrum of work, female dominated occupations were found to be associated with
considerably lower earnings, however, for both women and men who worked on such jobs.)

In the actual use of the PAQ as the basis for comparing the earnings of men and women, Jeanneret (1972)
derived predicted earnings of men and women on comparable jobs in the two utility companies referred
to above and in a savings and loan association. (The comparability was based on the similarity of PAQ
job dimension scores.) In these organizations he found moderate systematic differences in the actual pay
of men and women in comparable jobs, with men having higher earnings. (In these companies the
salaries of women were subsequently adjusted upward.)

A more recent study conducted by the Office of Women and Work, Michigan Department of Labor, State
of Michigan (1980), focused on comparing job evaluation results, especially in terms of their fairness to
female-dominated jobs. A total of 207 jobs were selected for study. Sixty-nine were male-dominated,
84 were female-dominated, and the remaining 54 were gender-neutral. Every job was analyzed and
evaluated using the PAQ and a point factor plan adapted from the Factor Evaluation System of the
Federal government. The point factor ratings were completed by a committee selected especially for their
lack of bias. The average reliability of the PAQ data was .80, and that of the point factor plan ratings was .81.
Multiple regression analyses for both evaluation methods yielded coefficients in the .90s. However, it
was concluded that the PAQ had some advantages in that it provided more detailed information on
independent dimensions of job content than the point factor plan. The analyses indicated that the PAQ
revealed a $115 per month male-female pay differential for jobs that had the same point values, while the
point factor plan found a $194 per month difference.

Apart from the issue of fairness, it is also important to consider how the PAQ conforms to the Equal Pay
Act, which is the overriding statute that regulates any job evaluation, classification, and compensation
system in the U.S. In essence, the Equal Pay Act requires that jobs be evaluated on the basis of skill,
effort, responsibility, and working conditions. Jeanneret (1980) made an initial categorization of the PAQ
items and dimensions in terms of the four Equal Pay Act categories. A more definitive classification of
the PAQ items and dimensions in terms of skill, effort, responsibility and working conditions is presented
in Table 12. Definitions used for these four categories are consistent with the guidelines provided in the
Equal Pay Act itself, as well as the definitions provided by the American Compensation Association and
the American Society for Personnel Administration (1988). Clearly, the PAQ encompasses each of the
four categories in terms of both specific PAQ items as well as the divisional and overall job dimensions.

TABLE 12

Categorization of PAQ Job Dimensions
By Skill, Effort, Responsibility, and Working Conditions

SKILL
  1   Interpreting What is Sensed
  2   Using Various Sources of Information
  4   Evaluating/Judging What is Sensed
  6   Using Various Senses
 *7   Decision-Making
 *8   Processing Information
 *9   Using Machines/Tools/Equipment
 12   Performing Skilled/Technical Activities
*13   Performing Controlled Manual/Related Activities
*33   Having Decision, Communicating, and General Responsibilities
*34   Operating Machines/Equipment

EFFORT
  3   Watching Devices/Materials for Information
 *7   Decision-Making
 *8   Processing Information
 *9   Using Machines/Tools/Equipment
 10   Controlling Machines/Processes
*13   Performing Controlled Manual/Related Activities
 14   Using Miscellaneous Equipment/Devices
 15   Performing Handling/Related Manual Activities
 16   General Physical Coordination
 23   Engaging in Personally Demanding Situations
 30   Working Under Job-Demanding Circumstances
*31   Performing Unstructured vs. Structured Work
*33   Having Decision, Communicating, and General Responsibilities
 39   Performing Routine/Repetitive Activities
 41   Engaging in Physical Activities

RESPONSIBILITY
 17   Communicating Judgments/Related Information
 18   Engaging in Personal Contacts
 19   Performing Supervisory/Coordination/Related Activities
 20   Exchanging Job-Related Information
 21   Public/Related Personal Contacts
*31   Performing Unstructured vs. Structured Work
*33   Having Decision, Communicating, and General Responsibilities
 37   Performing Service/Related Activities
 42   Supervising/Directing/Estimating
 43   Public/Customer/Related Contacts

WORKING CONDITIONS
  5   Being Aware of Environmental Conditions
 22   Being in a Stressful/Unpleasant Environment
 24   Being in Hazardous Job Situations
 25   Working Non-Typical vs. Day Schedule
 26   Working in Businesslike Situations
 32   Being Alert to Changing Conditions
 40   Being Aware of Work Environment
 44   Working in an Unpleasant/Hazardous/Demanding Environment


Those dimensions with an * can be classified in more than one category, given the ACA definition of
skill, effort, and responsibility and the content of the PAQ dimensions.

The numbers 1 to 44 are used to label the PAQ dimensions. Dimensions 1 to 32 are divisional dimensions
(derived only on the basis of the items within their respective divisions), while Dimensions 33 to 44 are
overall dimensions (based on all items considered simultaneously).



Source: Jeanneret, 1988

Although the derivation of compensation indexes with the PAQ has typically been based on the
generalized equation that was developed during the study of jobs in various organizations, it is possible
in some instances to derive unique equations that reflect the compensation policy of individual
organizations, or the going rates within specified labor markets. (These are options that are available to
organizations interested in such an approach. Such approaches would be particularly relevant in the case
of organizations with a fairly large number of jobs.) However, the generalized equation has been found to
be extremely stable over time, indicating that while the technology of jobs has changed, the values of
basic work behaviors measured by the PAQ have remained consistent.

The PAQ approach to job evaluation, as used in a variety of industries and locations, has often replaced
more conventional job evaluation approaches because of the greater efficiency and objectivity inherent in
the technique. Specific methods for using the technique are outlined in the PAQ User's Manual.

PREDICTION OF EXEMPT STATUS UNDER THE U.S. FAIR LABOR STANDARDS ACT


Jobs of an executive, administrative or professional nature are exempt from the overtime pay and certain
other provisions of the U.S. Fair Labor Standards Act. The regulations used to classify a job into one of
these categories are subject to varying interpretations and the lines of demarcation may be blurred by the
introduction of new technology and changes in the structure of work.

A study to determine the utility of PAQ data in predicting whether a job would be classified as exempt or
non-exempt was undertaken by Mecham and Wilcox (1982). The study involved 3,090 jobs which had been
analyzed with the PAQ and for which users responded to a survey question asking how the job had been
classified. Of the 3,090 jobs, 162 had been classified as exempt and the remaining 2928 had been
classified as non-exempt.

Multiple discriminant analysis was used to identify PAQ items which were useful in differentiating
between jobs classified as either exempt or non-exempt. In addition, the researchers eliminated from
consideration those items which did not appear to logically relate to some aspect of the statute or
associated regulations. The result was a set of 32 PAQ items with weights that could be used to estimate
the probability that a job would be classified as exempt or non-exempt.

Using the items and weights resulted in correctly predicting the status of the jobs in the sample as
follows: exempt, 80.9%; non-exempt, 98.5%; all jobs, 97.5%.
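As an illustration only: the study used multiple discriminant analysis, and its actual items and weights are
not reproduced here, so the sketch below simply shows how a weighted sum of item ratings can be mapped
to a probability-of-exempt estimate through a logistic function, with invented weights:

    # Map a weighted sum of PAQ item ratings to a 0-1 probability estimate.
    # The weights and intercept are invented, not the study's 32-item solution.
    import math

    def exempt_probability(item_ratings, weights, intercept):
        z = intercept + sum(w * x for w, x in zip(weights, item_ratings))
        return 1 / (1 + math.exp(-z))

    weights, intercept = [0.9, 0.6, -0.4, 1.1], -4.0
    print(round(exempt_probability([4, 3, 1, 5], weights, intercept), 3))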

Several scoring options produce an estimate of the probability that a job would have been classified as
exempt. When the probability estimate is clearly different from 0 or 1.0, some of the item ratings
provided for the job have typically been associated with an exempt classification and some with a non-
exempt classification. In such cases, it will likely be necessary to research the appropriate regulations and
case law to make a final and defensible classification.

DEVELOPMENT OF JOB FAMILIES WITH THE PAQ


One of the most important products that can come from the PAQ analyses of various jobs within an
organization is that of providing data for understanding the relationships between and among these jobs.
The nature of these relationships can be described in several ways, but perhaps the most useful procedure
is that of grouping jobs together into basic job families or clusters in terms of their similarities as based on
PAQ data. These job families make it possible to identify the pattern of work functions that comprise each
such job family. In many instances these basic job families actually cut across departmental or divisional
lines within an organization, and often the basic families are obscured by a morass of job titles.
Indications from previous applied research point out that through the analysis of data developed
with the PAQ, order can be achieved in a relatively simple fashion (see, for example, Taylor & Colbert,
1978a, and Taylor & Colbert, 1978b).

The statistical procedure utilized to identify the basic job families is one of profile comparison or pattern
analysis of statistically-derived job dimension scores for individual jobs on each of the 13 overall job
dimensions or the divisional dimensions. These dimension scores represent profiles for individual jobs.
Each such job profile can be statistically compared with every other job profile under study, as well as
with a certain set of standard profiles in the data bank. This procedure sometimes results in the grouping
into job families, on the basis of reasonably similar profiles, of jobs whose titles would not have
suggested such groupings. In turn, such data can be quite useful in consolidating or systematizing job
titles as deemed appropriate. Of course, the converse is also true: jobs that are assumed to be reasonably
similar because of their titles or otherwise might in fact have somewhat different job behavior profiles.
Again, appropriate action can be taken, such as altering job titles and job classifications accordingly.
Other useful purposes can include personnel selection, career pathing and counseling, performance
appraisal, job design, and test validation for job families (see McCormick & Jeanneret, 1988).
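A profile comparison of this general kind can be sketched simply. The manual does not commit to one
similarity statistic, so the example below uses Euclidean distance between invented dimension-score
profiles and an arbitrary grouping threshold:

    # Group jobs whose dimension-score profiles lie close together (toy data).
    import math

    jobs = {
        "clerk":      [12, 30, 8, 22, 5],
        "typist":     [13, 29, 9, 21, 6],
        "lineworker": [30, 6, 25, 4, 28],
    }

    def distance(p, q):
        """Euclidean distance between two job dimension profiles."""
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

    threshold = 10.0
    for a in jobs:
        for b in jobs:
            if a < b and distance(jobs[a], jobs[b]) < threshold:
                print(f"{a} and {b} fall into the same tentative family")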

In connection with the nature of job families, it should be kept in mind that those based on the PAQ
represent clusters of jobs that have somewhat similar profiles of job dimensions that represent generic
human behaviors; this is because the PAQ job elements are of a worker-oriented nature, each one
characterizing some specific type of generic human behavior. Because of this, the workers on jobs within
a given job family are not necessarily transferable in terms of the technological aspects of their jobs.
Thus, although the butcher, the baker, and the candlestick maker might have somewhat similar profiles of
PAQ job dimensions (and thus be in the same job family), this does not necessarily mean that their
specific job skills and knowledges are transferable.

THE PAQ IN PERFORMANCE APPRAISAL


There are at least two ways in which the PAQ can be useful in performance appraisal.

The first of these is that of characterizing the work behaviors for which performance appraisals should be
obtained, thus helping to make such appraisals job-related. In this regard there are some indications that it
may be preferable to provide for such appraisals to be made on the basis of specific aspects of work
behavior rather than on some global concept of overall job performance. Some support for this position
comes from a study by Vineberg and Taylor (1976) in which they arranged for the evaluation of
performance of Naval personnel on the basis of each of the following aspects of job content: (1) overall
job performance; (2) worker-oriented job elements (based on a modified form of the PAQ); (3) tasks (as
included in an appropriate task inventory). In general they found the performance ratings based on PAQ
job elements or tasks to be more satisfactory than those based on overall job performance.

In this type of use of the PAQ, performance appraisal can also be at the level of job dimensions. Such a
procedure was used by Dickinson (1977) in the rating of the performance of sales representatives of a
wholesale food company. In this case the primary job dimensions (those with the highest scores) were
first identified, and then provision was made for the appraisal of individuals on these job dimensions.
Once the major dimensions are identified (as in this study), any of various appraisal procedures can be
used.¹ (In this study by Dickinson, a behaviorally anchored rating scale was developed for each of the
primary job dimensions.)

A second use of the PAQ in the performance appraisal frame of reference is that of grouping jobs into job
families with subsequent provision for developing a separate appraisal form for each family. Such a
procedure was followed by Cornelius, Hakel and Sackett (1979) in their analysis of jobs of 2035 enlisted
personnel in the U.S. Coast Guard. Using a special form of factor analysis they first identified five groups
(i.e., families) of related jobs, and then developed a special rating form for each group that provided for
rating incumbents on the job components that were most important for the group.

Procedures such as those discussed above can contribute to the job relatedness of performance appraisals.













¹ For a discussion of rating methods the reader is referred to any of various texts, such as Industrial and
Organizational Psychology by E. J. McCormick and D. R. Ilgen, Prentice-Hall, 1985, Chapter 6.

THE PAQ AND JOB PRESTIGE


In the United States and certain other countries some jobs are generally perceived as being more
desirable, that is, as having more prestige than other jobs.

Since the PAQ provides for measuring various job characteristics it would be reasonable to hypothesize
that certain such characteristics might be related to the prestige values of jobs and therefore might be used
to predict such values. Such information might then be useful for such purposes as designing career
ladders, career planning, and designing jobs to optimize their desirability (i.e., to enrich
jobs).

To explore this possible relationship, the PAQ data bank was searched for job classifications
(classifications of the Dictionary of Occupational Titles, or DOT) that closely matched prestige scores
reported by Treiman (1977). Thirty-seven job classifications were identified.

PAQ overall job dimension scores were averaged for all jobs within each of the classifications, and a
multiple regression equation was derived to predict the prestige scale values for the jobs. The multiple
correlation (specifically, the adjusted R²) was .91. The job dimensions most strongly associated with high
prestige scale values were: 33 (Having Decision, Communicating, and General Responsibilities) and 36
(Performing Technical/Related Activities). [The equation based on this study is used to derive a Job
Prestige Score (JPS) when PAQ data are scored.]
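Since the actual JPS equation is part of the PAQ scoring system and is not reproduced here, the sketch
below uses invented weights purely to show the form of the computation (a weighted sum of overall job
dimension scores plus an intercept):

    # Apply a linear prestige equation to overall dimension scores.
    # Weights, scores, and the intercept are all invented for illustration.
    def job_prestige_score(dim_scores, weights, intercept):
        return intercept + sum(w * s for w, s in zip(weights, dim_scores))

    weights = [0.8, 0.5]   # e.g., heavy weights on dimensions 33 and 36
    scores = [42.0, 31.0]  # hypothetical scores on those dimensions
    print(job_prestige_score(scores, weights, intercept=5.0))  # 54.1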

Although PAQ-based Job Prestige Scores might be used in job design (to try to enrich jobs), a word of
caution is in order. On the basis of a follow-up simulation analysis to try to re-design jobs, it was found
that such efforts tended not only to enhance the prestige level of jobs but also to increase the estimated
ability requirements of jobs as measured by the tests of the General Aptitude Test Battery (GATB).

COMPARATIVE EVALUATION OF VARIOUS JOB ANALYSIS METHODS

No single job analysis method can serve all purposes for which job analysis data are used. In this regard
there have been a few instances in which reasonably objective comparisons have been made of such
various methods. One comparative evaluation that has been made was carried out by Brumback,
Romashko, Hahn, & Fleishman (1974) for the New York City Department of Personnel. The most
comprehensive study was reported by Levine, Ash, Hall and Sistrunk (1983). In this latter study, 93 job
analysts working in both private and public sector organizations evaluated seven different job analysis
methods. Each method was evaluated in terms of eleven organizational purposes and eleven practical
considerations. The findings are summarized in Table 13, as reported in McCormick and Jeanneret
(1988). These findings are consistent with the earlier results reported by Brumback, et al. (1974). The x
under a given method reported in Table 13 indicates that the method was rated high by the panel of job
analysis experts. As indicated, the PAQ was rated high on 18 of the 22 purposes and practical concerns,
and received more high ratings than any other method. Further, the PAQ met every practical concern, thus
highlighting its usefulness in virtually any organization.
50
TABLE 13

Evaluation of Seven Job Analysis Methods by Experienced Job Analysts

Methods with Highest Effectiveness Ratings
(Identified by x)

PAQ Task
Inventry
CODAP
Functnl
Job
Analysis
Job
Elements
Method
Threshold
Traits
Analysis
Ability
Rqmnts
Scales
Critical
Incident
Technique
Organizational Purpose
Job Description x x
Job Classification x x x
Job Evaluation x x x
Job Design x x
Personnel Requirements Specifications x x x x
Performance Appraisal x x x
Worker Training x x x x
Worker Mobility x x x
Efficiency/Safety x x x x
Manpower/Work Force Planning x x x
Legal/Quasi-Legal Requirements x x x

Practical Concerns
Occupational Versatility/Suitability x x x x x x x
Standardization x x
Respondent/User Acceptability x x x x x
Amount of Job Analyst Training Required x x x x x
Operational x x x
Sample Size x x x
Off the Shelf x
Reliability x x
Cost x x x
Quality of Outcome x x x
Time to Completion x x x
Source: Levine et al. (1983), as adapted by McCormick & Jeanneret (1988)
SUMMARY


The PAQ is the result of many years of research directed toward the development of a structured job analysis questionnaire consisting of job elements that describe or imply the generic types of human behaviors (i.e., the worker-oriented behaviors) involved in work. Research and experience with the PAQ generally support the following conclusions: (1) the job elements of the PAQ represent quite comprehensively the domain of human behaviors involved in work activities; (2) the analysis of jobs with the PAQ response scales typically yields reasonably acceptable to high reliability in the measurement of those job elements; and (3) the job dimensions derived by statistical factor analyses of PAQ job elements represent very stable dimensions of the human behaviors involved in work and thus can serve as the basis for characterizing the structure of human work.

It was postulated that the PAQ job dimensions could serve as the common denominators for comparing the similarities and differences among jobs, and that to the extent that jobs have similar job dimension scores, they also would have similar job and performance requirements. Data based on such job dimensions could then serve a variety of uses in an integrated human resources program (a brief profile-comparison sketch follows the list below):

* Establishing the aptitude requirements of jobs

* Serving as the basis for performance appraisal

* Establishing pay grades for jobs and setting compensation rates

* Developing job families
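
As a concrete illustration of dimension scores serving as common denominators, the Python sketch below compares two hypothetical jobs' dimension-score profiles using two common similarity indices. The scores are invented, and the actual PAQ job-family research (e.g., Taylor & Colbert, 1978a) applied cluster analysis to such scores rather than this exact computation.

    import numpy as np

    # Hypothetical PAQ dimension-score profiles for two jobs.
    job_a = np.array([55.0, 48.0, 62.0, 40.0, 51.0])
    job_b = np.array([53.0, 50.0, 60.0, 43.0, 49.0])

    # Overall closeness of the two profiles, and similarity of their shapes.
    distance = float(np.linalg.norm(job_a - job_b))
    profile_r = float(np.corrcoef(job_a, job_b)[0, 1])

    print(f"Euclidean distance = {distance:.2f}")
    print(f"profile correlation = {profile_r:.2f}")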


The professional recognition of the PAQ and its extensive use in various types of organizations support these expectations. The PAQ was cited as one of the 15 milestones of psychological research in personnel selection and classification (Dunnette and Borman, 1979) and has been referred to in much of the literature published on job analysis since its inception. The extent of its use for operational and research purposes is reflected by the fact that the PAQ has been applied to virtually every type of organization and job, in many countries of the world, and its continuously updated data bank includes analyses of more than 150,000 jobs from over 800 organizations.

REFERENCES


American Compensation Association and the American Society for Personnel Administration. (1988).
Elements of sound base pay administration (2nd ed.). Scottsdale, AZ: Author.

Arvey, R. D., Passino, E. M., & Lounsbury, J. W. (1977). Job analysis results as influenced by sex of incumbent and sex of analyst. Journal of Applied Psychology, 62, 411–416.

Brumback, G. B., Romashko, T., Hahn, C. P., & Fleishman, E. A. (1974). Model procedures for job
analysis, test development, and validation. (Final Report, contract No. IPA 725, American
Institute for Research). Washington DC: American Institute for Research.

Bureau of the Census. (1984). 1980 census of population. Subject report: Earnings by occupation and
education (PC80-2-8B). Washington, DC: U.S. Government Printing Office.

Carter, R. C., & Biersner, R. J. (1987). Job requirements derived from the Position Analysis Questionnaire and validated using military aptitude test scores. Journal of Occupational Psychology, 60, 311–321.

Cornelius, E. T., Hakel, M. D., & Sackett, P. R. (1979). A methodological approach to job classification for performance appraisal purposes. Personnel Psychology, 32, 283–287.

Dickenson, A. M. (1977). Development of a systematic procedure for the evaluation of employees' job performance based on a standard job analysis questionnaire. Master's thesis, Fairleigh Dickinson University, Madison, NJ.

Dunnette, M. D., & Borman, W. C. (1979). Personnel selection and classification systems. In M. Rosenzweig & L. Porter (Eds.), Annual Review of Psychology. Palo Alto, CA: Annual Reviews, Inc.

E. F. Wonderlic and Associates, Inc. (1983). Wonderlic personnel test manual. Northfield, Illinois:
Author.

Gutenberg, R. L., Arvey, R. D., Osburn, H. G., & Jeanneret, P. R. (1983). Moderating effects of decision-making/information-processing job dimensions on test validities. Journal of Applied Psychology, 68(4), 602–608.

Harrell, T. W., & Harrell, M. S. (1945). Army General Classification Test scores for civilian occupations. Educational and Psychological Measurement, 5, 229–239.

Hartigan, J. A., & Wigdor, A. K. (Eds.). (1989). Fairness in employment testing: Validity generalization, minority issues, and the General Aptitude Test Battery. Washington, DC: National Academy Press.

Harvey, R. J., & Hayes, T. L. (1986). Monte Carlo baselines for interrater reliability correlations using the Position Analysis Questionnaire. Personnel Psychology, 39, 345–357.

Jeanneret, P. R. (1972). [PAQ job evaluation studies]. Unpublished raw data.

Jeanneret, P. R. (1980). Equitable job evaluation and classification with the Position Analysis Questionnaire. Compensation Review, 12(1), 32–42.
Jeanneret, P.R. (1987). [Factor Analysis of PAQ data]. Unpublished raw data.

Jeanneret, P. R. (1988a). Computer logic chip production operators. In Gael, S. (Ed.), The job analysis handbook for business, industry and government (pp. 1329–1345). New York: John Wiley & Sons.

Jeanneret, P. R. (1988b). Position requirements for space station personnel and linkages to portable microcomputer performance assessment. Prepared under NASA contract No. NAS9-17326. Houston, Texas: Author.

Lawshe, C. H., Jr., & Balma, M. J. (1966). Principles of personnel testing. New York: McGraw-Hill.

Levine, E. L., Ash, R. A., Hall, H. L., & Sistrunk, F. (1983). Evaluation of job analysis methods by experienced job analysts. Academy of Management Journal, 26, 339–348.

Marquardt, L. D., & McCormick, E. J. (1972). Attribute ratings and profiles of the job elements of the
Position Analysis Questionnaire (PAQ). Department of Psychological Sciences, Purdue University,
Report No. 1, June 1972 (under Contract No. N00014-67-A-0226-0016) (AD-746 476).

McCormick, E. J. (1959). Application of job analysis to indirect validity. Personnel Psychology, 12, 402–413.

McCormick, E. J. (1979). Job analysis: Methods and applications. New York: AMACOM.

McCormick, E. J., & Ilgen, D. R. (1985). Industrial and organizational psychology (8th ed.) Englewood
Cliffs, N.J.: Prentice-Hall.

McCormick, E. J., DeNisi, A. S., & Shaw, J. B. (1979). Use of the Position Analysis Questionnaire for establishing the job component validity of tests. Journal of Applied Psychology, 64, 51–56.

McCormick, E. J., Gordon, G. G., Cunningham, J. W., & Peters, D. L. (1962). The worker activity profile.
Lafayette, Indiana: Occupational Research Center, Purdue University.

McCormick, E. J., & Jeanneret, P. R. (1988). Position Analysis Questionnaire (PAQ). In Gael, S. (Ed.).
The Job Analysis Handbook for Business, Industry and Government. Vol. II. New York: John
Wiley & Sons, Inc. (825842).

McCormick, E. J., Jeanneret, P. R., & Mecham, R. C. (1967). Position Analysis Questionnaire (Form A). West Lafayette, Indiana: Occupational Research Center, Department of Psychology, Purdue University.

McCormick, E. J., Jeanneret, P. R., & Mecham, R. C. (1969). Position Analysis Questionnaire (Form B). West Lafayette, Indiana: Occupational Research Center, Department of Psychology, Purdue University.

McCormick, E. J., Jeanneret, P. R., & Mecham, R. C. (1972). A study of job characteristics and job dimensions as based on the Position Analysis Questionnaire (PAQ). Journal of Applied Psychology, 56, 347–368.

McCormick, E. J., Jeanneret, P. R., & Mecham, R. C. (1989). Position Analysis Questionnaire (Form C). Palo Alto: Consulting Psychologists Press.
McCormick, E. J., Mecham, R. C., & Jeanneret, P. R. (1977). Position Analysis Questionnaire technical manual (System II). Logan, Utah: PAQ Services, Inc.

Mecham, R. C. (Ed.). (1984, September). PAQ Newsletter. (Available from PAQ Services, Inc., 1625 N. 1000 E., Logan, UT).

Mecham, R. C. (1985, August). Comparative effectiveness of situational, generalized and job component
validation methods. Paper presented at the meeting of the American Psychological Association,
Los Angeles.

Mecham, R. C. (1986, August). Job evaluation: Pay equity problem or solution? Paper presented at the
meeting of the American Psychological Association, Washington, DC.

Mecham, R. C. (1987). [PAQ job component validity study with the General Aptitude Test Battery].
Unpublished raw data.

Mecham, R. C. (1988a). [PAQ dimension score reliability study]. Unpublished raw data.

Mecham, R. C. (1988b). [Equivalence of 1977 and 1987 PAQ equations for predicting GATB data].
Unpublished raw data.

Mecham, R. C. (1988c). [Correlation of Differential Aptitude Tests with selected tests of the General
Aptitude Test Battery]. Unpublished raw data.

Mecham, R. C. (1988d). [PAQ job component validity study with the Wonderlic Personnel Test].
Unpublished raw data.

Mecham, R. C. (1988e). [PAQ job component validity study with the Myers-Briggs Type Indicator].
Unpublished raw data.

Mecham, R. C. (1989a). [PAQ inter-analyst item reliability studies]. Unpublished raw data.

Mecham, R. C. (1989b). [Correlation of attribute scores with General Aptitude Test Battery, Wonderlic
and Myers-Briggs Type Indicator data]. Unpublished raw data.

Mecham, R. C., & McCormick, E. J. (1969a) The use in job evaluation of job elements and job
dimensions based on the Position Analysis Questionnaire. Occupational Research Center, Purdue
University, Report No. 3, (under contract Nonr-1100(28)) (AD-691 734).

Mecham, R. C., & McCormick, E. J. (1969b). The use of data based on the Position Analysis
Questionnaire in developing synthetically-derived attribute requirements of jobs. Occupational
Research Center, Purdue University, Report No. 4, (under contract Nonr-1100(28)) (AD-691 735).

Mecham, R. C., McCormick, E. J., & Jeanneret, P. R. (1977). Users manual for the Position Analysis Questionnaire (System II). Logan, Utah: PAQ Services, Inc.

Mecham, R. C., McCormick, E. J., & Jeanneret, P. R. (1984). Job component validity based on the PAQ: Its conceptual and empirical basis as contrasted with that of situational validation and validity generalization. Unpublished manuscript.
Mecham, R. C., & Wilcox, A. C. (1982). Predicting exempt or non-exempt status under the Fair Labor
Standards Act using the Position Analysis Questionnaire (PAQ): A discriminant analysis
approach. Unpublished manuscript.

Myers, I. B., & McCaulley, M. H. (1985). Manual: A guide to the development and use of the Myers-
Briggs Type Indicator. Palo Alto: Consulting Psychologists Press.

Nunnally, J. C. (1978). Psychometric theory. (2nd ed.). New York: McGraw-Hill.

Office of Women and Work, Michigan Department of Labor, State of Michigan. (1980). A comparable worth study of the state of Michigan job classifications. Lansing, MI: Author.

Palmer, G. J. (1958). An analysis of job activities: Information-receiving, mental, and work performance. Unpublished doctoral dissertation, Purdue University.

Robinson, D. D., Wahlstrom, O. W., & Mecham, R. C. (1974). Comparison of job evaluation methods: A policy-capturing approach using the Position Analysis Questionnaire. Journal of Applied Psychology, 59, 633–637.

Schaie, K. W. (1958). Occupational level and the primary mental abilities. Journal of Educational Psychology, 49, 299–303.

Schmidt, F. L., & Hunter, J. E. (1977). Development of a general solution to the problem of validity generalization. Journal of Applied Psychology, 62, 529–540.

Shaw, J. B., & McCormick, E. J. (1976). The prediction of job ability requirements using attribute data
based upon the Position Analysis Questionnaire (PAQ). Department of Psychological Sciences,
Purdue University, Report No. 1, October 1976 (under Contract No. N00014-76-C-0274) (AD-
A035 623/8GI).

Society for Industrial and Organizational Psychology, Inc. (1987). Principles for the validation and use of
personnel selection procedures (3rd ed.). College Park, MD: Author.

Stewart, N. (1947). AGCT scores of Army personnel grouped by occupation. Occupations, 26, 5–41.

Strong, E. K., Jr. (1943). Vocational interests of men and women. Stanford: Stanford University Press.

Taylor, L. R., & Colbert, G. A. (1978a). Empirically derived job families as a foundation for the study of
behavior in organizations: Study I. The construction of the job families based on the component
and overall dimensions of the PAQ. Personnel Psychology. 31, 325340.

Taylor, L. R., & Colbert, G. A. (1978b). Empirically derived job families as a foundation for the study of
behavior in organizations: Study II. The construction of job families based on company-specific
PAQ job dimensions. Personnel Psychology. 31, 341353.

Thorndike, R. L., & Hagen, E. (1959). Ten thousand careers. New York: Wiley.

Treiman, D. J. (1977). Occupational prestige in comparative perspective. New York: Academic Press.

Tyler, L. E. (1965). The psychology of human differences (3rd ed.). New York: Appleton-Century-Crofts.
U.S. Department of Labor. (1970). Manual for the USTES General Aptitude Test Battery (Section III:
Development). Washington, DC: U.S. Government Printing Office.

U.S. Department of Labor. (1977). Dictionary of Occupational Titles (4th ed.), Washington, DC: U.S.
Government Printing Office.

U.S. Department of Labor. (1983). Test validation for 12,000 jobs: An application of job classification
and validity generalization analysis to the General Aptitude Test Battery (USES Test Research
Report No. 45). Washington, D.C.: Division of Counseling and Test Development.

Vineberg, R., & Taylor, E. N. (1976). Performance of men in different mental categories: 1. Development
of worker-oriented and job-oriented rating instruments in Navy jobs. Alexandria, VA: Human
Resources Research Organization.

Wonderlic, C. F., Long, E. R., & Shaeffer, J. M. (1984). Employee turnover: A summary research report. Northfield, IL: E. F. Wonderlic Personnel Test, Inc.

Yerkes, R. M. (Ed.). (1921). Psychological examining in the U.S. Army. Memoirs of the National Academy of Sciences, 15.

APPENDIX



This appendix includes sources of the tests for which PAQ predictions are made, together with the scores on the various tests used in the McCormick et al. (1979) study as converted to the standard-score metric. These scores are based on a mean of 100 and a standard deviation of 20, the same standard-score system used with the GATB tests. The various tests used and their publishers (or other contacts) are listed below, following a brief illustration of the standard-score conversion.
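
The conversion itself is a simple linear transformation of a raw score's deviation from its norm-group mean onto the mean-100, SD-20 metric. The following Python sketch illustrates the arithmetic; the norm-group mean and standard deviation shown are hypothetical, and Table 14 gives the actual raw-to-standard conversions used.

    def to_standard_score(raw, norm_mean, norm_sd):
        # Linear conversion to the standard-score metric described above:
        # mean 100, standard deviation 20 (the GATB metric).
        return 100 + 20 * (raw - norm_mean) / norm_sd

    # Hypothetical norm-group values, for illustration only.
    print(round(to_standard_score(raw=27, norm_mean=21.0, norm_sd=7.0)))  # 117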



Test of Learning Ability (Learn. A.)
Arithmetic Fundamentals Test (Arith. F.)

Richardson, Bellows, Henry & Co.
2700 South Quincy Street, Suite 310
Arlington, VA 22206
(703) 998-4800
General Aptitude Test Battery
(Available to Canadian users)

Institut de Recherches Psychologiques
334 ouest, rue Fleury
Montreal, Quebec H3L 1S9
Canada
Wonderlic Personnel Tests

Wonderlic Personnel Tests
1509 North Milwaukee Avenue
Libertyville, IL 60048-1387
(847) 680-4900
Myers-Briggs Type Indicator

Consulting Psychologists Press
577 College Avenue
Palo Alto, CA 94306
1-800-624-1765
Employee Aptitude Survey
- Verbal (EAS-V)
- Numerical (EAS-N)
- Spatial (EAS-S)
- Visual Speed and Accuracy (EAS-VSA)

Psychological Services, Inc.
100 West Broadway, Suite 1100
Glendale, California 91210
1-800-367-1565 or (818) 244-0033
Adaptability Test (Adapt)
Arithmetic Index (Arith I.)
Flanagan Industrial Tests
- Arithmetic (FIT-AR)
- Assembly (FIT-A)

Sumas Testing Co.
603 Cherry Street
Sumas, WA 98295
Toll Free: 1-888-818-8378
Personnel Tests for Industry - Verbal (PTI-V) - Numerical (PTI-N)
Short Employment Tests - Verbal (SET-V) - Numerical (SET-N) - Clerical (SET-C)
Revised Minnesota Paper Form Board (MPFB)
Minnesota Clerical Test (Names only) (Minn.)

The Psychological Corporation
555 Academic Court
San Antonio, TX 78204-2498
(210) 299-1061

TABLE 14

Conversion of Scores of Tests Used in Study to Standard Scores*


Standard scores appear in the leftmost column of each row; the remaining columns (numbered 1-18, left to right) give the corresponding raw scores on the tests:

General Intelligence: (1) Wonderlic, (2) Adapt., (3) Learn. A.
Verbal Aptitude: (4) PTI-V, (5) SET-V, (6) EAS-V
Numerical Aptitude: (7) PTI-N, (8) SET-N, (9) EAS-N, (10) Arith. I., (11) FIT-AR, (12) Arith. F.
Spatial Aptitude: (13) MPFB, (14) EAS-S, (15) FIT-A
Clerical Perception: (16) EAS-VSA, (17) SET-C, (18) Minn.

130 34 27 50 42 47 24 26 54 55 42 59 50 111 42 165
129 26 53 45 41 110 41 162
128 33 49 41 52 44 54 41 15 109 159
127 25 23 25 51 40 58 49 108 40 155
126 32 24 48 40 46 50 43 53 107 151
125 49 42 39 57 48 106 150
124 31 45 22 24 52 105 39 149
123 23 47 48 41 38 40 47 14 104 147
122 39 44 47 40 56 103 38 146
121 30 46 21 23 46 51 46 102 37 144
120 22 38 43 45 39 37 101 143
119 45 37 44 38 50 55 45 100 36 141
118 29 42 20 22 43 36 99 140
117 21 41 37 49 39 44 13 98 138
116 28 44 36 42 36 38 54 97 35 137
115 35 40 19 21 48 35 37 53 43 96 136
114 43 39 41 35 95 34 134
113 27 34 38 20 40 34 47 34 52 42 94 133
112 42 33 37 18 39 36 51 41 93 33 131
111 26 20 36 19 38 33 46 33 12 92 129
110 41 32 35 37 32 40 92 32 127
109 31 34 17 18 36 45 50 91 125
108 25 19 33 31 32 34,35 39 90 123
107 40 30 32 16 35 30 44 49 89 31 122
106 24 31 17 34 31 38 11 88 120
105 18 38 29 30 32,33 29 43 48 87 30 119
104 28 29 15 16 28 30 33 37 86 118
103 23 38 28 42 85 116
102 27 27 31 27 47 36 84 29 115
101 22 26 26 14 15 30 26 29 46 10 83 113
100 17 37 25 25 29 41 32 35 82 28 112
99 24 24 28 25 40 28 45 81 111
98 21 16 36 23 23 13 14 27 24 34 80 27 109
97 22 27 79 26 108
96 20 35 22 13 23 39 31 44 33 78 106
95 15 21 21 12 26 22 9 77 104

TABLE 14 (continued)

Conversion of Scores of Tests Used in Study to Standard Scores*


Standard scores appear in the leftmost column of each row; the remaining columns (numbered 1-18, left to right) give the corresponding raw scores on the tests:

General Intelligence: (1) Wonderlic, (2) Adapt., (3) Learn. A.
Verbal Aptitude: (4) PTI-V, (5) SET-V, (6) EAS-V
Numerical Aptitude: (7) PTI-N, (8) SET-N, (9) EAS-N, (10) Arith. I., (11) FIT-AR, (12) Arith. F.
Spatial Aptitude: (13) MPFB, (14) EAS-S, (15) FIT-A
Clerical Perception: (16) EAS-VSA, (17) SET-C, (18) Minn.

94 34 20 12 25 38 26 32 76 103
93 19 19 21 75 101
92 14 18 20 11 24 20 37 25 30,29 43 31 74 25 99
91 18 33 17 19 11 23 42 30 73 98
90 18 19 36 24 41 8 72 24 97
89 13 32 16 17 10 22 18 29 71 96
88 17 10 35 28 70 95
87 31 15 16 21 17 23 40 28 69 23 94
86 16 12 14 9 20 16 34 68 92
85 30 13 15 9 22 27 39 27 67 90
84 12 14 19 15 33 26 7 66 22 88
83 15 11 8 8 14 21 25 26 65 87
82 29 11 18 38 64 86
81 14 10 13 17 13 32 37 25 63 84
80 10 28 9 7 12 20 62 21 83
79 12 7 16 11 31 36 24 6 61 82
78 13 27 8 60 80
77 9 11 6 15 10 30 19 24 35 23 59 20 79
76 12 26 6 58 78
75 26 7 10 29 18 34 22 57 19 77
74 8 5 5 14 9 33 5 56 76
73 11 25 13 8 28 17 21 55 74
72 6 9 4 32 54 18 72
71 10 7 24 12 7 27 16 23 31 20 53 69
70 4 6 52 67
69 23 5 3 11 26 30 19 51 17 66
68 9 8 10 15 4 50 65
67 6 3 5 25 22 29 18 49 64
66 8 22 4 2 14 21 28 48 16 63
65 7 9 4 24 17 47 62

INDEX


Adaptability Test (57)
American Compensation Association (41)
American Society for Personnel
Administration (41)
Answer sheet (1)
Aptitude (20), (21), (22), (23), (32), (33), (51)
Aptitude band (32)
Aptitude requirements (22), (51)
Aptitude tests (29)
Arithmetic Fundamentals Test (57)
Arithmetic Index (57)
Armed Services Vocational Aptitude Battery
(ASVAB) (34)
Arvey, R.D. (31), (40), (52)
Ash, R.A. (49), (53)
Attribute(s) (17), (18), (19), (20), (25), (30),
(31), (34)
Attribute band (19), (29)
Attribute data (31), (33)
Attribute percentiles (31), (33)
Attribute ratings (30), (31)
Attribute relevance ratings (30)
Attribute requirements (18), (19), (32),
(34)
Attribute scores (31), (34)
Balma, M.J. (20), (53)
Biersner, R.J. (34), (52)
Borman, W.C. (51), (52)
Brumback, G.B. (49), (52)
Bureau of the Census (41), (52)
Career
Career ladder (48)
Career pathing and counseling (46)
Career planning (48)
Carter, R.C. (34), (52)
Checklist of Work Activities (4)
Cognitive tests (18), (22), (24), (28), (29), (33)
Colbert, G.A. (5), (6), (46), (55)
Commercially available tests (25), (26), (57)
Common denominators (51)
Comparable worth (38), (40)
Comparative evaluation (49)
Compensation rates (38), (40), (51)
Complexity (18), (20), (28)
Cornelius, E.T. (47), (52)
Cunningham, J.W. (4), (53)
Cutoff scores (23), (32)
Data bank (21), (23), (43), (45), (47)
Decision-making (28)
DeNisi, A.S. (25), (53)
Development of the PAQ (1), (4), (11), (17), (18),
(34), (37), (43), (47)
Dexterity tests (19), (22), (27), (28), (33)
Dickenson, A.M. (44), (48)
Dictionary of Occupational Titles (DOT) (21),
(31), (45)
Differential Aptitude Tests (DAT) (24), (31)
Division of PAQ (2)
Does Not Apply (2), (8), (10), (11)
Dunnette, M.D. (47), (48)
Employee Aptitude Survey - Spatial (53)
Employee Aptitude Survey - Verbal (53)
Employee Aptitude Survey - Visual Speed and
Accuracy (53)
Employee Aptitude Survey - Numerical (53)
Enrich jobs (45)
Enter-Act Users Manual (1)
Equal Pay Act (39)
Exempt (42)
Factor analysis (11), (31), (44), (47)
Fair Labor Standards Act (42)
Female dominated jobs (38)
Female equation (38)
Flanagan Industrial Tests - Arithmetic (53)
Flanagan Industrial Tests - Assembly (53)
Fleishman, E.A. (46), (48)
General Aptitude Test Battery (GATB) (19-24),
(26-31), (45)
Going rates (35), (38), (41)
Gordon, G.G. (4), (49)
Gutenberg, R.L. (28), (48)
Hagen, E. (19), (52)
Hahn, C.P. (46), (48)
Hakel, M.D. (44), (48)


Hall, H.L. (46), (49)
Harrell, T.W. (19), (48)
Harrell, M.S. (19), (48)
Hartigan, J.A. (18), (48)
Harvey, R. J. (8), (48)
Hayes, T. L. (8), (48)
Hunter, J.E. (18), (20), (27), (28)
Ilgen, D.R. (44), (49)
Information Input (2)
Information-processing (31)
Integrated human resources program (47)
Jeanneret, P.R. (4), (11), (19), (20), (28), (30),
(34), (36), (38), (39), (43), (46), (48), (49),
(50), (51)
Job Analysis Manual (1)
Job analysis methods (2), (46)
Job classification (36), (43), (45)
Job components (17-19), (35), (38), (44)
Job complexity (18)
Job Component Validity (JCV) (17-19), (21-
24), (26-28), (30), (31), (34)
Job context (2)
Job design (43), (45)
Job dimension scores (5), (13-15), (19), (22),
(24-26), (35-39), (43), (45), (47)
Job elements (2), (3), (5), (8), (11), (13), (14),
(18), (30), (43), (44), (47)
Job evaluation (35-39), (41)
Job family (43)
Job performance (17), (18), (19), (26), (28),
(29), (44) (47)
Job Prestige Scores (29) (45)
Job profile (43)
Lawshe, C.H. (17), (49)
Levine, E.L. (46), (49)
Long, E.R. (29), (52)
Lounsbury, J.W. (37), (48)
Male dominated jobs (38)
Male and female equations (38)
Manual dexterity (19), (27), (28), (33)
Marquardt, L.D. (30), (49)
McCaulley, M.H. (19), (51)
McCormick, E.J. (2), (4), (18), (19), (20), (23),
(30), (31), (35), (43), (44), (46), (49), (50),
(51)
Mecham, R.C. (4), (6), (8), (9), (15), (19), (20),
(21), (24-27), (31), (35), (37), (38), (42),
(49), (50), (51)
Mental Processes (2)
Minnesota Clerical Test (53)
Myers, I.B. (19), (25), (51)
Myers-Briggs Type Indicator (MBTI) (25), (26),
(30), (31), (34)
Natural selection (20)
Non-exempt (42)
Nunnally, J.C. (7), (10), (15)
Office of Women and Work (38)
On-line Users Manual (1)
Osburn, H.G. (28), (48)
Palmer, G.J. (4), (51)
PAQ computer outputs (28)
PAQ dimensions (28), (29), (36)
PAQ Users Manual (10), (1)
Passino, E.M. (37), (48)
Pattern analysis (43)
Pay differential (38)
Pay equity (35)
Pay grades (47)
Pay rate (35)
Perceptual tests (18), (22), (28), (33)
Performance
Performance appraisal (26), (43), (44), (47)
Performance requirements (47)
Personality
Personality attributes (30)
Personality characteristics (17)
Personality indices (26)
Personality requirements (26)
Personality variables (25), (34)
Personnel
Personnel requirements (17), (18), (30)
Personnel selection (17), (18), (20), (26),
(28), (43), (47)
Personnel Tests for Industry-Verbal (53)
Personnel Tests for Industry-Numerical
(53)
Peters, D.L. (4), (49)
Principal components analysis (11), (12)
Profile comparison (43)
Psychomotor test (18), (28), (33)
Relationships with Other Persons (5)
Reliability (5), (8), (9), (10), (14), (15), (29), (38),
(47)
Reliability coefficients (6), (8), (9), (10),
(15)
Reliability (influences on) (8)
Reliability (inter-analyst) (5), (7), (14),
(15)
Reliability of job dimension scores (14),
(15)

Reliability (rate-rerate) (14), (15)
Reliability report (10)
Response Scale (2), (47)
Response variability (9)
Revised Minnesota Paper Form Board (53)
Robinson, D.D. (37), (51)
Romashko, T. (46), (48)
Sackett, P.R. (44), (48)
Salary (35), (36), (37)
Schaie, K.W. (19), (51)
Schmidt, F.L. (18), (51)
Scoring (3) (29), (42)
Shaeffer, J.M. (29), (52)
Shaw, J.B. (23), (30), (31), (49), (51)
Short Employment Tests - Verbal (53)
Short Employment Tests - Numerical (53)
Short Employment Tests - Clerical (53)
Sistrunk, F. (46), (49)
Specific Aptitude Test Battery (SATB) (20),
(21), (22), (31)
Standard error of measurement (5), (7), (8),
(9), (10), (15)
Stewart, N. (19), (51)
Strong, E.K. (19), (51)
Task-oriented (2)
Taylor, L.R. (33), (43), (46), (52)
Taylor, E.N. (36), (52)
Test validity (17), (18)
Test of Learning Ability (53)
Thorndike, R.L. (19), (52)
Treiman, D.J. (45), (52)
True score (10)
Tyler, L.E. (19), (52)
U.S. Fair Labor Standards Act (3), (42)
Unique equations (41)
United States Employment Service (USES)
(19), (20), (21), (22), (23), (26), (28)
Unwanted turnover (29)
Validity
Concurrent validity (17)
Construct validity (17)
Content validity (17)
Criterion-Related validity (17), (26)
Job Component Validity (JCV) (17-
19), (21-24), (26-28), (30), (31),
(33), (34)
Moderate validities (17)
Predictive validity (17)
Situational Validity (SV) (26-28)
Synthetic validity (17)
Validity coefficient (20), (22), (23),
(27), (28), (29), (33)
Validity Generalization (VG) (18), (20),
(27), (28)
Validity (test) (17)
Vineberg, R. (44), (52)
Wage (35), (36)
Wahlstrom, O.W. (37), (51)
Wigdor, A.K. (18), (48)
Wilcox, A.C. (42), (51)
Wonderlic Personnel Test (WPT) (24), (25), (30),
(31)
Wonderlic, E.F. (19), (30), (48)
Wonderlic, C.F. (29)
Work
Work activities (2), (17), (47)
Work behavior (9), (11), (41), (44)
Work output (2)
Workbook (1)
Worker Activity Profile (4)
Worker-oriented (2), (18), (37), (43), (44), (47)
Yerkes, R.M. (19), (52)
