Criminal Justice and Behavior, DOI: 10.1177/0093854804268746
PERSONALITY TESTING IN
LAW ENFORCEMENT
EMPLOYMENT SETTINGS
A Meta-Analytic Review
JORGE G. VARELA
MARCUS T. BOCCACCINI
FORREST SCOGIN
JAMIE STUMP
ALICIA CAPUTO
The University of Alabama
Meta-analysis was used to (a) assess the overall validity of personality measures as predictors of
law enforcement officer job performance, (b) investigate the moderating effects of study design
characteristics on this relation, and (c) compare effects for commonly used instruments in this
setting. Results revealed a modest but statistically significant relation between personality test
scores and officer performance. Prediction was strongest for the California Psychological Inventory and weaker for the Minnesota Multiphasic Personality Inventory and Inwald Personality
Inventory. Effect sizes were larger for studies examining current job performance, as opposed to
future job performance. Implications for using personality tests in the law enforcement officer
hiring process are discussed, and recommendations for future research are provided.
Keywords: meta-analysis; police; law enforcement; personality assessment
such as supervisor and peer ratings of performance. Despite the growing number of studies in this area, there is a lack of quantitative integration in this literature. Because researchers in this field have used
different personality assessment instruments, different outcome measures, and different study designs, there is no clear consensus about
what can be predicted from law enforcement officers' personality test
scores. The present study uses meta-analysis to provide a clearer
picture of the validity of personality measures in law enforcement
settings.
VALIDITY OF PERSONALITY MEASURES IN EMPLOYMENT SETTINGS
Several published meta-analyses have examined the validity of personality testing in employment settings. Findings from these meta-analyses are reviewed here for two purposes. First, the research methodologies used in these meta-analyses provide a basis for the design of
this study. Second, three of the meta-analyses included some data
from law enforcement settings, and their results provide an estimate of
the effect sizes that might be expected from a meta-analysis based on a
more comprehensive review of the existing law enforcement officer
performance literature.
Schmitt, Gooding, Noe, and Kirsch (1984) conducted a meta-analysis of 99 employee selection studies published in the Journal of
Applied Psychology or Personnel Psychology between 1964 and
1982.1 These researchers examined the effectiveness of several different types of predictors (personality measures, aptitude assessments,
physical ability measures) across several occupational groups (professional, managerial, clerical, sales, skilled, and unskilled). Performance criteria included both subjective (performance ratings) and
objective (turnover, achievement/grades, status changes, and wages)
measures. When only personality measures were used as predictors, the overall mean correlation was .149. A mean correlation of .206
was observed for subjective performance criteria, and mean correlations ranged from .121 (turnover) to .152 (achievement/grades) for
objective performance criteria. Schmitt et al. also found that effect
sizes varied depending on study design characteristics. Studies using
a concurrent design (data from incumbents) or purely predictive
design (recruit data not used for hiring decisions) produced larger
tion of personality test data (sometimes accompanied by clinical interview data) were used to predict performance. Under these circumstances (k = 10), she found similar effect sizes for the MMPI (.46) and
the CPI (.32). Second, she examined the predictive validity of individual test scales. Based on the pattern of her results, she concluded that
prediction was stronger for CPI scales (17 of 22 mean validity coefficients were significantly different from 0) compared to MMPI scales
(1 of 13 mean validity coefficients was significantly different from 0).
With respect to performance criteria, subjective and objective criteria
were predicted equally well (.20 and .25, respectively), as were training criteria and actual job performance criteria (.19 and .27,
respectively).
Despite the existence of the O'Brien (1996) meta-analysis, there
are several reasons why further integration of this literature is needed.
First, O'Brien included only published findings in her meta-analysis.
As a result, it is likely that her effect sizes are inflated, because journals tend to publish studies with significant findings. Indeed, Tett et al.
(1994) found that correlations between personality measures and job
performance indices were significantly larger in published studies
compared to unpublished studies. Second, O'Brien's comparison of personality tests did not include the Inwald Personality Inventory (IPI; Inwald, Knatz, & Shusman, 1982), a measure specifically
designed for screening law enforcement applicants. According to the
publisher of the IPI, Hilson Research, Inc. (2000-2001), the instrument is used by more than 30% of the nation's state police departments. Finally, O'Brien's meta-analysis has not been published and,
to our knowledge, has not been subjected to peer review.
PURPOSE OF THE CURRENT META-ANALYSIS
The current meta-analysis computes effects from a substantially larger number of law enforcement samples (both published and unpublished) than previous meta-analyses. We report effects for the overall validity of personality tests
in this setting and examine the impact of several moderator variables,
including predictor type (MMPI, CPI, IPI), study design characteristics, sample characteristics, and publication status (published vs.
unpublished findings).
METHOD
CASE SELECTION
Data for this study were retrieved from scholarly journals, books,
conference presentations, dissertations, theses, and unpublished reports
from practitioners and test publishers. Several methods were used to
identify relevant studies. First, searches of PsycInfo, Dissertation
Abstracts International, and the National Criminal Justice Research
Service were conducted to identify all references to personality
assessment in law enforcement settings. Second, all volumes of the
following journals were hand searched: Journal of Applied Psychology, Personnel Psychology, Professional Psychology: Research and
Practice, Journal of Police Science and Administration, Law and
Human Behavior, Behavioral Sciences and the Law, Criminal Justice
and Behavior, Journal of Police and Criminal Psychology, and Journal of Personality Assessment. Third, reference lists of already identified sources were reviewed. Fourth, a request for data was placed in
the American Psychological Association Division 18 (Psychologists
in Public Service) newsletter, and requests for data were submitted to
the Internet discussion forums of the International Association of
Chiefs of Police–Psychological Services Section and Division 41 of the American Psychological Association (American Psychology–Law Society). Fifth, leading researchers in the field were contacted,
including Robin Inwald (Hilson Research), Robert Hogan (Hogan
Assessment Systems), George Hargrave, Dierdre Hiatt, Curt Bartol,
Larry Beutler, Stanley Azen, and Mark Axelberd. Using these methods, approximately 175 studies containing personality data were
identified.
Coders were used to extract relevant data and study design information from the identified studies. Coders were asked to identify data
that were appropriate for inclusion in the meta-analysis and to report
the number and type or types of statistical analyses used in each study.
Data were coded as being published if they were obtained from journal articles, books or book chapters, test manuals, or government
reports, and unpublished if they were obtained from theses, dissertations, conference presentations, or researchers. Officer characteristics, such as the average age and educational level of the sample, were
coded. Job performance predictors were identified, including the
names of personality tests and scales that were used. Measures of officer performance were identified and coded as either objective (reprimands, complaints, suspensions, days of work missed) or subjective
(supervisor or peer ratings). Performance criteria were also identified
as reflecting either training performance or on-the-job performance.
The amount of time that elapsed between personality testing and collection of performance data (measurement interval) was recorded.
Finally, the design of each study was classified on each of the following characteristics: (a) Were the personality tests administered and
used as part of the hiring process (screening) or as part of a research
project only (analogue)? (b) Were the personality tests used to predict
future job performance (predictive design) or were they administered
to incumbent officers to examine the relation between personality
dimensions and current performance (concurrent design)? (c) Did the
authors select tests and/or test subscales based on a priori hypotheses
about the relations between specific personality measures and performance indices (confirmatory design), or did they select personality
measures without a clear rationale for expecting significant results
(exploratory design)?
The quality of each study was rated using a modified version of the
Instrument for Evaluating Experimental Research Reports (IEERR)
(Suydam, 1968). The IEERR was developed for rating the quality of
studies comparing the effectiveness of different educational programs. For this study, modifications to the IEERR were made to
accommodate the nature of the literature under review.3 Items on the
IEERR are summed to provide a single study quality rating for each
study.
Coders initially received 2 hours of training and instruction concerning the proper use of the coding manual and data collection forms.
After the initial training was completed, each of the five coders was
given a practice set of studies to code. A second training session was
then conducted to clarify questions about the coding manual and data
collection forms. After the second training session, the coders collected the data used in the meta-analysis.
CODER AGREEMENT
particular test scale, the largest estimate was used because it led to the
smallest amount of correction. Test scales for which no reliability data
were available were left uncorrected.
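In the Hunter-Schmidt framework, the unreliability correction divides the observed correlation by the square root of the scale's reliability, which is why the largest available reliability estimate yields the smallest, most conservative correction. A minimal sketch in Python (the function name and numbers are illustrative, not taken from the studies reviewed):

```python
import math

def correct_for_unreliability(r_obs: float, reliability: float) -> float:
    """Disattenuate an observed validity coefficient for scale
    unreliability: r_corrected = r_obs / sqrt(r_xx)."""
    return r_obs / math.sqrt(reliability)

# The larger the reliability estimate, the smaller the upward correction:
conservative = correct_for_unreliability(0.118, 0.90)
aggressive = correct_for_unreliability(0.118, 0.70)
```

Scales with no reliability estimate would simply be left uncorrected, as described above.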
The second correction made to observed correlations was for attenuation or enhancement due to range departure of predictor variables.
When the standard deviation of a sample differs from the population
standard deviation, the observed correlation is distorted. When the
standard deviation of the sample is smaller than the population standard deviation, the observed correlation is attenuated, and, conversely, when the sample standard deviation is larger than the population standard deviation, the observed correlation is inflated. Range
restriction is common in personnel selection settings because the sample under investigation has typically passed an initial screening and
represents a small proportion of all applicants. In the current meta-analysis, observed correlations were corrected for range departure
using the procedures recommended by Hunter and Schmidt (1994).
The crucial determinant of the magnitude of this correction is the ratio
of the sample standard deviation to the population standard deviation.
Correction for range departure could only be completed for studies
that reported sample standard deviations. Estimates of population
standard deviation were found in test manuals, handbooks, and published research.
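The range-departure correction in the Hunter-Schmidt procedures is Thorndike's Case II formula, driven entirely by the ratio of the population to the sample standard deviation. A sketch (the standard deviations below are illustrative):

```python
import math

def correct_for_range_departure(r_obs: float, sd_sample: float, sd_pop: float) -> float:
    """Thorndike Case II correction. When u = sd_pop / sd_sample > 1
    (a restricted sample), the corrected correlation is larger than
    the observed one; when u < 1 (an enhanced sample), it is smaller."""
    u = sd_pop / sd_sample
    return (u * r_obs) / math.sqrt((u**2 - 1) * r_obs**2 + 1)

# Screened incumbents with half the applicant pool's spread:
corrected = correct_for_range_departure(0.12, sd_sample=5.0, sd_pop=10.0)  # ~0.235
```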
The last correction made to observed single predictor correlations
was for attenuation due to dichotomization of performance variables.
The correction formula recommended by Hunter and Schmidt (1994)
was applied to studies reporting dichotomous variables.
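The Hunter-Schmidt dichotomization correction divides the observed correlation by the attenuation factor phi(c)/sqrt(pq), where c is the normal cut point that splits the criterion into proportions p and q. A dependency-free sketch (the bisection inverse-CDF helper is our own):

```python
import math

def normal_pdf(z: float) -> float:
    return math.exp(-z * z / 2) / math.sqrt(2 * math.pi)

def normal_ppf(p: float) -> float:
    # Inverse standard normal CDF by bisection (keeps the sketch dependency-free).
    lo, hi = -8.0, 8.0
    for _ in range(80):
        mid = (lo + hi) / 2
        if 0.5 * (1 + math.erf(mid / math.sqrt(2))) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def correct_for_dichotomization(r_obs: float, p: float) -> float:
    """Divide by the attenuation factor phi(c)/sqrt(p*q). An even split
    attenuates r by about 0.798, so the correction multiplies by ~1.253;
    more extreme splits attenuate more and are corrected more."""
    q = 1.0 - p
    attenuation = normal_pdf(normal_ppf(p)) / math.sqrt(p * q)
    return r_obs / attenuation

corrected = correct_for_dichotomization(0.10, p=0.5)  # ~0.125
```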
According to the Hunter and Schmidt (1994) model of meta-analysis,
each of the three attenuating or enhancing factors is independent.
Thus, observed correlations were corrected for more than one factor
when applicable using the procedures recommended by Hunter and
Schmidt.
WITHIN-STUDY AGGREGATION
influenced by studies reporting numerous findings. Specifically, validity coefficients were aggregated according to categorical moderator
variables of interest. This aggregation procedure was used so that
the influence of moderator variables could be examined in the meta-analysis. If we had simply computed a single mean across all of the
findings in each study, the influence of moderator variables would
have been lost in the aggregation. For example, if a study used both
turnover and supervisor ratings as performance criteria, aggregating
across these criteria would make it impossible to examine differences
in the prediction of subjective and objective performance criteria.
The effects of the following categorical moderator variables were
examined (see Data Coding section above for variable descriptions):
predictor type (MMPI vs. CPI vs. IPI), nature of outcome measure
(subjective vs. objective, training vs. incumbent), study design (confirmatory vs. exploratory, concurrent vs. predictive, screening vs. analogue), and data source (published vs. unpublished).
Data aggregations within studies were made using sample-weighted
means so that coefficients based on larger samples were accorded
greater weight in the aggregation. Sample-weighted means were also
used to compute the average sampling error estimates (Hunter &
Schmidt, 1990).
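The sample-weighted aggregation and the accompanying Hunter-Schmidt sampling error estimate can be sketched as follows (the three input samples are hypothetical):

```python
def weighted_meta(rs, ns):
    """Sample-weighted mean correlation, observed variance, and
    sampling error variance (1 - r_bar^2)^2 / (N_bar - 1), following
    Hunter and Schmidt (1990)."""
    total_n = sum(ns)
    r_bar = sum(n * r for r, n in zip(rs, ns)) / total_n
    obs_var = sum(n * (r - r_bar) ** 2 for r, n in zip(rs, ns)) / total_n
    n_bar = total_n / len(rs)
    samp_err_var = (1 - r_bar**2) ** 2 / (n_bar - 1)
    return r_bar, obs_var, samp_err_var

# The sample of 300 pulls the weighted mean toward its r of .20.
r_bar, obs_var, se_var = weighted_meta([0.10, 0.20, 0.15], [100, 300, 100])
```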
TESTING THE SITUATIONAL SPECIFICITY HYPOTHESIS
The variance of a population correlation is used to test the situational specificity hypothesis. This test is conducted to determine if the
observed mean correlation generalizes across samples. If the observed
mean correlation generalizes across samples, it is not situationally
specific. A common way of evaluating the situational specificity
hypothesis is to use the three-fourths rule. This rule states that when
less than 75% of the population correlation variance is accounted for
by sampling error variance, the role of moderator variables should be
examined (Hunter & Schmidt, 1990). In these situations, the variance
not accounted for by sampling error may be attributable to systematic
sources, such as study design characteristics or differences in the
characteristics of the samples being studied.
The amount of variance explained by sampling error is calculated by dividing the sampling error variance of the population correlation by the observed variance of the correlations.
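Expressed as code, the three-fourths check reduces to a single ratio (the variance values below are illustrative):

```python
def situational_specificity_check(obs_var: float, samp_err_var: float,
                                  threshold: float = 0.75):
    """Return the proportion of observed variance explained by sampling
    error, and whether the 75% rule calls for a moderator search."""
    proportion = samp_err_var / obs_var
    return proportion, proportion < threshold

# About two-thirds explained, below .75, so moderators should be examined.
proportion, search_moderators = situational_specificity_check(obs_var=0.009,
                                                              samp_err_var=0.006)
```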
TABLE 1: Validity Coefficients for All Samples and for Each Categorical Moderator Subgroup

Moderator rows: All samples; CPI; IPI; MMPI; Training; Performance; Subjective; Objective; Predictive; Concurrent; Screening; Analogue; Exploratory; Confirmatory; Unpublished; Published.
Columns: k,a N,b N-Wtd. r,c Obs. Var. (r),d Samp. Error Var.,e % Var.,f Lower CI (95%),g Upper CI (95%),h Corr. N-Wtd. r,i z-Score Diff.j
[Table body omitted.]
Note. CPI = California Psychological Inventory; IPI = Inwald Personality Inventory; MMPI = Minnesota Multiphasic Personality Inventory.
a. Number of studies providing data to the given aggregation.
b. Number of individual participants contributing data to the given aggregation.
c. Observed sample-weighted mean correlation.
d. Variance in the observed correlations.
e. Sampling error variance.
f. Proportion of variance in the observed correlations due to sampling error.
g. Lower limit of the 95% confidence interval (CI) around the observed sample-weighted mean correlation.
h. Upper limit of the 95% confidence interval around the observed sample-weighted mean correlation.
i. Corrected sample-weighted mean correlation (measurement error, range restriction, discontinuity).
j. Significance test (z score) of the difference between the uncorrected validity coefficient for the variable in this row and the row directly above it.
k. See Table 2 for significance tests comparing the MMPI, CPI, and IPI.
*p < .05, two-tailed.
TABLE 2: Fisher's z Score Values for Comparisons of the Minnesota Multiphasic Personality Inventory (MMPI), California Psychological Inventory (CPI), and Inwald Personality Inventory (IPI)

Instrument     CPI      MMPI
MMPI          1.77
IPI           2.34*      .44

Note. A positive z score indicates that the column variable had a higher validity coefficient than the row variable. A negative z score indicates that the row variable had a higher validity coefficient than the column variable. Validity coefficients are provided in Table 1.
*p < .05.
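The pairwise instrument comparisons rest on Fisher's r-to-z transformation; assuming the standard test for two independent correlations, the computation looks like this (the sample values are illustrative, not taken from Table 1):

```python
import math

def fisher_z_compare(r1: float, n1: int, r2: float, n2: int) -> float:
    """z test for the difference between two independent correlations:
    z = (z1 - z2) / sqrt(1/(n1 - 3) + 1/(n2 - 3))."""
    z1, z2 = math.atanh(r1), math.atanh(r2)
    se = math.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))
    return (z1 - z2) / se

# |z| > 1.96 corresponds to p < .05, two-tailed.
z = fisher_z_compare(0.20, 1500, 0.12, 1500)
```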
Results for the categorical moderator subgroup analyses are provided in Table 1. Table 2 contains z-score values for comparisons
between the mean correlations of the CPI, IPI, and MMPI reported in
Table 1. The lower bound values of the 95% confidence intervals were
greater than 0 for the uncorrected validity coefficients for all of the
subgroups, indicating statistically significant relations between personality measures and performance for each subgroup. Sampling
error accounted for at least 75% of the variance of the uncorrected correlations for only 5 of the 15 groupings, suggesting that other variables may be moderating these effects.
Moderator analyses indicated two statistically significant differences. First, correlations were larger for studies using concurrent as
opposed to predictive designs (see Table 1). Second, prediction was
strongest for the CPI and lower for both the IPI and MMPI (see Table
2). The difference in the mean correlations of the CPI and IPI was
large enough to reach statistical significance, whereas the difference
between the CPI and MMPI was nearly large enough to achieve statistical significance (z = 1.77, p < .10).
TABLE 3: Sample-Weighted Correlations Between Continuous Moderator Variables and Observed Validity Coefficients

Moderator                 k       r
Average age              38    .145
Years of education a     12    .283
Measurement interval b   49    .260
Year of study            75    .099
Study quality            80    .173
CONTINUOUS MODERATORS
Table 3 contains the sample-weighted correlations between continuous moderator variables and the observed overall validity coefficients. None of the continuous moderator correlations was large
enough to reach statistical significance.
DISCUSSION
 .0 : 02344
 .0 : 555677778999
 .1 : 0000012344444
 .1 : 555566666677777888
 .2 : 122224444
 .2 : 556677778889999
 .3 : 0000013334
 .3 : 566667778899
 .4 : 00112334
 .4 : 55559
 .5 : 111122
 .5 : 55667
 .6 : 0001
 .6 : 6
 .7 : 02
 .7 : 7
 .8 : 3
 .8 :
 .9 :
 .9 :
1.0 : 0

Note. Diagram includes 128 coefficients from 41 studies (published and unpublished). Coefficients are multiple correlations (R) from regression analyses and phi coefficients calculated from discriminant function analysis classification tables. A list of these studies is available from the first author (JGV).
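A split-stem display of this kind can be rebuilt from a list of coefficients; the helper below is our own sketch (leaf unit .01, two lines per stem):

```python
from collections import defaultdict

def stem_and_leaf(values, leaf_unit=0.01):
    """Split-stem stem-and-leaf display for coefficients in [0, 1]:
    0.57 contributes leaf 7 to the upper half of stem .5."""
    rows = defaultdict(list)
    for v in values:
        hundredths = int(round(v / leaf_unit))
        stem, leaf = divmod(hundredths, 10)
        rows[(stem, 0 if leaf < 5 else 1)].append(leaf)
    lines = []
    for (stem, half) in sorted(rows):
        leaves = "".join(str(d) for d in sorted(rows[(stem, half)]))
        lines.append(f"{stem / 10:.1f} : {leaves}")
    return "\n".join(lines)

print(stem_and_leaf([0.12, 0.15, 0.17, 0.23, 0.57]))
```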
REFERENCES
References marked with an asterisk indicate studies included in the meta-analysis.
*Anson, R. A., Mann, J. D., & Sherman, D. (1986). Niederhoffer's cynicism scale: Reliability
and beyond. Journal of Criminal Justice, 14, 295-305.
Ash, P., Slora, K. B., & Britton, C. F. (1990). Police agency officer selection practices. Journal of
Police Science and Administration, 17, 258-269.
*Azen, S. P., Snibbe, H. M., & Montgomery, H. R. (1973). A longitudinal predictive study of success and performance of law enforcement officers. Journal of Applied Psychology, 57, 190-192.
*Azen, S. P., Snibbe, H. M., Montgomery, H. R., Fabricatore, J., & Earle, H. H. (1974). Predictors of resignation and performance of law enforcement officers. American Journal of Community Psychology, 2, 79-86.
*Baehr, M. E., Furcon, J. E., & Froemel, E. C. (1968). Psychological assessment of patrolmen
qualifications in relation to field performance: The identification of predictors for overall
performance of patrolmen and the relation between predictors and specific patterns of
exceptional and marginal performance. Washington, DC: Government Printing Office.
*Band, S. R., & Manuele, C. A. (1987). Stress and police officer performance: An examination of
effective coping behavior. Journal of Police and Criminal Psychology, 3, 30-42.
Barrick, M. R., & Mount, M. K. (1991). The Big Five personality dimensions and job performance: A meta-analysis. Personnel Psychology, 44, 1-26.
*Bartol, C. R. (1982). Psychological characteristics of small-town police officers. Journal of
Police Science and Administration, 10, 58-63.
*Bartol, C. R. (1991). Predictive validation of the MMPI for small-town police officers who fail.
Professional Psychology: Research and Practice, 22, 127-132.
*Bartol, C. R., Bergen, G. T., Volckens, J. S., & Knoras, K. M. (1992). Women in small-town policing: Job performance and stress. Criminal Justice and Behavior, 19, 240-259.
*Benner, A. W. (1991). The changing cop: A longitudinal study of psychological testing within
law enforcement. Unpublished doctoral dissertation, Saybrook Institute, San Francisco.
672
(Ed.), Police selection and evaluation: Issues and techniques (pp. 179-195). New York:
Praeger.
*Hess, L. R. (1972). Police entry tests and their predictability of score in police academy and
subsequent job performance. Unpublished doctoral dissertation, Marquette University, Milwaukee, WI.
*Hiatt, D., & Hargrave, G. E. (1988). MMPI profiles of problem peace officers. Journal of Personality Assessment, 52, 722-731.
*Hiatt, D., & Hargrave, G. E. (1988). Predicting job performance problems with psychological
screening. Journal of Police Science and Administration, 16, 122-125.
Hilson Research, Inc. (2000-2001). Testing/assessment services for public safety & security
[Brochure]. Kew Gardens, NY: Author.
*Hogan, R. (1971). Personality characteristics of highly rated policemen. Personnel Psychology,
24, 679-686.
*Hogan, R., & Hogan, J. (1995). Sheriff deputies. Hogan Personality Inventory manual. Tulsa, OK: Hogan Assessment Systems.
*Hogan, R., & Hogan, J. (1995). Validity of the Hogan Personality Inventory for selecting police officers in (anonymous). Tulsa, OK: Hogan Assessment Systems.
*Hogan, R., & Hogan, J. (1995). Validity of the Hogan Personality Inventory for selecting police officers in an Ohio municipality. Tulsa, OK: Hogan Assessment Systems.
*Hooke, J. F., & Krauss, H. H. (1971). Personality characteristics of successful sergeant applicants. Journal of Law, Criminology, and Police Science, 62, 104-106.
Hunter, J. E., & Schmidt, F. L. (1990). Methods of meta-analysis: Correcting error and bias in
research findings. Newbury Park, CA: Sage.
Hunter, J. E., & Schmidt, F. L. (1994). Correcting for sources of artificial variation across studies.
In H. Cooper & L. V. Hedges (Eds.), The handbook of research synthesis (pp. 323-336). New
York: Russell Sage.
*Hwang, G. S. (1988). Validity of the California Psychological Inventory for police selection.
Unpublished master's thesis, North Texas State University, Denton.
*Inwald, R. E., & Brockwell, A. L. (1991). Predicting the performance of government security
personnel with the IPI and MMPI. Journal of Personality Assessment, 56, 522-535.
*Inwald, R. E., Flanagan, C. L., & Kaufman, J. C. (1991, August). Officer supervisory ratings
classifications. Paper presented at the annual convention of the American Psychological
Association, San Francisco.
*Inwald, R. E., Kaufman, J. C., & Solomon, R. (1991, August). IPI and HPP/SQ predictions of
peer ratings and class standings. Paper presented at the annual convention of the American
Psychological Association, San Francisco.
Inwald, R. E., Knatz, H., & Shusman, E. (1982). Inwald Personality Inventory manual. Kew Gardens, NY: Hilson Research.
*Inwald, R. E., & Patterson, T. (1990). Use of the IPI and HPP/SQ for predicting trainee performance in a government law enforcement agency. Kew Gardens, NY: Hilson Research.
*Inwald, R. E., & Sakales, S. R. (1982, August). Role of two personality screening measures to
identify on-the-job behavior problems of law enforcement officer recruits. Paper presented at
the annual convention of the American Psychological Association, Washington, DC.
*Inwald, R. E., & Shusman, E. J. (1984). The IPI and MMPI as predictors of academy performance for police recruits. Journal of Police Science and Administration, 12, 1-11.
*Inwald, R. E., & Shusman, E. J. (1984). Personality and performance sex differences of law
enforcement officer recruits. Journal of Police Science and Administration, 12, 339-347.
674
*Sanchione, C. D., Cuttler, M. J., Muchinsky, P. M., & Nelson-Gray, R. O. (1998). Prediction of
dysfunctional job behaviors among law enforcement officers. Journal of Applied Psychology, 83, 904-912.
*Saxe, S. J., & Reiser, M. (1976). A comparison of three police applicant groups using the
MMPI. Journal of Police Science and Administration, 4, 419-425.
Schmitt, N., Gooding, R. Z., Noe, R. A., & Kirsch, M. (1984). Meta-analyses of validity studies
published between 1964 and 1982 and the investigation of study characteristics. Personnel
Psychology, 37, 407-422.
*Schoenfeld, L. S., & Kobos, J. C. (1980). Screening police applicants: A study of reliability
with the MMPI. Psychological Reports, 47, 419-425.
*Schuerger, J. M., Kochevar, K. F., & Reinwald, J. E. (1982). Male and female corrections officers: Personality and rated performance. Psychological Reports, 51, 223-228.
Scogin, F., Schumacher, J. E., Gardner, J., & Chaplin, W. (1995). Predictive validity of psychological testing in law enforcement settings. Professional Psychology: Research and Practice, 26, 68-71.
*Sendo, J. A. (1972). A study of the potential use of the Mann Attitude Inventory in the selection
of police recruits. Unpublished doctoral dissertation, Michigan State University, East
Lansing.
*Serko, B. A. (1981). Police selection: A predictive study. Unpublished doctoral dissertation,
Florida School of Professional Psychology, Tampa.
*Shaw, J. H. (1986). Effectiveness of the MMPI in differentiating ideal from undesirable police
officer applicants. In J. T. Reese & H. A. Goldstein (Eds.), Psychological services for law
enforcement (pp. 91-95). Washington, DC: Government Printing Office.
*Shusman, E. J., Inwald, R. E., & Landa, B. (1984). Correction officer job performance as predicted by the IPI and MMPI: A validation and cross-validation study. Criminal Justice and
Behavior, 11, 309-329.
*Sterrett, M. R. (1984). The utility of the Bipolar Psychological Inventory for predicting tenure of
law enforcement officers. Unpublished doctoral dissertation, Claremont Graduate University, Claremont, CA.
*Super, J. T. (1995). Psychological characteristics of successful SWAT/tactical response team
personnel. Journal of Police and Criminal Psychology, 10, 60-63.
Suydam, M. N. (1968). An instrument for evaluating experimental educational research reports.
The Journal of Educational Research, 61(3), 200-203.
*Sweda, M. G. (1988). The Iowa Law Enforcement Personnel Study: Prediction of law enforcement job performance from biographical and personality variables. Unpublished doctoral
dissertation, University of Iowa, Iowa City.
*Swope, M. R. (1989). Validating state police trooper career performance with the Sixteen Personality Factor questionnaire. Unpublished doctoral dissertation, Wayne State University,
Detroit, MI.
*Talley, J. E., & Hinz, L. D. (1990). Performance prediction of public safety and law enforcement
personnel: A study in race and gender differences and MMPI subscales. Springfield, IL:
Charles C Thomas.
Tett, R. P., Jackson, D. N., & Rothstein, M. (1991). Personality measures as predictors of job performance: A meta-analytic review. Personnel Psychology, 44, 703-742.
Tett, R. P., Jackson, D. N., Rothstein, M., & Reddon, J. R. (1994). Meta-analysis of personality-job performance relations: A reply to Ones, Mount, Barrick, and Hunter (1994). Personnel
Psychology, 47, 157-172.
Tett, R. P., Jackson, D. N., Rothstein, M., & Reddon, J. R. (1999). Meta-analysis of bidirectional
relations in personality-job performance research. Human Performance, 12, 1-29.