Publisher: Routledge
To cite this article: Robert F. Bornstein (2007): Toward a Process-Based Framework for Classifying Personality Tests: Comment on Meyer and Kurtz (2006), Journal of Personality Assessment, 89:2, 202–207
To link to this article: http://dx.doi.org/10.1080/00223890701518776
Meyer and Kurtz (2006) argued that the longstanding psychological test labels objective and
projective have outlived their usefulness, and invited further work focusing on alternative
terms for these measures. This Comment describes a framework for classifying personality
tests based on the psychological processes that occur as people respond to test stimuli. Because
an attribution process is involved in responding to both types of measures, those instruments
formerly called objective tests are labeled self-attribution tests, and those formerly classified as projective tests are labeled stimulus-attribution tests. The possibility of extending
the process-based framework beyond personality, to psychological tests in general, is also
discussed. Clinical and empirical implications of a process-based framework are considered.
Meyer and Kurtz (2006) argued persuasively that the longstanding psychological test labels objective and projective have outlived their usefulness, for two reasons. First,
these labels are misleading. As Meyer and Kurtz noted, objective tests hardly can be considered objective measures of
the psychological constructs they purport to assess; on the
contrary, scores on these tests are affected by myriad threats
to external validity, including respondent bias, subtle context/setting effects, and scorer error (Allard, Butler, Faust, &
Shea, 1995; Masling, 2002). Moreover, evidence supporting
the role of projection in shaping responses to projective
tests is inconclusive at best (Exner, 1989; Weiner, 2003);
extant data suggest only that a projection-like dynamic may
influence certain types of responses to ambiguous test stimuli
under certain limited circumstances (Bornstein, 2007).
The terms objective and projective are not only scientifically inaccurate, but problematic from a professional
standpoint as well. As Meyer and Kurtz (2006) pointed out,
the term objective carries with it an array of unwarranted
positive connotations, seeming to imply objectivity in prediction when in fact the term refers only to the mechanical
nature of test scoring. Conversely, because of its association
with Freudian theory and a plethora of flawed Rorschach
writings that have appeared over the decades, the term projective has acquired such a negative halo that even selective
and biased critiques of the validity of projective tests are
readily accepted by members of the professional community
(see, e.g., Wood, Nezworski, & Stejskal, 1996).
1 I use the term attribution in its broadest form here, not only to denote the
causal inferences that people draw regarding various internal experiences
and external events (Buehner & McGregor, 2006), but also to include the
neurocognitive mechanisms through which people automatically attribute
meaning to stimuli whose purpose and identity are unclear (see Kensinger
& Schacter, 2006).
Self-Attribution Tests
Instruments that traditionally have been identified as objective tests (e.g., the NEO Personality Inventory; Costa &
McCrae, 1985) typically take the form of questionnaires
wherein people are asked to (1) acknowledge whether or
not each of a series of trait-descriptive statements is true of
them; or (2) rate the degree to which these statements describe
them accurately. As McClelland, Koestner, and Weinberger
(1989) pointed out, such measures assess self-attributed
traits, motives, emotions, and need states: characteristics
that a person acknowledges as being representative of his or
her day-to-day functioning and experience. Thus, these measures reasonably may be described as self-attribution tests.
In responding to self-attribution test items, people typically turn their attention inward to determine if a given test
statement captures some aspect of their feelings, thoughts,
motives, or behaviors. Given the dynamics of memory and
memory distortion, coupled with people's reliance on various
judgment heuristics to make decisions regarding self-relevant
events and experiences (Loftus & Davis, 2006; Schwarz
et al., 1991), however, self-attribution test scores will not necessarily yield accurate information regarding trait-related behaviors and experiences. Moreover, given recent research on
content nonresponsiveness (e.g., random or markedly acquiescent responding), it is clear that most (but not all) testees
genuinely attempt to engage in and respond to these test
items via an introspective process (see Gallen & Berry, 1996;
Graham, 2000).
Self-attribution tests may be further divided into two categories: those wherein the person is asked to judge the applicability of a test item describing a longstanding pattern,
and those wherein they are asked to judge the applicability of a test item describing behavior or experiencing in the
here-and-now. For the most part trait-focused measures (e.g.,
the trait version of Spielberger's [1989] State-Trait Anxiety Inventory [STAI]) involve retrospection, whereas state-focused measures (e.g., the state version of the STAI) involve introspection aimed at elucidating current, ongoing
experience.2
Although responses to self-attribution tests typically begin
with introspection (and sometimes retrospection), another,
very different process, deliberate self-presentation, often
follows. Consider how a person might respond to the statement, "I often have difficulty controlling my anger." Even if
one can recall numerous instances of angry outbursts, one still
might choose to respond "No" to that item to present oneself in
a positive light. (Conversely, one might choose to respond "Yes"
Stimulus-Attribution Tests
DO THE ADVANTAGES OF A PROCESS-BASED FRAMEWORK OUTWEIGH ITS DISADVANTAGES?
In discussing limitations in the current terminology used
to classify personality tests, Meyer and Kurtz (2006) raised
the possibility that, because any framework for grouping
psychological tests into categories is likely to be less than
perfect, perhaps assessment psychologists should simply
identify each test by name and forego any attempt to create an
overarching classification scheme. Although identifying tests
by name rather than by category has the advantage of eliminating ambiguity that might arise when tests are grouped and labeled, a process-based framework for classifying psychological tests nonetheless offers several offsetting advantages.
[Table: test categories (Stimulus-Attribution, Performance-Based, Constructive, Observational, Informant-Report), with rows for key characteristics and representative tests]
A process-based framework helps link personality assessment to research in other areas of psychological science (e.g., cognitive,
social). Only when assessment research is embedded in ideas
and findings from mainstream psychology can we evaluate
rigorously the external validity of our hypotheses and models by scrutinizing the fit of these hypotheses and models
with those of neighboring subfields. Psychologists in other
areas have often utilized concepts from assessment research
(e.g., internal reliability, discriminant validity) to improve
the psychometric properties of their tests and measures; the
process-based framework can help assessment psychologists
use findings from other subfields (e.g., the dynamics of implicit and explicit memory, the dispositional and contextual
variables that influence causal attributions) to understand
more completely the psychological processes that underlie
responses to different assessment tools.
Psychologists Are Going to Group and Label Tests Anyway
People classify things into categories and assign labels to
these categories; it is one of many strategies we use to process, encode, and store information more efficiently. Regardless of whether these things consist of dogs, foods, works
of art, or modes of transportation, studies suggest that once
people have developed some degree of familiarity with the
members of a group they intuitively divide members of that
group into subgroups based on properties of the individual
group members (Corter & Gluck, 1992; Rosch, 1975).
Thus, if assessment psychologists did not derive overarching frameworks and terminologies for classifying psychological tests, those who use, study, or critique these tests
would do it anyway. In this respect it is better that an organizing framework be made explicit (and the logic underlying
the framework spelled out in detail) than that multiple contrasting frameworks and labels emerge in isolation among
different segments of the psychological community.
CONCLUSION
The process-based framework readily suggests possibilities
for studies focusing on convergences and divergences between scores from tests in different categories (Bornstein,
2002); this framework also can form a conceptual foundation for intracategory test score comparisons (e.g., contrasts
of results from state versus trait measures of a motive or affect
state, comparisons of results produced by inkblot interpretations versus thematic storytelling). Such studies will not
only help clinicians and researchers understand the factors
that lead to divergences between findings obtained within
a particular test category, but also may create a context
for research exploring the range of situational and dispositional factors that combine to affect intracategory test score
consistency.
Finally, it is worth noting that in certain respects the
process-based framework for classifying psychological tests
is analogous to the grouping of psychological syndromes into
overarching categories (e.g., mood disorders, anxiety disorders, personality disorders) in the DSM-IV. Although both
organizing schemes are imperfect, and one could argue that
certain tests (or syndromes) should be categorized or labeled
differently, in both cases the organizing framework facilitates clinical work and empirical study. Like the grouping of
syndromes in the DSM series, the proposed system for classifying psychological tests will evolve over time, improving as
new information emerges regarding various assessment tools.
ACKNOWLEDGMENTS
I thank Violeta Bianucci, Daniel Freeman, Mark Hilsenroth,
Michelle Sonnenberg, and all those who participated in the
JPA review process for their helpful comments on earlier
versions of this article.
REFERENCES
Allard, G., Butler, J., Faust, D., & Shea, M. T. (1995). Errors in hand-scoring objective personality tests. Professional Psychology: Research and Practice, 26, 304–308.
Bender, L. (1938). A visual-motor gestalt test and its clinical use. New York:
American Orthopsychiatric Association.
Blatt, S. J., Chevron, E. S., Quinlan, D. M., Schaffer, C. E., & Wein, S. J.
(1988). The assessment of qualitative and structural dimensions of object
representations (rev. ed.). Unpublished research manual, Yale University,
New Haven, CT.
Bornstein, R. F. (2002). A process dissociation approach to objective–projective test score interrelationships. Journal of Personality Assessment, 78, 47–68.
Schwarz, N., Bless, H., Strack, F., Klumpp, G., Rittenauer-Schatka, H., &
Simons, A. (1991). Ease of retrieval as information: Another look at the
availability heuristic. Journal of Personality and Social Psychology, 61,
195–202.
Spielberger, C. D. (1989). State-Trait Anxiety Inventory: A comprehensive
bibliography. Palo Alto, CA: Consulting Psychologists Press.
Weiner, I. B. (2003). Principles of Rorschach interpretation (2nd ed.).
Mahwah, NJ: Lawrence Erlbaum Associates.
Wood, J. M., Nezworski, M. T., & Stejskal, W. J. (1996). The Comprehensive
System for the Rorschach: A critical evaluation. Psychological Science,
7, 3–10.
Robert F. Bornstein
Derner Institute of Advanced Psychological Studies
212 Blodgett Hall
Adelphi University
Garden City, NY 11530
Email: bornstein@adelphi.edu
Received January 23, 2007
Revised June 15, 2007