
REVIEW ARTICLE

The Art of Assessment in Psychology:
Ethics, Expertise, and Validity

James A. Cates
Indiana University—Purdue University at Fort Wayne

Psychological assessment is a hybrid, both art and science. The empirical
foundations of testing are indispensable in providing reliable and valid
data. At the level of the integrated assessment, however, science gives
way to art. Standards of reliability and validity account for the individual
instrument; they do not account for the integration of data into a compre-
hensive assessment. This article examines the current climate of psycho-
logical assessment, selectively reviewing the literature of the past decade.
Ethics, expertise, and validity are the components under discussion. Psy-
chologists can and do take precautions to ensure that the "art" of their
work holds as much merit as the science. © 1999 John Wiley & Sons,
Inc. J Clin Psychol 55:631-641, 1999.

The art of psychological assessment is an infrequent description in the current era. (For
the purposes of this article, "assessment" refers to the administration of multiple psycho-
logical tests, instruments, or techniques, as well as behavioral observation, to obtain a
pool of data.) Contributing factors include the increasingly cost-conscious control of
managed care (Marlowe, Wetzler, & Gibbings, 1992; Moreland, Fowler, & Honaker,
1994), which encourages a results-oriented, empirical approach; the increasing use of
highly reliable computer programs in testing (Butcher, 1994; Schlosser, 1991; Tallent,
1987); the increasing scrutiny of testing in the courtroom (Matarazzo, 1990; Skidmore,
1992); and the increasing sophistication of the field itself (Ritz, 1992; Watkins, 1992;
Wetzler, 1989). Both professional texts and journal articles expound the science of assess-
ment. The role of inference, intuition, and creativity is a peripheral observation, at best.
Still, the task of interpreting data in a reliable and valid manner and assembling the data
and their interpretation into a useable format for the client remains an art.
In giving expert testimony, I have sometimes mused on my response if an astute
attorney asked, "Doctor, would you please describe to the court the reliability, validity,

Correspondence concerning this article should be addressed to James A. Cates, Ph.D., P.O. Box 5391, Fort
Wayne, IN 46895-5391.

JOURNAL OF CLINICAL PSYCHOLOGY, Vol. 55(5), 631-641 (1999)


© 1999 John Wiley & Sons, Inc. CCC 0021-9762/99/050631-11

and error of measurement of an integrated battery of psychological tests?" The logical
response explains that scientific tools for these measures do not exist at the level of an
integrated assessment. As an example, the Minnesota Multiphasic Personality Inventory-2
(MMPI-2; Butcher & Williams, 1992) contains a basic clinical scale and several ancillary
measures of depression. Exner's (1993) Rorschach Scoring System includes a Depres-
sion Index. Does the integration of these data produce a highly reliable metaconstruct of
depression? Or are they sufficiently disparate constructs so that, despite the common
label, an assessment should avoid considering them corroborative, or even complemen-
tary, evidence? Only recently has the complementarity of these two instruments—alone
among many—been identified as an urgently needed area of research (Ganellen, 1996a,
1996b). In general, the care provided in the development of psychological tests overlooks
the use of these techniques in combination in a battery.
If empirical measures of reliability and validity are unavailable at the level of the
integrated assessment battery, what paradigm(s) are best used? In practice, as this article
attempts to demonstrate, psychologists use professional judgment and clinical skills to
ensure the accuracy of an assessment. Although many factors contribute to this process,
this article considers three of the principles fundamental to psychological assessment:
ethics, expertise, and validity. The increasing rigor in psychological testing, observable
through greater emphasis on empirically derived validity, representative sampling in nor-
mative groups, and appropriate application of testing, may at times obfuscate the need for
clinical judgment and skills, here described as the art of assessment. This literature review
encompasses the past decade, with an emphasis on articles referencing or relating to
assessment. The time period covers the surge in managed care (and corresponding changes
in the role of the psychologist), dramatic increases in computer use and availability in the
field, and a rapidly expanding series of ethical codes.

ETHICS
The American Psychological Association has for over a decade moved toward increas-
ingly specific standards of conduct. The Standards for Educational and Psychological
Measurement (American Educational Research Association, American Psychological Asso-
ciation, and National Council on Measurement in Education, 1985) were closely fol-
lowed by the Guidelines for Computer-Based Tests and Interpretations (American
Psychological Association, 1986). By 1991, issues in forensic psychology had become
sufficiently complex so that a special committee published the Specialty Guidelines for
Forensic Psychologists (Committee on Ethical Guidelines for Forensic Psychologists,
1991). In 1992, the American Psychological Association released the revised Ethical
Principles of Psychologists and Code of Conduct. Following this expansion of the code,
the American Psychological Association published Record-Keeping Guidelines (1993)
and Guidelines for Child Custody Evaluations in Divorce Proceedings (1994).
The increasing specificity of these codes serves to comprehensively address the eth-
ical dilemmas that routinely confront psychologists. A simultaneous disadvantage to such
specificity is the potential for psychologists to overlook ethical dilemmas that are not
directly addressed. Aspirational goals, because of their idealistic and global format, may
be ignored or deemed inappropriate to the current situation.
Available information routinely addresses issues of test ethics. Adequacy, adminis-
tration, bias, and security are all concerns that authors (Keith-Spiegel & Koocher, 1985;
Koocher, 1993) and ethical codes and principles delineate. However, a more global eth-
ical concern, which moves into the area of assessment, is competence. As Weiner (1989)
noted, "To sustain ethical practice of psychodiagnosis, psychologists need to combine

good judgment with competence sustained by constant attention to newly emerging infor-
mation concerning what tests can and cannot do" (p. 830). Demonstrated competence or
expertise using a single test does not translate to competence in integrating data from
several assessment instruments, and psychologists are ethically compelled to practice
within the limits of their knowledge. Each evaluator, then, must answer the
question at a personal level: What principle(s) guide the selection of instruments for an
assessment? Further, by the combination of these instruments, what data can be obtained
or clarified that would otherwise be missing or vague?
Acting in the best interest of the client, indeed at times acting as an advocate for the
client, involves careful thought as to the potential long-term consequences of the data to
be obtained. This is a point of particular concern. For example, even if a computer-
generated narrative report is not considered a permanent part of the record and is destroyed,
an answer sheet remains as raw data, and can be obtained by a non-mental health pro-
fessional under certain conditions (Skidmore, 1992). Therefore, in the context of the
assessment, the psychologist must decide how much information regarding the client
needs to be available. Is the MMPI-2 (Butcher & Williams, 1992), with its plethora of
scales and interpretive possibilities, the most appropriate instrument if the needed infor-
mation can be obtained from the Brief Symptom Inventory (Derogatis, 1993)? The Hare
Psychopathy Checklist-Revised (Hare, 1991) may provide useful information but also
highlights possible psychopathic behaviors. Would the MMPI-2 in this case serve equally
well, with less risk of a negative label for the client, should the report or raw data be
obtained by others less familiar with assessment techniques?
The question of amount of raw data obtained raises an ethical question in itself. For
psychologists working in outpatient settings, third-party payors demand increasing cost-
effectiveness via the minimal assessment necessary. The psychologist must carefully con-
sider a rationale not only to include the individual instrument in the battery, but to determine
what information that instrument uniquely and conjointly contributes to the assessment.
The obvious detriment of such an approach is the potential to omit much-needed data
because of the financial constraints imposed by a disinterested third party. On the other
hand, such limitations may serve as an impetus for the psychologist to increase the range
of assessment techniques used, and to better understand the efficacy of an individual tool.
The ethical issues surrounding an assessment have yet to be clearly codified, as the
aforementioned examples suggest. In no way does the lack of specific code expectations
exempt the psychologist from the need to consider the assessment in the broad context of
ethical behavior, advocating both in the present and potentially in the future for the best
interests of the client.

EXPERTISE
Psychological assessment itself remains a hotly debated pursuit, with little consensus in
the field. Although the majority of experts consider the assessment enterprise more com-
plex than testing (Beutler & Rosner, 1995; Matarazzo, 1990; Tallent, 1987; Zeidner &
Most, 1992), the view also prevails that testing and assessment are interchangeable (Hood
& Johnson, 1991; Sugarman, 1991) or that assessment is an ongoing activity in any
interaction with the client (Spengler, Strohmer, Dixon, & Shivy, 1995). Projective tech-
niques are enjoying a renaissance (Bellak, 1992; Watkins, 1994); projective techniques
are in decline (Goldstein & Hersen, 1990) and may even be unethical due to the deception
perpetrated on the client (Schweighofer & Coles, 1994). Despite the emphasis on more
focused and goal-oriented assessments (Wetzler, 1989), the call for a broader, psycho-
analytic perspective remains strong (Jaffe, 1990, 1992).

This difference of opinion may well reflect the healthy state of psychological assess-
ment (Masling, 1992). Surveys of practicing psychologists repeatedly acknowledge the
routine use of assessment instruments (Archer, Maruish, Imhof, & Piotrowski, 1991;
Piotrowski & Keller, 1989; Piotrowski & Lubin, 1990; Watkins, Campbell, Nieberding,
& Hallmark, 1995) and even suggest the significant amount of clinical time devoted to the
administration and interpretation of these tests (Ball, Archer, & Imhof, 1994). These
surveys indicate that a broad range of assessments (using both projective and objective
techniques) is common.
The student or novice practitioner of assessment finds numerous articles and texts
that describe the use of the psychological test. Important initial steps include the devel-
opment of the assessment question (Hood & Johnson, 1991), and outcome goals (Karoly,
1993). The practitioner is also reminded of the importance of using reliable and valid test
instruments (Beutler & Rosner, 1995; Smith & McCarthy, 1995; Zeidner & Most, 1992)
with appropriate normative standards (Nelson, 1994). (More advanced literature addresses
the potential for clinical observation from techniques designed for other purposes as
well—e.g., Kaufman, 1994.) The scientific literature carefully addresses appropriate stan-
dards for development, review, and use of the individual assessment instrument. The
combination of data from these instruments remains in the realm of clinical judgment.
Theoretical orientation has a minimal influence on instrument choice; the psychol-
ogist is likely to use a prescribed series of tests learned in graduate training (Fischer,
1992; Marlowe et al., 1992; Stout, 1992; Watkins, 1991). This reliance on a handful of
instruments is met with tempered skepticism. Wetzler (1989) noted, "No matter what the
referral question, they [psychologists] administer the standard battery. Their loyalty to
the standard battery is based on 40 years of clinical experience from which there now
exists a large body of knowledge on intensive personality assessment" (p. 7). Others are
more critical of current practices: "By insisting that we confront such perennial problems
as overinterpretation, descriptor fallacy, and pseudoparallelism, our goal is the presenta-
tion of clinical data that is useful and not misleading" (Rogers, 1995, p. 295).
Problems arise because no empirical approach is available to determine the appro-
priateness of interpretations gleaned from a battery of assessment techniques. The psy-
chologist aspiring to a valid assessment battery can apply rigorous empirical standards to
the selection of individual tests. However, at the next level—combining these tests—the
psychologist must rely on personal experience or turn to another psychologist in a mentor
role for guidance. The common opinion suggests that the majority of psychologists fail to
adequately perform this task (e.g., Spengler et al., 1995).
This system of ensuring accuracy or reliability in the assessment is relatively weak.
And yet, the very reliance on a handful of techniques, which is heavily criticized, may
serve as a stabilizing force to ensure reliability. If, for example, the Wechsler scales (e.g.,
Wechsler, 1981), the MMPI-2, and the Rorschach Inkblots are repeatedly administered as
a standard battery, the psychologist develops an expectation for the normative perfor-
mance. Marked deviations from that performance may reflect important clinical issues,
similar to Exner's (1993) hypothesis that marked deviations in form quality on the Ror-
schach reflect symbolically significant distortions of reality. The assessment is held to a
level of reliability for the individual psychologist, if not at a more global level.
As noted earlier, psychologists often fail to consider the importance of theoretical
orientation in the integration and interpretation of assessment data. Psychoanalytic/
dynamic theories provide the most comprehensive perspective (Jaffe, 1990, 1992; Sug-
arman, 1991). However, psychologists with other orientations can and routinely do make
equally valid use of the information gleaned in an assessment. Problems arise as a psy-
chologist fails to report assessment results within an overarching theoretical framework.

For example, behavioral observations of the client may be based on observable, clearly
reported behaviors (e.g., "The client was early for the appointment, sat quietly, smiled
and made appropriate eye contact on greeting, and was cooperative throughout the bat-
tery of tests"). At the same time, an objective self-report inventory, with a strong empha-
sis on internally reported state, may suggest the presence of a severe depression and be
duly noted. If the behavioral observations reflected greater inference of the internal state
(e.g., "The client was early for the appointment and appeared somewhat anxious. She
was soft-spoken and seemed somewhat fatigued even before the testing began, despite
cooperation with all tasks"), the behavioral observation and computer-generated inter-
pretations might more readily match.
Critics of assessment techniques as practiced today also address the issue as if there
are two types of assessments: good assessments that accurately portray the client, the
client's issues, or both, and bad assessments that paint an inaccurate portrayal. Few cri-
tiques address the possibility that the reliability of assessments forms a distribution. It is
to be hoped that the distribution is normal, with the majority of assessments falling in an
acceptable midrange and not a positive skew, suggesting a preponderance of assessments
with questionable accuracy (a negative skew seems too hopeful!). Variables that influ-
ence the effort to determine a reliable assessment include definition of an appropriate
assessment, the changing goal of the assessment based on the task at hand, and opera-
tional definitions of accuracy. As Masling (1992) noted:

We are all agreed, Psy.D. and Ph.D., practicing psychologist and academician, that the ability
to use psychological assessment methods is a unique and valuable skill in clinical psychology.
Beyond that there is considerable controversy. It is something psychologists are uniquely
qualified to do, but what it is intended to do and how well we do it remain unspecified. (p. 53)

VALIDITY
The validity of a psychological test refers to its usefulness in a number of domains (for an
excellent review, see Cichetti, 1994). Does the content of the test adequately sample the
state or trait to be measured? Does the test appear to the client to measure what it purports
to measure; that is, does it exhibit face validity? Can the proposed factors or variables be
demonstrated; that is, does the test exhibit construct validity? Compared to a similar
measure or behavioral sample, does the correlation indicate a robust construct (does it
exhibit criterion-oriented validity)? Compared to a later measure or behavioral sample,
does the correlation indicate predictive power (does it exhibit predictive validity)? Does
the construct differ from unrelated constructs (discriminant validity), yet correlate with
related constructs (convergent validity)? These are the long-tested trials of validity applied
to the psychological test (Cichetti, 1994; Foster & Cone, 1995; Haynes, Richard, & Kubany,
1995; Messick, 1994).
In recent years, validity has grown beyond the individual test instrument. Although
not directly addressing the validity of a psychological assessment or battery of tests or
techniques, the arguments put forth in favor of a broader interpretation do hold promise
for this neglected issue.
Foster and Cone (1995) referred indirectly to the broader consequences of administering
and interpreting a battery of tests: "Consequential validity goes beyond whether the mea-
sure fulfills its intended purposes and asks the larger question of whether it is consistent
with other social values" (p. 248). Thus, the validity of a measure extends beyond its
power to sample a unitary construct in an appropriately representative manner; the mea-
sure must also conform to expected social values that respect the rights of the client.

Messick (1994, 1995) approached the same concern from a different angle, arguing for
unified validity. The traditional evidence of validity is supplemented by consideration of
the interpretive or applied outcome of the test (Messick, 1994, 1995).
These views reflect the expanding domain of validity, particularly toward the appro-
priate use of an instrument. Such views have less to do with the instrument itself than with
the application of the data obtained from the instrument. Application of those data within a
battery of tests then refers to their use in the context of a larger goal or purpose.
Cross-cultural psychological testing expands validity concerns in still another direc-
tion directly applicable to the assessment issue (e.g., Kehoe & Tenopyr, 1994). Tests
using norms developed with Caucasian Americans may be inappropriately used with
African Americans due to a lack of understanding of African American culture, insuffi-
cient rapport, and subtle differences in item interpretations between African Americans
and Caucasians (Bryan, 1989). Too often, adaptation of tests for persons of another cul-
ture and language refers to translation of test items, with little concern for the integrity of
meaning in the translation, context of items, and appropriate standardization (Geisinger,
1994). Further, cross-cultural assessment must consider the continuum of acculturation,
from extremely traditional, with few ties to the dominant culture, to largely acculturated,
with few ties to traditions (Dana, 1995, 1996).
Computer-generated psychological tests have created yet another area of concern,
which again relates to the validity of the assessment battery. Computers may lend tests
administered or interpreted through them greater authority than they actually possess,
due to the sophisticated scoring and reports generated (Butcher, 1994). Yet much of the
computer-generated data relies on clinical interpretation and inference (Tallent, 1987).
Critics point to the need for criterion validity for computer measures and to the need to
establish a level of similarity with expert clinical opinion for interpretive programs (Butcher,
1987; Honaker & Fowler, 1990). Indeed, Butcher (1987) has noted, "The development of
valid psychological measures has lagged behind the rapid innovations in computer tech-
nology, and research on combining various psychological measures is rudimentary at
best" (p. 11).
Computers, with the narrative descriptions so frequently provided, give the impres-
sion of a tremendous increase in the information available from an individual instrument.
The temptation to administer an assessment of relatively brief duration that yields a rich
set of data (or perhaps more appropriately, a well-constructed narrative report) can be
strong. Whether the information obtained is most appropriate in the context of the assess-
ment goals and whether the information is over- or underutilized is another matter entirely.
There is no empirical measure of the validity of a battery of tests. Indeed, as the
aforementioned discussions suggest, there is no uniform definition for the validity of a
battery of tests. Considering the context of the assessment and, subsequently, the context
of the tests to be used in that assessment serves as a means of maintaining appropriate
consequential (Foster & Cone, 1995) and unified (Messick, 1994) validity, a first step. At
a fundamental level, psychologists practice this routinely. Few would administer a bat-
tery of personality tests to a client referred to determine level of intelligence and aca-
demic skills, for example. At a more complex level, however, psychologists are asked to
make determinations of much finer discrimination. For example, what is the age of the
client referred for intelligence testing? Has the client personally requested testing, or has
a third party referred the assessment? What is actually needed—a breakdown of cogni-
tive strengths and weaknesses, or a more global measure of intelligence? Likewise, what
level of discrimination is needed in determining academic skills? Emotional and person-
ality functioning? These questions serve to place the assessment in context, lending valid-
ity to the subsequently obtained data and interpretation.

SUMMARIZING ETHICS, EXPERTISE, AND VALIDITY


Ethics refers to the principles that guide the decision-making process as to the tests that
comprise the battery and the integration and reporting of the data. Expertise refers to the
knowledge necessary to administer a battery of psychological techniques or conduct an
assessment in a reliable manner. And validity of the assessment refers to the goals that
guide the decision-making process as to the tests that comprise the battery and the
integration and reporting of the data.

PRINCIPLES OF PSYCHOLOGICAL ASSESSMENT


The title of this article suggests that assessment is an art, as the need to rely on inference,
intuition, and expertise demonstrates. The following caveats and recommendations apply
to the needed ethical practices and skills and expertise to perform a valid psychological
assessment.

1. Art rests on science. Choosing reliable, valid, and appropriate assessment tools is
fundamental to adequate assessment. Even in the process of collecting nonstan-
dardized data, particularly behavioral observations, the psychologist applies the
principles of science. This includes careful consideration of the validity of obser-
vations, testing of hypotheses, and clarification of ambiguous data.
2. An assessment is a snapshot, not a film. No matter how exhaustive the battery of
assessment techniques, no matter how many corroborative sources, and no matter
how lengthy the assessment procedure, the assessment describes a moment frozen
in time, described from the viewpoint of the psychologist. Although results may
be indicative of long-term functioning, such results are nevertheless tentative and
should be treated with this awareness.
3. The appropriate assessment is tailored to the needs of the client, the referral
source, or both. A clothing store that offered the customer an expensive one-size-
fits-all garment would not long be in business. The temptation to remain with the
familiar is an easy one to rationalize but may serve the client poorly. Maintaining
familiarity with a variety of assessment techniques allows greater freedom in
tailoring an assessment to the requisite goals.
4. The psychologist should be responsible to the client, not the computer. In an age
of computer scoring, replete with well-developed narrative descriptions, the temp-
tation to take these interpretations at face value can be overwhelming. Psycholo-
gists must consider the validity of the behavioral data in addition to the validity
scales of the test and must also consider the validity of the narrative statements
generated by the scoring program.
5. Information is power. Assessment information is life-impacting power. The psy-
chologist does well to remember the significance clients place on the written
word of the report. If the client is an individual, the psychologist may be approached
with fear, anger, or awe, depending on the client's interpretation of the report. If
the client is a third party, the psychologist may be asked to perform increasingly
difficult or even unrealistic feats with psychological assessment techniques (e.g.,
Did he sexually harass his coworkers? Was she really sexually abused? Should we
place him in a residential facility?). The psychologist bears a continuing respon-
sibility to educate consumers on the appropriate uses and limitations of psycho-
logical assessment techniques.

6. Assessment goes beyond description and into interpretation. The psychologist


completing the psychological assessment includes inference and clinical judg-
ment in combination with objective scores. It may be useful to know that a par-
ticular client has a Wechsler Full Scale IQ of 100, falling in the Average range. It
is even more useful to know that the psychologist believes that she was bored
with the test, and that this measurement is likely a minimum evaluation of her
intellectual abilities.
7. The psychologist is, without apology, projected into the report. The psychologist
functions as a professional. Reported views, opinions, and interpretations com-
prise the professional approach to behavior. These same views, opinions, and
interpretations comprise the framework of the report of a psychological assess-
ment. It is the ethical constraint of the psychologist performing assessments to be
aware of areas of personal strengths and weaknesses, and to guard against psy-
chologist error.
8. The accumulation of data is not an assessment. The integration of data and sub-
sequent interpretations comprise the assessment. The alphabetically arranged list
of words "and," "go," "I," "let," "then," "us," and "you" means little. Arranged as
"Let us go then, you and I . . ." they form the introductory line to one of the most
famous poems of our century (T. S. Eliot, Complete Poems and Plays, 1971). So,
too, the rote presentation of assessment data means little. Only when these data
are integrated into meaningful descriptions of a person, family, or situation does
the utility of the assessment emerge.

In beginning this article, I mused on the question of an astute attorney. The response was
a brief apologia. Perhaps a better response would state, "An assessment is an art, but an
art grounded in psychological science. The expertise of the psychologist, the care to
validate the findings, and the care to ensure the ethical treatment of both these findings
and the client combine to create a rigorous and exacting standard, albeit one that has yet
to be statistically testable."

REFERENCES
American Educational Research Association, American Psychological Association, & National
Council on Measurement in Education. (1985). Standards for educational and psychological
tests. Washington, DC: American Psychological Association.
American Psychological Association. (1986). Guidelines for computer-based tests and interpreta-
tions. Washington, DC: Author.
American Psychological Association. (1992). Ethical principles of psychologists and code of con-
duct. Washington, DC: Author.
American Psychological Association. (1993). Record-keeping guidelines. Washington, DC: Author.
American Psychological Association. (1994). Guidelines for child custody evaluations in divorce
proceedings. Washington, DC: Author.
Archer, R.P., Maruish, M., Imhof, E.A., & Piotrowski, C. (1991). Psychological test usage with
adolescent clients: 1990 survey findings. Professional Psychology: Research and Practice, 22,
247-252.
Ball, J.D., Archer, R.P., & Imhof, E.A. (1994). Time requirements of psychological testing: A
survey of practitioners. Journal of Personality Assessment, 63, 239-249.
Bellak, L. (1992). Projective techniques in the computer age. Journal of Personality Assessment,
58, 445-453.

Beutler, L.E., & Rosner, R. (1995). Introduction to psychological assessment. In L.E. Beutler & M.R.
Berren (Eds.), Integrative assessment of adult personality (pp. 1-93). New York: Guilford Press.
Bryan, P.E. (1989). Psychological assessment of Black Americans. Psychotherapy in Private Prac-
tice, 7, 141-154.
Butcher, J.N. (1987). The use of computers in psychological assessment: An overview of practices
and issues. In J.N. Butcher (Ed.), Computerized psychological assessment (pp. 3-25). New
York: Basic Books.
Butcher, J.N. (1994). Psychological assessment by computer: Potential gains and problems to avoid.
Psychiatric Annals, 24, 20-24.
Butcher, J.N., & Williams, C.L. (1992). Essentials of MMPI-2 and MMPI-A interpretation. Min-
neapolis: University of Minnesota Press.
Cichetti, D.V. (1994). Guidelines, criteria, and rules of thumb for evaluating normed and standard-
ized assessment instruments in psychology. Psychological Assessment, 6, 284-290.
Committee on Ethical Guidelines for Forensic Psychologists. (1991). Specialty guidelines for
forensic psychologists. Law and Human Behavior, 15, 655-665.
Dana, R.H. (1995). Impact of the use of standard psychological assessment on the diagnosis and
treatment of ethnic minorities. In J.F. Aponte, R.Y. Rivers, & J. Wohl (Eds.), Psychological
interventions and cultural diversity (pp. 57-73). Boston, MA: Allyn and Bacon.
Dana, R.H. (1996). Culturally competent assessment practice in the United States. Journal of Per-
sonality Assessment, 66, 472-487.
Derogatis, L.R. (1993). Brief Symptom Inventory: Administration, scoring, and procedures manual
(3rd ed.). Minneapolis, MN: National Computer Systems.
Exner, J.E., Jr. (1993). The Rorschach: A comprehensive system: Vol. 1. Basic foundations (3rd
ed.). New York: John Wiley & Sons.
Fischer, C.T. (1992). Humanizing psychological assessment. The Humanistic Psychologist, 20,
318-331.
Foster, S.L., & Cone, J.D. (1995). Validity issues in clinical assessment. Psychological Assessment,
7, 248-260.
Ganellen, R.J. (1996a). Integrating the Rorschach and MMPI-2 in personality assessment. Hillsdale, NJ: Lawrence Erlbaum Associates.
Ganellen, R.J. (1996b). Integrating the Rorschach and the MMPI-2: Adding apples and oranges? Journal of Personality Assessment, 67, 501-503.
Geisinger, K.F. (1994). Cross-cultural normative assessment: Translation and adaptation issues influencing the normative interpretation of assessment instruments. Psychological Assessment, 6, 304-312.
Goldstein, G., & Hersen, M. (1990). Historical perspectives. In G. Goldstein & M. Hersen (Eds.),
Handbook of Psychological Assessment (2nd ed.; pp. 3-17). New York: Pergamon Press.
Hare, R.D. (1991). The Hare Psychopathy Checklist-Revised. North Tonawanda, NY: Multi-Health
Systems.
Haynes, S.N., Richard, D.CS., & Kubany, E.S. (1995). Content validity in psychological assess-
ment: A functional approach to concepts and methods. Psychological Assessment, 7, 238-247.
Honaker, L.M., & Fowler, R.D. (1990). Computer-assisted psychological assessment. In G. Gold-
stein & M. Hersen (Eds.), Handbook of psychological assessment (pp. 521-545). New York:
Pergamon Press.
Hood, A.B., & Johnson, R.W. (1991). Use of assessment procedures in counseling. In Assessment
in counseling: A guide to the use of psychological assessment procedures (pp. 3-38). Alex-
andria, VA: American Counseling Association.
Jaffe, L.S. (1990). The empirical foundations of psychoanalytic approaches to psychological testing. Journal of Personality Assessment, 55, 746-755.
640 Journal of Clinical Psychology, May 1999

Jaffe, L.S. (1992). The impact of theory on psychological testing: How psychoanalytic theory
makes diagnostic testing more enjoyable and rewarding. Journal of Personality Assessment,
58, 621-630.
Karoly, P. (1993). Goal systems: An organizing framework for clinical assessment and treatment
planning. Psychological Assessment, 5, 273-280.
Kaufman, A.S. (1994). InteUigent testing with the WISC-III. New York: John Wiley & Sons.
Kehoe, J.F, & Tenopyr, M.L. (1994). Adjustment in assessment scores and their usage: A taxonomy
and evaluation of methods. Psychological Assessment, 6, 291-303.
Keith-Spiegel, P., & Koocher, G.P. (1985). Psychological assessment: Testing tribulations. In Eth-
ics in psychology: Professional standards and cases (pp. 87-114). New York: Random House.
Koocher, G.P. (1993). Ethical issues in the psychological assessment of children. In T.H. Ollendick
& M. Hersen (Eds.), Handbook of child and adolescent assessment. Boston: Allyn and Bacon.
Marlowe, D.B., Wetzler, S., & Gibbings, E.N. (1992). Graduate training in psychological assess-
ment: What Psy.D.'s and Ph.D.'s must know. The Journal of Training and Practice in Professional Psychology, 6(2), 9-18.
Masling, J.M. (1992). Assessment and the therapeutic narrative. The Journal of Training and Practice in Professional Psychology, 6(2), 53-58.
Matarazzo, J.D. (1990). Psychological assessment versus psychological testing: Validation from
Binet to the school, clinic, and courtroom. American Psychologist, 45, 999-1017.
Messick, S. (1994). Foundations of validity: Meaning and consequences in psychological assess-
ment. European Journal of Psychological Assessment, 10, 1-19.
Messick, S. (1995). Validity of psychological assessment: Validation of inferences from persons'
responses and performance as scientific inquiry into score meaning. American Psychologist,
50, 741-749.
Moreland, K.L., Fowler, R.D., & Honaker, L.M. (1994). Future directions in the use of psycholog-
ical assessment for treatment planning and outcome assessment: Predictions and recommen-
dations. In M.E. Maruish (Ed.), The use of psychological testing for treatment planning and
outcome assessment (pp. 581-602). Hillsdale, NJ: Lawrence Erlbaum Associates.
Nelson, L.D. (1994). Introduction to the special section on normative assessment. Psychological Assessment, 6, 283.
Piotrowski, C., & Keller, J.W. (1989). Use of assessment in mental health clinics and services. Psychological Reports, 64, 1298.
Piotrowski, C., & Lubin, B. (1990). Assessment practices of health psychologists: Survey of Division 38 clinicians. Professional Psychology: Research and Practice, 21, 99-106.
Ritz, G.H. (1992). New tricks from an old dog: A critical look at the psychological assessment
issue. The Journal of Training and Practice in Professional Psychology, 6(2), 67-73.
Rogers, R. (1995). Research: Current models and future directions. In Diagnostic and structured
interviewing (pp. 291-301). Odessa, FL: Psychological Assessment Resources.
Schlosser, B. (1991). The future of psychology and technology in assessment. Social Science Com-
puter Review, 9, 575-593.
Schweighofer, A., & Coles, E.M. (1994). Note on the definition and ethics of projective tests.
Perceptual and Motor Skills, 79, 51-54.
Skidmore, S.L. (1992). Assessment issues. Forensic Reports, 5, 169-177.
Smith, G.T., & McCarthy, D.M. (1995). Methodological considerations in the refinement of clin-
ical assessment instruments. Psychological Assessment, 7, 300-308.
Spengler, P.M., Strohmer, D.C., Dixon, D.N., & Shivy, V.A. (1995). A scientist-practitioner model
of psychological assessment: Implications for training, practice, and research. The Counseling
Psychologist, 23, 506-534.

Stout, C.E. (1992). Psychological assessment training in professional schools: Literature review and personal impressions. The Journal of Training and Practice in Professional Psychology, 6(1), 14-21.
Sugarman, A. (1991). Where's the beef? Putting personality back into personality assessment. Journal of Personality Assessment, 56, 130-144.
Eliot, T.S. (1971). Complete poems and plays, 1909-1950. New York: Harcourt, Brace, and World.
Tallent, N. (1987). Computer-generated reports: A look at the modern psychometric machine. Journal of Personality Assessment, 51, 95-108.
Watkins, C.E., Jr. (1991). What have surveys taught us about the teaching and practice of psychological assessment? Journal of Personality Assessment, 56, 426-437.
Watkins, C.E., Jr. (1992). Historical influences on the use of assessment methods in counseling psychology. Counseling Psychology Quarterly, 5, 177-188.
Watkins, C.E., Jr. (1994). Do projective techniques get a "bum rap" from clinical psychology training directors? Journal of Personality Assessment, 63, 387-389.
Watkins, C.E., Jr., Campbell, V.L., Nieberding, R., & Hallmark, R. (1995). Contemporary practice of psychological assessment by clinical psychologists. Professional Psychology: Research and Practice, 26, 54-60.
Wechsler, D. (1981). Wechsler Adult Intelligence Scale-Revised. San Antonio, TX: Psychological Corporation.
Weiner, I.B. (1989). On competence and ethicality in psychodiagnostic assessment. Journal of Personality Assessment, 53, 827-831.
Wetzler, S. (1989). Parameters of psychological assessment. In S. Wetzler & M.M. Katz (Eds.), Contemporary approaches to psychological assessment (pp. 3-15). New York: Brunner/Mazel.
Zeidner, M., & Most, R. (1992). An introduction to psychological testing. In M. Zeidner & R. Most (Eds.), Psychological testing: An inside view (pp. 1-47). Palo Alto, CA: Consulting Psychologists Press, Inc.
