
Assessment Study Guide

Objectives of Assessment:
-Understand the role and purpose of assessment as part of professional practice
-Professional/ethical issues related to assessment
-Reliability and validity (psychometrics) of assessment instruments
-Understand statistical concepts related to assessment (mean, standard deviation, t-scores, percentiles, etc.)
-Understand the importance of obtaining background information via interview and developmental history
-Understand the importance of behavioral observations as part of the assessment process

Terminology
-Assessment: an ongoing process of providing information about the client: gathering background, conducting behavioral observations, interpreting test scores and the results of any tests used, and making recommendations for treatment or next steps; much broader than we might think
-Testing: a specific event in which a specific instrument is administered
-Measurement: the assignment of a number to describe someone's performance; a quantitative index
-Construct: the idea, concept, or psychological characteristic/trait the counselor is interested in measuring
-Operational definition: the construct explained/defined precisely enough to be measured
-Standardized testing: uniform procedures for administration and scoring
-Reliability: consistency of assessment results (refers to a test score or result, not the test itself)
-Validity: the appropriateness of the interpretation and use made of assessment results; does the test measure what we want it to measure?
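The statistical concepts listed in the objectives above (mean, standard deviation, t-scores, percentiles) can be made concrete with a short computation. Below is a minimal sketch in Python; the sample scores are invented for illustration, and the t-score uses the standard convention of mean 50, SD 10.

import statistics

# Hypothetical raw scores from a small sample (invented data).
scores = [42, 55, 48, 61, 50, 47, 53, 58, 45, 51]

mean = statistics.mean(scores)   # average performance
sd = statistics.stdev(scores)    # sample standard deviation

raw = 58                         # one examinee's obtained score
z = (raw - mean) / sd            # z-score: distance from the mean in SD units
t = 50 + 10 * z                  # t-score: rescaled to mean 50, SD 10

print(f"mean={mean:.1f}, sd={sd:.1f}, z={z:.2f}, t={t:.1f}")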

-Error: the difference between the true score and the obtained score; may be attributed to a number of factors

The nature of reliability
-Necessary but not sufficient condition for validity
-Several different methods exist for estimating reliability
-Assessed primarily with statistical indices
-Coefficients range from 0 to 1 (the closer to 1, the stronger the reliability)

Methods of estimating reliability
-Test-retest/stability: consistency of the test over time; administer the test to a group, wait, administer it again, then correlate the two sets of scores
-Alternate forms/equivalence: two alternate forms of the assessment procedure are administered in close succession, then the scores are correlated
-Internal consistency (not appropriate for speed tests; a computational sketch follows this section):
 -Split-half: split the test items into two equivalent halves, then correlate scores on the first half with scores on the second half
 -Kuder-Richardson and coefficient alpha: formulas that provide measures of internal consistency without splitting the items in half; they give the average of all possible split-half correlations
-Interrater reliability: consistency of scores across two or more raters; useful for evaluating the reliability of subjectively scored tasks

Factors influencing reliability
-Number of items (more items, higher reliability)
-Spread of scores
-Stability of the construct
-Scoring criteria
-Sources of error (internal/external)

Standard error of measurement
-True score: the hypothetical score that would be obtained if the test were perfectly reliable
-Obtained score: the score the student actually obtains
-Confidence interval: a range of scores around the obtained score within which the true score is likely to fall
 -Obtained score +/- 1 SEM = 68% confidence interval
 -Obtained score +/- 2 SEM = 95% confidence interval
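To make the internal-consistency estimates above concrete, here is a minimal Python sketch computing a Spearman-Brown-corrected split-half coefficient and coefficient alpha from a small item-response matrix. The data are invented, and a real analysis would use far more examinees.

import statistics

def pearson(x, y):
    # Pearson correlation between two lists of scores.
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Rows = examinees, columns = items (invented 0-5 ratings on a 6-item scale).
items = [
    [4, 5, 3, 4, 4, 5],
    [2, 1, 2, 3, 2, 2],
    [5, 4, 5, 5, 4, 4],
    [3, 3, 2, 3, 3, 2],
    [1, 2, 1, 2, 1, 1],
]

# Split-half: odd vs. even items, corrected to full length with Spearman-Brown.
odd = [sum(row[0::2]) for row in items]
even = [sum(row[1::2]) for row in items]
r_half = pearson(odd, even)
split_half = 2 * r_half / (1 + r_half)

# Coefficient alpha: k/(k-1) * (1 - sum of item variances / total-score variance).
k = len(items[0])
item_vars = [statistics.variance([row[i] for row in items]) for i in range(k)]
total_var = statistics.variance([sum(row) for row in items])
alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)

print(f"split-half (corrected) = {split_half:.2f}, alpha = {alpha:.2f}")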

*Without good reliability, we can't be very confident in the truth of the scores.
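The SEM and the confidence-interval rules above reduce to simple arithmetic: SEM = SD * sqrt(1 - reliability), and the interval is the obtained score plus or minus 1 or 2 SEMs. A minimal sketch, with the SD and reliability values assumed for illustration:

# SEM = SD * sqrt(1 - reliability); values below are assumed for illustration.
sd = 15            # standard deviation of the score scale
reliability = 0.90
obtained = 108     # the student's obtained score

sem = sd * (1 - reliability) ** 0.5   # standard error of measurement

low68, high68 = obtained - sem, obtained + sem          # +/- 1 SEM, ~68%
low95, high95 = obtained - 2 * sem, obtained + 2 * sem  # +/- 2 SEM, ~95%

print(f"SEM = {sem:.2f}")
print(f"68% confidence interval: {low68:.1f} to {high68:.1f}")
print(f"95% confidence interval: {low95:.1f} to {high95:.1f}")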

The nature of validity
-Validity is a matter of degree
-No test or assessment procedure is equally valid for all purposes
-Validity is seen as a unitary concept, and there are several sources of evidence for it

Types of validity evidence
-Content-related evidence: test content should be based on the purpose of the instrument and the construct or behavioral domain the test purports to measure (test specifications and expert review are often used to provide validity evidence)
-Convergent validity: different measures of the same construct correlate with one another; high correlation coefficients are desirable
-Discriminant validity: measures of different constructs are not significantly correlated with one another; low correlation coefficients are desirable

Criterion-related validity evidence (evidence based on relations to other variables; a computational sketch follows this section)
-Concurrent: how well does performance on the assessment procedure estimate current performance on a criterion?
-Predictive: how well does performance on the assessment procedure predict future performance on a criterion?

Factors influencing validity
-Inappropriate vocabulary
-Unclear/ambiguous directions
-Too few items
-Client characteristics (test taker)
-Changes to administration procedures (test giver)

Relationship between reliability and validity
-Reliability is the consistency of assessment results: results have high reliability if repeated administrations of the test yield the same results. Validity refers to the appropriateness of the interpretation and use made of the assessment results; a valid test measures what we want it to measure. Reliability is a necessary but not sufficient condition for validity. Test results can be reliable and not valid, but they cannot be valid and not reliable.
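Criterion-related evidence comes down to correlating test scores with a criterion measure; the same correlation is called concurrent or predictive depending on when the criterion is collected. A minimal sketch with invented data, where the criterion stands in for something like later first-year GPA:

import statistics

def pearson(x, y):
    # Pearson correlation coefficient between two lists of scores.
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Invented data: admissions-test scores and a later criterion (e.g., GPA).
test_scores = [480, 520, 610, 450, 700, 560, 640, 500]
criterion   = [2.6, 2.9, 3.4, 2.4, 3.8, 3.0, 3.3, 2.7]

print(f"validity coefficient = {pearson(test_scores, criterion):.2f}")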

Types of normative test scores
-Percentile scores: e.g., 80th percentile = you scored better than 80% of the other test takers (a computational sketch follows this section)
-The norm group should be described in the test manual
-Ask yourself: do the demographic characteristics of the norm group match those of the general population? How many people are included in the norm group?

Types of Tests
Objective vs. subjective tests
-Objective tests involve no judgment from the test administrator (yes/no and true/false questions)
-Subjective tests involve judgment (Rorschach, Thematic Apperception Test)
Speed vs. power tests
-Speed tests involve easy questions, but a high quantity of them
-Power tests start easy and become increasingly difficult
Cognitive vs. affective tests
-Cognitive: IQ and achievement tests
-Affective: tests of non-cognitive aspects of personality
Structured personality instruments vs. projective techniques
-Structured: MMPI
-Projective: Thematic Apperception Test, Rorschach

Purpose of Assessment
-Screening: where do we go next?
-Diagnosis and eligibility (DSM-IV diagnosis)
-Making predictions

Ethical Issues Related to Assessment
-Confidentiality: who has access to test results?
-Informed consent
-Validity of results
-Competence: who determines competence?
-Use tests with adequate psychometric characteristics (reliability .7 or higher)
-Explain results to clients in a way they can understand
-Maintain test results and files
-Test security
-Copyright issues
-Don't use outdated or obsolete tests
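As referenced under normative scores above, a percentile rank is just the percentage of the norm group scoring below the examinee. A minimal sketch with an invented norm group (a real norm group would be described in the test manual):

# Invented norm-group scores; real norms come from the test manual.
norm_group = [35, 42, 47, 50, 51, 53, 55, 58, 60, 64]

def percentile_rank(score, norms):
    # Percentage of the norm group scoring below the given score.
    below = sum(1 for s in norms if s < score)
    return 100 * below / len(norms)

print(percentile_rank(58, norm_group))   # 70.0 -> better than 70% of the norm group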

Components of a psychological assessment report
Developmental history (background information)
-Identifying information
-Reason for referral (presenting problem)
-Family/demographic information
-Social/relationship history
-Educational history
-Occupational history
-Medical history
-Previous psychological/psychiatric services
-Present level of psychosocial functioning
Behavioral observations
-Physical appearance
-Degree of cooperation
-Attention level
-Attitude
Clinical interview
Test observations
Test results
Interpretation

Major tests: WAIS-IV
-Scoring individual items correctly is most important
-Clerical scoring is tedious, but patience is important
-Administering this test carries great responsibility
-There is subjectivity in scoring the verbal section
-The most common mistakes are misunderstanding the scoring criteria and carelessness in scoring
-15 subtests (10 required for the FSIQ)
-Improved floors and ceilings in the WAIS-IV (lowest and highest obtainable scores)
-Expanded FSIQ range (40-160): 4 full SDs in either direction
-Important to build rapport with the test taker

-Important to make behavioral observations: note the test taker's physical appearance, motor impairments, vision/hearing problems, speech, attention level, mood, and activity level
-The Full Scale IQ consists of the Verbal Comprehension Index + Perceptual Reasoning Index + Working Memory Index + Processing Speed Index
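The expanded 40-160 FSIQ range follows from the deviation-IQ scale (mean 100, SD 15): four SDs in either direction is 100 +/- 60. Real WAIS-IV composites come from the publisher's norm tables, so the sketch below only illustrates the scaling, not the actual scoring procedure:

# Deviation-IQ scaling: mean 100, SD 15 (illustration only; real scores
# come from the WAIS-IV norm tables).
MEAN, SD = 100, 15

def iq_from_z(z):
    return MEAN + SD * z

print(iq_from_z(-4), iq_from_z(4))   # 40 160 -> 4 SDs in either direction
print((130 - MEAN) / SD)             # an FSIQ of 130 is 2.0 SDs above the mean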

MMPI-2
-The original MMPI had small norm groups
-Purpose: diagnosis of Axis I & II disorders; what is the appropriate treatment plan? How will the client react in particular states of distress?
-Test takers might "fake good" or "fake bad"; this is the reason for the validity scales, which show how much confidence the clinician should have in the scores
-If the validity scales are elevated, we do not automatically conclude that the scores are inaccurate
-A major limitation of the MMPI-2 is its high item overlap; the scales are not really discrete categories
-Code types are much more stable than individual subscales (a computational sketch follows this section)
-If there is no code type, you can interpret the individual spikes/elevations
-7 categories of information can be gained from MMPI-2 interpretation:
 -Symptoms: the individual's main areas of need
 -Perceptions of the environment
 -Reactions to stress: how do they deal with distress?
 -Self-concept: how do they see themselves?
 -Emotional/behavioral control: impulsivity
 -Interpersonal relations
 -Psychological resources: areas of strength, coping mechanisms

Other tests reviewed:
-Woodcock-Johnson Tests of Achievement (WJ III ACH)
-Woodcock-Johnson Tests of Cognitive Abilities, 3rd Edition (WJ-III)
-Hare Psychopathy Checklist-Revised (PCL-R)
-Universal Nonverbal Intelligence Test (UNIT)
-Stanford-Binet 5
-Conners Comprehensive Behavior Rating Scales (Conners CBRS)
-Reynolds Intellectual Assessment Scales (RIAS)
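As noted above, a two-point code type is conventionally the pair of highest clinical-scale elevations, with T >= 65 usually treated as clinically significant. A minimal sketch with invented T-scores; real interpretation also weighs the validity scales and codetype definition rules, which are simplified away here:

# Invented clinical-scale T-scores; T >= 65 treated as clinically elevated.
clinical_scales = {
    "1-Hs": 58, "2-D": 72, "3-Hy": 60, "4-Pd": 55, "5-Mf": 50,
    "6-Pa": 62, "7-Pt": 70, "8-Sc": 59, "9-Ma": 48, "0-Si": 63,
}

elevated = {k: v for k, v in clinical_scales.items() if v >= 65}
if len(elevated) >= 2:
    top_two = sorted(elevated, key=elevated.get, reverse=True)[:2]
    print("two-point codetype:", "/".join(top_two))    # here: 2-D/7-Pt
else:
    print("no codetype; interpret individual elevations:", elevated)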
