An examination of interrater reliability for scoring the Rorschach Comprehensive System in eight data sets |
| |
Authors: | Meyer, Gregory J.; Hilsenroth, Mark J.; Baxter, Dirk; Exner, John E.; Fowler, J. Christopher; Piers, Craig C.; Resnick, Justin |
| |
Affiliation: | Department of Psychology, University of Alaska, Anchorage 99508, USA. afgjm@uaa.alaska.edu |
| |
Abstract: | In this article, we describe interrater reliability for the Comprehensive System (CS; Exner, 1993) in 8 relatively large samples, including (a) students, (b) experienced researchers, (c) clinicians, (d) clinicians and then researchers, (e) a composite clinical sample (i.e., a to d), and 3 samples in which randomly generated erroneous scores were substituted for (f) 10%, (g) 20%, or (h) 30% of the original responses. Across samples, 133 to 143 statistically stable CS scores had excellent reliability, with median intraclass correlations of .85, .96, .97, .95, .93, .95, .89, and .82, respectively. We also demonstrate that the reliability findings from this study closely match results derived from a synthesis of prior research, that CS summary scores are more reliable than scores assigned to individual responses, that small samples are more likely to generate unstable and lower reliability estimates, and that Meyer's (1997a) procedures for estimating response-segment reliability were accurate. The CS can be scored reliably, but because scoring accuracy depends on coder skill, clinicians must conscientiously monitor their accuracy. |
| |
Keywords: | |
This article is indexed in PubMed and other databases.
|