Interobserver Agreement in Behavioral Research: Importance and Calculation

Authors: Marley W. Watkins, Miriam Pacheco

Affiliation: (1) Department of Educational and School Psychology, The Pennsylvania State University, University Park, PA
Abstract: Behavioral researchers have developed a sophisticated methodology for evaluating behavioral change that depends on accurate measurement of behavior. Direct observation has traditionally been the mainstay of behavioral measurement. Consequently, researchers must attend to the psychometric properties of observational measures, such as interobserver agreement, to ensure reliable and valid measurement. Of the many indices of interobserver agreement, percentage of agreement is the most popular. Its use persists despite repeated admonitions and empirical evidence that it is not the most psychometrically sound statistic for determining interobserver agreement, because it fails to take chance agreement into account. Cohen's (1960) kappa has long been proposed as the more psychometrically sound statistic for assessing interobserver agreement. Kappa is described and computational methods are presented.
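The abstract contrasts percentage of agreement with the chance-corrected kappa statistic. As an illustrative sketch only (the function and the observer data below are hypothetical, not taken from the article), Cohen's kappa for two observers is computed as kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed proportion of agreement and p_e is the agreement expected by chance from each observer's marginal category proportions:

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's (1960) kappa for two observers rating the same items."""
    n = len(ratings_a)
    if n == 0 or n != len(ratings_b):
        raise ValueError("ratings must be non-empty and equal in length")
    # Observed proportion of agreement
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Chance agreement from the marginal proportions of each category
    counts_a = Counter(ratings_a)
    counts_b = Counter(ratings_b)
    p_e = sum(counts_a[c] * counts_b.get(c, 0) for c in counts_a) / (n * n)
    if p_e == 1.0:
        raise ValueError("kappa undefined: chance agreement is 1")
    return (p_o - p_e) / (1 - p_e)

# Hypothetical example: two observers coding 10 observation intervals
# as occurrence (1) or nonoccurrence (0) of a target behavior.
obs_a = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
obs_b = [1, 0, 0, 1, 0, 1, 1, 1, 0, 1]
# Percentage agreement here is 80%, but kappa is lower (about .58)
# because it discounts the agreement expected by chance.
```

Note how percentage agreement (0.80) overstates reliability relative to kappa (about 0.58) once chance agreement (p_e = 0.52 for these marginals) is removed, which is exactly the deficiency of percentage agreement the abstract describes.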
Keywords: interobserver agreement; kappa; interrater reliability; observer agreement