Effects of the use of percentage agreement on behavioral observation reliabilities: A reassessment

Authors: Hoi K. Suen, Patrick S. C. Lee

Affiliations: (1) Special Projects, Northern Illinois University, DeKalb, Illinois 60115; (2) The Pennsylvania State University, University Park, Pennsylvania 16802

Abstract: The percentage agreement index has been, and continues to be, a popular measure of interobserver reliability in applied behavior analysis and child development, as well as in other fields in which behavioral observation techniques are used. An algebraic method and a linear programming method were used to assess chance-corrected reliabilities for a sample of past observations in which the percentage agreement index was used. The results indicated that, had kappa been used instead of percentage agreement, between one-fourth and three-fourths of the reported observations could be judged unreliable against a lenient criterion, and between one-half and three-fourths could be judged unreliable against a more stringent criterion. It is suggested that the continued use of the percentage agreement index has seriously undermined the reliabilities of past observations and can no longer be justified in future studies.

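To make the abstract's point concrete, the following sketch (not part of the original article; the interval data are hypothetical) contrasts percentage agreement with Cohen's kappa for a two-observer occurrence/nonoccurrence record, showing how a high percentage agreement can coincide with near-zero chance-corrected agreement.

from collections import Counter

def percentage_agreement(a, b):
    """Proportion of intervals on which the two observers agree."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Chance-corrected agreement: kappa = (p_o - p_e) / (1 - p_e)."""
    n = len(a)
    p_o = percentage_agreement(a, b)
    # Expected chance agreement from each observer's marginal proportions.
    freq_a, freq_b = Counter(a), Counter(b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in set(a) | set(b))
    return (p_o - p_e) / (1 - p_e)

# Hypothetical record of 50 intervals: the behavior rarely occurs, so both
# observers score 0 (nonoccurrence) most of the time and agree largely by chance.
obs1 = [0] * 45 + [1] * 3 + [0] * 2
obs2 = [0] * 45 + [0] * 3 + [1] * 2

print(f"Percentage agreement: {percentage_agreement(obs1, obs2):.2f}")  # 0.90
print(f"Cohen's kappa:        {cohens_kappa(obs1, obs2):.2f}")          # about -0.05, no better than chance

Against either a lenient or a stringent kappa criterion, this hypothetical record would be judged unreliable despite its 90% agreement.
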
Keywords: reliability; agreement scores; interobserver reliability