Examiner error in curriculum-based measurement of oral reading
Authors: Kelli D. Cummings, Gina Biancarosa, Andrew Schaper, Deborah K. Reed
Institutions: 1. Center on Teaching and Learning, University of Oregon, USA; 2. Educational Methodology, Policy, and Leadership, University of Oregon, USA; 3. Florida Center for Reading Research, Florida State University, USA
Abstract: Although curriculum-based measures of oral reading (CBM-R) have strong technical adequacy, there is still reason to believe that student performance may be influenced by factors of the testing situation, such as errors examiners make in administering and scoring the test. This study examined the construct-irrelevant variance introduced by examiners using a cross-classified multilevel model. We sought to determine the extent of variance in student CBM-R scores attributable to examiners and, if present, the extent to which it was moderated by students' grade level and English learner (EL) status. Fit indices indicated that a cross-classified random effects model (CCREM) best fit the data, with measures nested within students, students nested within schools, and examiners crossed with schools. Intraclass correlations from the CCREM revealed that roughly 16% of the variance in student CBM-R scores was attributable to differences between examiners. The remaining variance was associated with the measurement level (3.59%), between students (75.23%), and between schools (5.21%). Results were moderated by grade level but not by EL status. The discussion addresses the implications of this error for low-stakes and high-stakes decisions about students, teacher evaluation systems, and hypothesis testing in reading intervention research.
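The variance decomposition reported in the abstract amounts to computing intraclass correlations, i.e. each level's share of the total variance across the model's random effects. A minimal sketch of that arithmetic, using hypothetical variance components scaled so the resulting shares match the reported percentages (the examiner share is taken as the remainder, roughly 16%):

```python
# Hypothetical variance components for a cross-classified random effects
# model (CCREM) of CBM-R scores. The raw values are illustrative; they are
# scaled so the resulting intraclass correlations match the shares reported
# in the abstract (measurement 3.59%, students 75.23%, schools 5.21%,
# examiners = remainder, ~16%).
variance_components = {
    "measurement": 3.59,
    "students": 75.23,
    "schools": 5.21,
    "examiners": 100.0 - (3.59 + 75.23 + 5.21),  # 15.97
}

total = sum(variance_components.values())

# Each intraclass correlation (ICC) is that level's proportion of the
# total variance.
iccs = {level: v / total for level, v in variance_components.items()}

for level, icc in iccs.items():
    print(f"{level}: {icc:.2%}")
```

This is only the final bookkeeping step; estimating the variance components themselves requires fitting the CCREM to the nested and crossed data (e.g. with a mixed-effects modeling package).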
Keywords: Assessment fidelity; Curriculum-based measurement; Progress monitoring; CBM-R; Reading assessment; DIBELS
Indexed in ScienceDirect and other databases.