Using Automated Essay Scores as an Anchor When Equating Constructed Response Writing Tests
Authors: Russell G. Almond
Affiliation: Department of Educational Psychology and Learning Systems, Florida State University
Abstract: Assessments consisting of only a few extended constructed-response items (essays) are not typically equated using anchor test designs, because each form usually contains too few essay prompts to allow for meaningful equating. This article explores the idea that output from an automated scoring program designed to measure writing fluency (a common objective of many writing prompts) can be used in place of a more traditional anchor. The linear-logistic equating method used in this article is a variant of the Tucker linear equating method appropriate for the limited score range typical of essays. The procedure is applied to historical data. Although it yields only small improvements over identity equating (treating the prompts as already equated), it provides a viable alternative and a mechanism for checking that identity equating is appropriate. This may be particularly useful for measuring rater drift or for equating mixed-format tests.
Keywords: automated essay scoring, constructed response, equating, writing assessment
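
The abstract refers to the Tucker linear equating method. As a point of reference, below is a minimal sketch of standard Tucker linear equating for a non-equivalent-groups anchor design (following the usual moment formulas, as in Kolen and Brennan), with the anchor scores imagined to come from an automated writing-fluency measure. The function name, the synthetic-population weight default, and the toy data are illustrative assumptions; the paper's linear-logistic variant, which adapts the method to the bounded score range of essays, is not reproduced here.

import numpy as np

def tucker_linear_equating(x, v1, y, v2, w1=0.5):
    # Standard Tucker linear equating for a non-equivalent-groups anchor
    # design.  x, v1: Form X scores and anchor scores for group 1;
    # y, v2: Form Y scores and anchor scores for group 2;
    # w1: synthetic-population weight for group 1 (assumed 0.5 here).
    # Returns a function mapping Form X scores onto the Form Y scale.
    w2 = 1.0 - w1

    # Regression slopes of each form on the anchor within its own group.
    g1 = np.cov(x, v1, ddof=1)[0, 1] / np.var(v1, ddof=1)
    g2 = np.cov(y, v2, ddof=1)[0, 1] / np.var(v2, ddof=1)

    d_mu = v1.mean() - v2.mean()                      # anchor mean difference
    d_var = np.var(v1, ddof=1) - np.var(v2, ddof=1)   # anchor variance difference

    # Synthetic-population moments (Tucker assumptions: linear regression
    # on the anchor, equal conditional variances across groups).
    mu_x = x.mean() - w2 * g1 * d_mu
    mu_y = y.mean() + w1 * g2 * d_mu
    var_x = np.var(x, ddof=1) - w2 * g1**2 * d_var + w1 * w2 * g1**2 * d_mu**2
    var_y = np.var(y, ddof=1) + w1 * g2**2 * d_var + w1 * w2 * g2**2 * d_mu**2

    slope = np.sqrt(var_y / var_x)
    return lambda score: mu_y + slope * (score - mu_x)

# Toy illustration (simulated, not the paper's data): continuous stand-ins
# for 0-6 essay ratings, with an automated fluency score as the anchor.
rng = np.random.default_rng(0)
x = rng.normal(3.2, 1.0, 500)
v1 = 0.8 * x + rng.normal(0.0, 0.5, 500)
y = rng.normal(3.0, 1.1, 500)
v2 = 0.8 * y + rng.normal(-0.2, 0.5, 500)  # slightly lower-fluency group
equate = tucker_linear_equating(x, v1, y, v2)
print(equate(4.0))  # a Form X score of 4.0 expressed on the Form Y scale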