Classification accuracy and efficiency of writing screening using automated essay scoring
Institution:1. University of Oregon, United States of America; 2. Department of Psychological, Health & Learning Sciences, University of Houston, USA; 3. Department of Educational and Counselling Psychology, and Special Education, The University of British Columbia, Canada
Abstract:The present study leveraged advances in automated essay scoring (AES) technology to explore a proof of concept for a writing screener using the Project Essay Grade (PEG) program. First, the study investigated the extent to which an AES-scored multi-prompt writing screener accurately classified students as at risk of failing a Common Core-aligned English language arts state test. Second, the study explored whether a similar level of classification accuracy could be achieved with a more efficient form of the AES screener that used fewer writing prompts. Third, the classification accuracy of the AES-scored screeners was compared to that of screeners scored for word count. Students in Grades 3–5 (n = 185, 167, and 187, respectively) composed six essays on randomly assigned topics, two in each of three genres (narrative, informative, and persuasive). Receiver operating characteristic (ROC) curve analysis was used to assess classification accuracy and to identify multiple cut scores with their associated sensitivity and specificity values and positive and negative posttest probabilities. Results indicated that the AES-scored multi-prompt screener and the screeners with fewer prompts yielded acceptable classification accuracy, were efficient, and were more accurate than screeners scored for word count. Overall, the results illustrate the viability of writing screening using AES.
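The cut-score logic the abstract describes can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration (the toy scores and risk labels are invented, not the study's data): for a candidate cut score it computes sensitivity and specificity, and then selects the cut that maximizes Youden's J, one common criterion in ROC analysis for screening instruments.

```python
def sens_spec(scores, at_risk, cut):
    """Sensitivity/specificity when a score <= cut flags a student as at risk.

    scores:  screener scores, one per student (lower = weaker writing)
    at_risk: True if the student actually failed the state test
    """
    tp = sum(1 for s, y in zip(scores, at_risk) if y and s <= cut)
    fn = sum(1 for s, y in zip(scores, at_risk) if y and s > cut)
    tn = sum(1 for s, y in zip(scores, at_risk) if not y and s > cut)
    fp = sum(1 for s, y in zip(scores, at_risk) if not y and s <= cut)
    sens = tp / (tp + fn) if (tp + fn) else 0.0
    spec = tn / (tn + fp) if (tn + fp) else 0.0
    return sens, spec


def best_cut(scores, at_risk):
    """Pick the cut score that maximizes Youden's J = sens + spec - 1."""
    candidates = sorted(set(scores))
    return max(candidates,
               key=lambda c: sum(sens_spec(scores, at_risk, c)) - 1)


# Toy example: eight students, the four lowest scorers failed the test.
scores = [1, 2, 3, 4, 5, 6, 7, 8]
at_risk = [True, True, True, True, False, False, False, False]
cut = best_cut(scores, at_risk)          # cut = 4 separates the groups cleanly
print(cut, sens_spec(scores, at_risk, cut))
```

In practice researchers often report several cuts along the ROC curve (trading sensitivity against specificity) rather than a single optimum, which matches the abstract's mention of "multiple cut scores with associated sensitivity and specificity values."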
Indexed in ScienceDirect and other databases.