Automatic grading and hinting in open-ended text questions
Institution: Volgograd State Technical University, Lenin Ave, 28, Volgograd 400005, Russia
Abstract: Open-ended text questions allow better assessment of a learner's knowledge, but analysing the answers to such questions, checking their correctness, and generating detailed formative feedback about the learner's errors are more difficult and complex tasks than for closed-ended questions such as multiple choice.

The analysis of answers to open-ended questions can be performed at different levels. Character-level analysis finds errors in the placement of characters inside a word or token; it is typically used to detect and correct typos, making it possible to distinguish typos from actual errors in the learner's answer. Word-level or token-level analysis finds misplaced, extraneous, or missing words in a sentence. Semantic-level analysis formally captures the meaning of the learner's answer and compares it with the meaning of a correct answer given in a natural or formal language. Some systems and approaches combine analysis at several levels.

The variability of answers to open-ended questions significantly increases the complexity of error search and formative feedback generation. Different types of patterns, including regular expressions, and their use in questions with patterned answers are discussed, as are the types of formative feedback and the ability of modern approaches to generate feedback at the different levels.

Statistical approaches and loosely defined template rules are prone to false-positive grading; they generally lower the workload of creating questions but provide limited feedback. Approaches based on strictly defined sets of correct answers are better at providing hinting and answer-until-correct feedback, but they carry a higher question-creation workload, because the teacher must account for every possible correct answer, and they detect fewer types of errors.

For creating automated e-learning courses, the best choice is template-based open-ended question systems such as OntoPeFeGe, Preg, METEOR, and CorrectWriting, which support answer-until-correct feedback and can find and report various types of errors. This approach requires more time to create questions, but less time to manage the learning process once the courses are running.
Keywords: e-learning  Automatic error recognition  Regular expressions  Editing distances  Computational linguistics  Ontology  Formative feedback
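
As an illustration of the character-level and token-level analysis described above, the following is a minimal Python sketch, not the method of any system named in the abstract: a Levenshtein-distance threshold separates probable typos from actual errors, and a positional token comparison reports extraneous and missing words. The names classify_tokens and typo_ratio, the threshold value, and the positional pairing are illustrative assumptions.

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance between two strings (row-by-row dynamic programming)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion from a
                            curr[j - 1] + 1,      # insertion into a
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]


def classify_tokens(answer: str, reference: str, typo_ratio: float = 0.25):
    """Label each answer token as correct, a probable typo, or an error.

    Tokens are paired positionally for simplicity; real systems align the
    sequences first so that misplaced words are also detected.
    """
    ans_tokens = answer.lower().split()
    ref_tokens = reference.lower().split()
    results = []
    for ans, ref in zip(ans_tokens, ref_tokens):
        if ans == ref:
            results.append((ans, "correct"))
        elif edit_distance(ans, ref) <= max(1, int(len(ref) * typo_ratio)):
            results.append((ans, f"probable typo of '{ref}'"))
        else:
            results.append((ans, f"error, expected '{ref}'"))
    # Token-level findings: words the learner added or left out at the end.
    for extra in ans_tokens[len(ref_tokens):]:
        results.append((extra, "extraneous word"))
    for missing in ref_tokens[len(ans_tokens):]:
        results.append((missing, "missing word"))
    return results


if __name__ == "__main__":
    for token, verdict in classify_tokens(
            "the functon returns a pointr to the node",
            "the function returns a pointer to the node"):
        print(f"{token:10s} -> {verdict}")
```

A fuller grader would align the two token sequences first (for example with an edit distance computed over tokens) before the per-token comparison, so that a misplaced word is reported as such rather than as a chain of unrelated errors.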