Abstract: | This paper reports the results obtained with a group of twenty-four 14-year-old students when presented with a set of algebra tasks by the Leeds Modelling System, LMS. These same students were given a comparable paper-and-pencil test and detailed interviews some four months later. The latter studies uncovered several kinds of student misunderstandings that LMS had not detected. Some students had profound misunderstandings of algebraic notation; others used strategies such as substituting numbers for variables until the equation balanced. Additionally, it appears that the student errors fall into several distinct classes: namely, manipulative, parsing, clerical, and “random.” LMS and its rule database have been enhanced as a result of this experiment, and LMS is now able to diagnose the majority of the errors encountered in this experiment. Finally, the paper gives a process-oriented explanation for student errors, and re-examines related work in cognitive modelling in the light of the types of student errors reported in this experiment. Misgeneralization is a mechanism suggested to explain some of the mal-rules noted in this study.