Evaluating Equating Transformations in IRT Observed-Score and Kernel Equating Methods
Authors: Waldir Leôncio, Marie Wiberg, Michela Battauz
Affiliations: 1. Department of Statistical Sciences, University of Padua, Padua, Italy; 2. Centre for Educational Measurement, Centre for Biostatistics and Epidemiology, University of Oslo, Oslo, Norway; 3. Department of Statistics, Umeå School of Business, Economics and Statistics, Umeå University, Umeå, Sweden; 4. Department of Economics and Statistics, University of Udine, Udine, Italy
Abstract: Test equating is a statistical procedure used to ensure that scores from different test forms can be used interchangeably. Several methodologies are available to perform equating, some based on the Classical Test Theory (CTT) framework and others on the Item Response Theory (IRT) framework. This article compares equating transformations originating from three different frameworks, namely IRT Observed-Score Equating (IRTOSE), Kernel Equating (KE), and IRT Kernel Equating (IRTKE). The comparisons were made under different data-generating scenarios, including a novel data-generation procedure that allows test data to be simulated without relying on IRT parameters while still providing control over some test score properties, such as distribution skewness and item difficulty. Our results suggest that IRT methods tend to provide better results than KE even when the data are not generated from IRT processes. KE may still provide satisfactory results if a proper pre-smoothing solution can be found, while also being much faster than the IRT methods. For daily applications, we recommend checking the sensitivity of the results to the equating method, while minding the importance of good model fit and of meeting the assumptions of the chosen framework.
Keywords: equating; item response theory; classical test theory; psychometrics; simulation; statistics
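The abstract mentions a data-generation procedure that simulates test data without relying on IRT parameters. As a rough illustration of what such a process can look like, the Python sketch below draws skewed proficiency proxies and item easiness values and combines them through a simple product rule; every distributional choice in it is an assumption made for this example, and it is not the procedure developed in the article.

```python
# A minimal sketch, not the article's procedure: it only illustrates generating
# dichotomous test data without IRT parameters while keeping some control over
# item difficulty and the skewness of the sum-score distribution. The Beta
# proficiency proxies and the product response rule are illustrative assumptions.
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(2024)
n_examinees, n_items = 2000, 40

# Right-skewed "proficiency" proxies on (0, 1); swapping the Beta shape
# parameters would skew the score distribution the other way.
proficiency = rng.beta(2, 5, size=n_examinees)

# Item easiness values on (0, 1): smaller values make an item harder.
easiness = rng.uniform(0.3, 0.9, size=n_items)

# Probability of a correct response: a simple product rule, clipped to [0, 1].
# No discrimination, difficulty, or guessing parameters are estimated or used.
prob_correct = np.clip(proficiency[:, None] * (2.0 * easiness[None, :]), 0.0, 1.0)

responses = rng.binomial(1, prob_correct)  # shape: (n_examinees, n_items)
sum_scores = responses.sum(axis=1)

# Inspect the properties the simulation is meant to control.
print(f"mean sum score      : {sum_scores.mean():.2f}")
print(f"sum-score skewness  : {skew(sum_scores):.2f}")
print(f"first item p-values : {responses.mean(axis=0)[:5].round(2)}")
```

The resulting score matrix could then be passed to any equating routine to compare methods under a non-IRT generating process, which is the kind of scenario the abstract describes.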