Evaluating Cross-Lingual Equating

Abstract: Adapting educational tests from one language to others requires equating across the different language versions so that examinees from different language groups can be compared. Such equating is usually based on translated items considered to have similar content and psychometric characteristics in both the source and target languages. However, because it is not possible to ascertain that these items really are similar across languages, it is difficult to control and validate the equating outcome. The purpose of this study was to develop a method for evaluating cross-lingual equating and to apply it to the Psychometric Entrance Test (PET) used for admission to Israeli universities. This test is written in Hebrew (the source language, SL) and translated into five languages. A cross-lingual equating in a double-linking plan was performed on each of 12 forms translated into one target language (TL1) and each of 9 forms translated into another target language (TL2). The average difference between the equating results of the two links, indicating the overall instability incorporated in the equating process, was more than 10 times the standard error of equating in the TL1-SL process and about half that size in the TL2-SL process. The significance of the results and the differences found between the two TLs are discussed, as is the potential of the method as a general evaluative tool for cross-lingual equating.
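The double-linking evaluation described above can be sketched numerically: equate the same raw-score points through two independent links, then compare the average discrepancy between the two conversions with the standard error of equating (SEE). The function name, the score values, and the SEE value below are all illustrative assumptions, not data from the study; the abstract does not specify how the SEE itself was computed.

```python
import numpy as np

def double_link_instability(equated_link1, equated_link2, see):
    """Illustrative stability check for a double-linking design:
    compare two equating conversions of the same raw scores and
    express their mean discrepancy in units of the SEE."""
    diff = np.abs(np.asarray(equated_link1, dtype=float)
                  - np.asarray(equated_link2, dtype=float))
    mean_diff = diff.mean()
    # A ratio well above 1 suggests instability beyond sampling error;
    # the abstract reports a ratio above 10 for TL1-SL and about half
    # that for TL2-SL.
    return mean_diff, mean_diff / see

# Hypothetical equated scores for the same raw-score points via two links
link1 = [100, 105, 110, 115, 120]
link2 = [102, 108, 112, 118, 124]
mean_diff, ratio = double_link_instability(link1, link2, see=1.0)
```

With these made-up numbers the mean discrepancy is 2.8 score points, i.e. 2.8 times the assumed SEE, which by the abstract's standard would already indicate some instability in the equating.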
Keywords:
|
|