Purpose
We sought to empirically assess the effect of predictor method characteristics (test form, item-type, and test-type) on retest score change associated with an invariant construct, general mental ability (GMA), and to evaluate the effect of retesting on the criterion-related validity of assessments that vary in their susceptibility to retest effects.

Design
Three hundred seven individuals completed a battery of GMA assessments. After a 6-week interval, participants returned to the testing site to retest using both alternate and identical forms of the initial assessments.

Findings
Greater score gains were observed on assessments comprising heterogeneous item-types than on those comprising homogeneous item-types, and on performance-based assessments than on self-report assessments. However, despite these variations in score gains, for all assessments the relationships between initial test scores and criterion scores did not differ from the relationships between retest scores and criterion scores.

Implications
Tests and procedures that reduce reliance on test- or item-specific knowledge and skill may help minimize score changes due to retesting across multiple administrations. Moreover, under the boundary conditions present in this study, the criterion-related validity of ability assessments may not be affected by increases in test-specific knowledge and skills.

Originality/Value
Despite the prevalence and industry support of retesting, a comprehensive understanding of retest score change still eludes researchers and practitioners. This ambiguity may be due in part to neglect of the method-construct distinction in the retest literature. This is the first report to explicitly utilize the method-construct distinction in an effort to examine the causes and consequences of retest effects.