Similar Documents
20 similar documents found (search time: 31 ms).
1.
The validity of cognitive ability tests is often interpreted solely as a function of the cognitive abilities that these tests are supposed to measure, but other factors may be at play. The effect of test anxiety on the criterion-related validity (CRV) of tests was the topic of a recent study by Reeve, Heggestad, and Lievens (Reeve, C. L., Heggestad, E. D., & Lievens, F. (2009). Modeling the impact of test anxiety and test familiarity on the criterion-related validity of cognitive ability tests. Intelligence, 37, 34–41). They proposed a model on the basis of classical test theory and concluded, on the basis of data simulations, that test anxiety typically decreases the CRV. In this paper, we view the effects of test anxiety on cognitive ability test scores, and their implications for validity coefficients, from the perspective of confirmatory factor analysis. We argue that CRV will be increased above the effect of the targeted constructs if test anxiety affects both predictor and criterion performance. This prediction is tested empirically by considering the convergent validity of subtests in five experimental studies of the effect of stereotype threat on test performance. Results show that the effects of test anxiety on cognitive test performance may actually enhance the validity of tests.
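The abstract's core claim — that anxiety operating on both the predictor and the criterion contributes shared variance and can therefore raise, not lower, the observed validity coefficient — can be illustrated with a small simulation. This is a sketch with invented loadings, not the authors' model:

```python
import math
import random

def corr(x, y):
    """Pearson correlation of two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

random.seed(1)
n = 20000
ability = [random.gauss(0, 1) for _ in range(n)]
anxiety = [random.gauss(0, 1) for _ in range(n)]

# Predictor and criterion both load on ability; anxiety (hypothetical
# loading -0.4) depresses performance in BOTH settings, so it becomes
# a second source of shared variance.
test = [0.7 * a - 0.4 * x + random.gauss(0, 1) for a, x in zip(ability, anxiety)]
job  = [0.5 * a - 0.4 * x + random.gauss(0, 1) for a, x in zip(ability, anxiety)]

# Contrast: a criterion driven by ability alone (anxiety absent).
job0 = [0.5 * a + random.gauss(0, 1) for a in ability]

r_shared = corr(test, job)   # validity with anxiety on both sides
r_only_g = corr(test, job0)  # validity carried by ability alone
print(round(r_shared, 3), round(r_only_g, 3))
```

With these loadings the correlation that includes the shared anxiety pathway comes out clearly higher than the ability-only correlation, which is the mechanism the authors argue can enhance CRV.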

2.
Claims of changes in the validity coefficients associated with general mental ability (GMA) tests due to the passage of time (i.e., temporal validity degradation) have been the focus of an on-going debate in applied psychology. To evaluate whether and, if so, under what conditions this degradation may occur, we integrate evidence from multiple sub-disciplines of psychology. The temporal stability of construct validity is considered in light of the evidence regarding the differential stability of g and the invariance of measurement properties of GMA tests over the adult life-span. The temporal stability of criterion-related validity is considered in light of evidence from long-term predictive validity studies in educational and occupational realms. The evidence gained from this broad-ranging review suggests that temporal degradation of the construct- and criterion-related validity of ability test scores may not be as ubiquitous as some have previously concluded. Rather, it appears that both construct and criterion-related validity coefficients are reasonably robust over time and that any apparent degradation of criterion-related validity coefficients has more to do with changes in the determinants of task performance and changes in the nature of the criterion domain than with temporal degradation per se (i.e., the age of the test scores). A key exception to the conclusion that temporal validity degradation is more myth than reality concerns decision validity. Although the evidence is sparse, it is likely that the utility of a given GMA test score for making diagnostic decisions about an individual deteriorates over time. Importantly, we also note several areas in need of additional and more rigorous research before strong conclusions can be supported.

3.
Using a latent variable approach, the authors examined whether retesting on a cognitive ability measure resulted in measurement and predictive bias. A sample of 941 candidates completed a cognitive ability test in a high-stakes context. Results of both the within-group between-occasions comparison and the between-groups within-occasion comparison indicated that no measurement bias existed during the initial testing but that retesting induced both measurement and predictive bias. Specifically, the results suggest that the factor underlying the retest scores was less saturated with g and more associated with memory than the latent factor underlying initial test scores and that these changes eliminated the test's criterion-related validity. This study's implications for retesting theory, practice, and research are discussed.

4.
In the theory of test validity it is assumed that error scores on two distinct tests, a predictor and a criterion, are uncorrelated. The expected-value concept of true score in the classical test-theory model as formulated by Lord and Novick, Guttman, and others, implies mathematically, without further assumptions, that true scores and error scores are uncorrelated. This concept does not imply, however, that error scores on two arbitrary tests are uncorrelated, and an additional axiom of “experimental independence” is needed in order to obtain familiar results in the theory of test validity. The formulas derived in the present paper do not depend on this assumption and can be applied to all test scores. These more general formulas reveal some unexpected and anomalous properties of test validity and have implications for the interpretation of validity coefficients in practice. Under some conditions there is no attenuation produced by error of measurement, and the correlation between observed scores sometimes can exceed the correlation between true scores, so that the usual correction for attenuation may be inappropriate and misleading. Observed scores on two tests can be positively correlated even when true scores are negatively correlated, and the validity coefficient can exceed the index of reliability. In some cases of practical interest, the validity coefficient will decrease with increase in test length. These anomalies sometimes occur even when the correlation between error scores is quite small, and their magnitude is inversely related to test reliability. The elimination of correlated errors in practice will not enhance a test's predictive value, but will restore the properties of the validity coefficient that are familiar in the classical theory.
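For reference, the familiar classical correction for attenuation that this paper's more general formulas relax is a one-liner; the reliabilities and validities below are illustrative numbers only:

```python
import math

def disattenuate(r_xy, r_xx, r_yy):
    """Classical correction for attenuation: estimated true-score
    correlation from an observed validity coefficient r_xy and the two
    reliabilities r_xx, r_yy. Valid only under the uncorrelated-errors
    axiom discussed in the abstract."""
    return r_xy / math.sqrt(r_xx * r_yy)

# Under uncorrelated errors, the observed validity is attenuated
# relative to the true-score correlation:
print(round(disattenuate(0.30, 0.80, 0.70), 2))  # 0.4

# The abstract's anomaly: with correlated errors the observed r can
# exceed the bound the classical model allows, so the "corrected"
# value overshoots 1.0 -- a sign the axiom has been violated.
print(disattenuate(0.75, 0.80, 0.70))  # exceeds 1.0
```

A corrected coefficient above 1.0 is thus a useful diagnostic that experimental independence does not hold for the two measures.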

5.
Previous studies have concluded that cognitive ability tests are not predictively biased against Hispanic American job applicants because test scores generally overpredict, rather than underpredict, their job performance. However, we highlight two important shortcomings of these past studies and use meta-analytic and computation modeling techniques to address these two shortcomings. In Study 1, an updated meta-analysis of the Hispanic–White mean difference (d-value) on job performance was carried out. In Study 2, computation modeling was used to correct the Study 1 d-values for indirect range restriction and combine them with other meta-analytic parameters relevant to predictive bias to determine how often cognitive ability test scores underpredict Hispanic applicants’ job performance. Hispanic applicants’ job performance was underpredicted by a small to moderate amount in most conditions of the computation model. In contrast to previous studies, this suggests cognitive ability tests can be expected to exhibit predictive bias against Hispanic applicants much of the time. However, some conditions did not exhibit underprediction, highlighting that predictive bias depends on various selection system parameters, such as the criterion-related validity of cognitive ability tests and other predictors used in selection. Regardless, our results challenge “lack of predictive bias” as a rationale for supporting test use.
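The study corrects d-values for indirect range restriction; as a simpler illustration of why such corrections matter, the classical correction for direct range restriction (Thorndike Case II) shows how selection shrinks an observed validity coefficient. The input values here are hypothetical:

```python
import math

def correct_direct_range_restriction(r, u):
    """Thorndike Case II correction for direct range restriction.
    r: validity observed in the restricted (selected) sample.
    u: ratio of unrestricted (applicant) to restricted (incumbent)
       predictor standard deviations, u >= 1."""
    return (r * u) / math.sqrt(1 + r * r * (u * u - 1))

# A validity of .25 observed among hires, with an applicant-pool SD
# 1.5 times the incumbent SD, implies a larger applicant-pool validity:
r_applicant = correct_direct_range_restriction(0.25, 1.5)
print(round(r_applicant, 3))  # 0.361
```

Corrections for indirect restriction (where selection occurred on a third, correlated variable) are more involved, but the direction of the effect is the same: restricted samples understate applicant-pool validity, which is why the authors apply the correction before evaluating predictive bias.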

6.
The main objectives in this research were to introduce the concept of team role knowledge and to investigate its potential usefulness for team member selection. In Study 1, the authors developed a situational judgment test, called the Team Role Test, to measure knowledge of 10 roles relevant to the team context. The criterion-related validity of this measure was examined in 2 additional studies. In a sample of academic project teams (N = 93), team role knowledge predicted team member role performance (r = .34). Role knowledge also provided incremental validity beyond mental ability and the Big Five personality factors in the prediction of role performance. The results of Study 2 revealed that the predictive validity of role knowledge generalizes to team members in a work setting (N = 82, r = .30). The implications of the results for selection in team environments are discussed.

7.
Human Performance (2013), 26(3), 267–269
The effects of motivated distortion on forced-choice (FC) and normative inventories were examined in three studies. Study 1 examined the effects of distortion on the construct validity of the two item formats in terms of convergent and discriminant validity. The results showed that both types of measures were susceptible to motivated distortion; however, the FC items were better indicators of personality and less related to socially desirable responding when participants were asked to respond as if applying for a job. Study 2 considered the criterion-related validity of the inventories in terms of predicting supervisors' ratings of job performance, finding that distortion had a more deleterious effect on the validity of the normative inventory, with some enhancement of the validity of the FC inventory being observed. Study 3 investigated whether additional constructs are introduced into the measurement process when motivated respondents attempt to increase scores on FC items. Results of Study 3 indicated that individuals higher in cognitive ability tend to have more accurate theories about which traits are job-related and therefore are more successful at improving scores on FC inventories. Implications for using personality inventories in personnel selection are discussed.

8.
Individuals vary in how they perceive cognitive ability tests; thus, it is useful for organizations to consider how individual differences influence applicant perceptions of selection tools. The present study examined the influence of implicit theories of ability and locus of control on perceptions of face validity and predictive validity for two cognitive ability tests. Relationships between perceptions and test experience, job‐relevant experience, and job familiarity were also examined. Interactions between implicit theories and self‐assessed performance in predicting perceptions were found, although not of the form hypothesized. Furthermore, job familiarity and prior success in selection contexts were related to perceptions. Finally, sample type interacted with test type to influence perceptions. Implications for selection system design and research on applicant perceptions are discussed.

9.
Decision‐making researchers purport that a novel cognitive ability construct, cognitive reflection, explains variance in intuitive thinking processes that traditional mental ability constructs do not. However, researchers have questioned the validity of the primary measure because of poor construct conceptualization and lack of validity studies. Prior studies have not adequately aligned the analytical techniques with the theoretical basis of the construct, dual‐processing theory of reasoning. The present study assessed the validity of inferences drawn from the cognitive reflection test (CRT) scores. We analyzed response processes with an item response tree model, a method that aligns with the dual‐processing theory in order to interpret CRT scores. Findings indicate that the intuitive and reflective factors that the test purportedly measures were indistinguishable. Exploratory, post hoc analyses demonstrate that CRT scores are most likely capturing mental abilities. We suggest that future researchers recognize and distinguish between individual differences in cognitive abilities and cognitive processes.

10.
The purpose of this study was to develop and validate a construct-based situational judgment test of the HEXACO personality dimensions. In four studies, among applicants, employees, and Amazon Mechanical Turk participants (Ns = 72–305), we showed that it is possible to assess the six personality dimensions with a situational judgment test and that the criterion-related validity of the situational judgment test is comparable to the criterion-related validity of traditional self-reports but lower than the criterion-related validity of other-reports of personality. Test–retest coefficients (with a time interval of 2 weeks) varied between .55 and .74. Considering personality is the most commonly assessed construct in employee selection contexts (Ryan et al., 2015), this situational judgment test may provide human resources professionals with an alternative assessment tool.

11.
Two studies examined the effects of cognitive test anxiety on students' memory, comprehension, and understanding of expository text passages in situations without externally‐imposed evaluative pressure. The results gathered through structural equations modelling demonstrated a significant impact of cognitive test anxiety on performance in conditions with and without external evaluative pressure. The impact of cognitive test anxiety was stronger in those conditions with external evaluative pressure. The results are interpreted to support processing models of test anxiety that propose test anxiety interferes with learning through deficiencies in encoding, organization, and storage in addition to the classic interpretation of retrieval failures. In addition, the data provide support for additive models of test anxiety that address both stable and situational factors in the overall impact of cognitive test anxiety on performance. Copyright © 2004 John Wiley & Sons, Ltd.

12.
This study examined various psychometric properties of Forms A and B of the Situation Test, developed by Rehm and Marston (Journal of Consulting and Clinical Psychology, 1968, 37, 565–574) for the assessment of heterosocial skill and anxiety. A third test form composed of heterosocially irrelevant items was also examined for comparison purposes. Split-half, alternate-form, and interresponse consistency was determined for measures of skill, anxiety, response latency, and response duration. Differences across test forms on these measures were also investigated. Subsequently, criterion-related validity was examined relative to three criteria of heterosocial skill. Results indicated that two measures, anxiety and response duration, displayed adequate internal consistency, while that of skill and response latency was marginal. Interresponse consistency was moderately low for all three test forms. Comparisons of mean performances across forms revealed significant differences, with heterosocially irrelevant items appearing easier, in general, than the heterosocial items of Forms A and B. Lastly, significant predictions of peer-reported heterosocial behavior were obtained for all three test forms, but two self-report criteria were not found to be related to test behavior. Various implications of these findings are discussed.

Portions of this research were completed while the authors were affiliated with the University of Georgia, Athens, Georgia. The advice and consultation provided by the late William K. Boardman is gratefully acknowledged. Appreciation is also extended to Michael Breakwell, Linda Maertzweller, Steven Ray, and Jan Rockley for their assistance with data collection.

Copies of assessment material and specific instructions to subjects are available upon request.

13.
A modern test that takes advantage of the opportunities provided by advancements in computer technology is the multimedia test. The purpose of this study was to investigate the criterion-related validity of a specific open-ended multimedia test, namely a webcam test, by means of a concurrent validity study. In a webcam test a number of work-related situations are presented and participants have to respond as if these were real work situations. The responses are recorded with a webcam. The aim of the webcam test which we investigated is to measure the effectiveness of social work behaviour. This first field study on a webcam test was conducted in an employment agency in The Netherlands. The sample consisted of 188 consultants who participated in a certification process. For the webcam test, good interrater reliabilities and internal consistencies were found. The results showed the webcam test to be significantly correlated with job placement success. The webcam test scores were also found to be related to job knowledge. Hierarchical regression analysis demonstrated that the webcam test has incremental validity up to and beyond job knowledge in predicting job placement success. The webcam test, therefore, seems a promising type of instrument for personnel selection.

14.
The effect of the proctor's familiarity on four groups of students in Grades 5 and 6 was investigated. The 137 children took a reading examination, half of which was administered by a familiar proctor, the other half by an unfamiliar one. Order of conditions was controlled. Analysis showed that students had significantly lower reading scores with the unfamiliar proctor. Students with midrange IQs had significantly lower reading scores than those in the low or high ranges. A significant relationship between test anxiety and effects of the unfamiliar proctor on test performance was shown. Test anxiety contributed significantly to the relationship between self-esteem and performance.

15.
This paper investigates whether test anxiety leads to differential predictive validity in academic performance. Our results show that the predictive validity of a cognitive ability test, using final exam performance as a criterion, decreased a small amount as Worry (the cognitive aspect of anxiety) increased but was unaffected by Emotionality (the physiological aspect of anxiety). These results suggest that cognitive ability tests may be more useful as predictors of performance for low anxiety test-takers. These findings are discussed in the context of the interference and deficit perspectives of test anxiety.
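The pattern reported here — validity shrinking as Worry rises — corresponds to a negative test × worry interaction on the criterion. A minimal simulation under an assumed interaction (all coefficients invented for illustration) shows how it would surface as different subgroup validity coefficients:

```python
import math
import random

def corr(x, y):
    """Pearson correlation of two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

random.seed(7)
n = 20000
test  = [random.gauss(0, 1) for _ in range(n)]
worry = [random.gauss(0, 1) for _ in range(n)]

# Hypothetical data-generating model: the test's slope on exam
# performance weakens as worry rises (an interference-style effect).
exam = [(0.5 - 0.15 * w) * t + random.gauss(0, 1)
        for t, w in zip(test, worry)]

# Split at the worry median (0 for a standard normal) and compare
# validity coefficients in the two subgroups.
low  = [(t, e) for t, w, e in zip(test, worry, exam) if w < 0]
high = [(t, e) for t, w, e in zip(test, worry, exam) if w >= 0]
r_low  = corr([t for t, _ in low],  [e for _, e in low])
r_high = corr([t for t, _ in high], [e for _, e in high])
print(round(r_low, 2), round(r_high, 2))
```

The low-worry subgroup shows the higher validity, matching the paper's conclusion that the test predicts best for low-anxiety test-takers; in practice the same question would be tested with moderated regression rather than a median split.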

16.
This study explored the roles of religiousness and religious coping methods in predicting cognitive test anxiety. A convenience sample of 121 African-American students (97 females and 24 males) ranging in age from 18 to 39 (Mage = 20.16), attending a historically Black university completed an online questionnaire assessing demographic information, religiousness, religious coping methods, and cognitive test anxiety. Results showed that negative religious coping methods were significant factors in predicting cognitive test anxiety. These relationships may be pertinent for understanding salient factors that influence cognitive test anxiety in African-American college students.

17.
Anxiety sensitivity (AS) is an established cognitive risk factor for anxiety disorders. In children and adolescents, AS is usually measured with the Childhood Anxiety Sensitivity Index (CASI). Factor analytic studies suggest that the CASI is comprised of 3 lower-order factors pertaining to Physical, Psychological and Social Concerns. There has been little research on the validity of these lower-order factors. We examined the concurrent and incremental validity of the CASI and its lower-order factors in a non-clinical sample of 349 children and adolescents. CASI scores predicted symptoms of DSM-IV anxiety disorder subtypes as measured by the Spence Children's Anxiety Scale (SCAS) after accounting for variance due to State-Trait Anxiety Inventory scores. CASI Physical Concerns scores incrementally predicted scores on each of the SCAS scales, whereas scores on the Social and Psychological Concerns subscales incrementally predicted scores on conceptually related symptom scales (e.g. CASI Social Concerns scores predicted Social Phobia symptoms). Overall, this study demonstrates that there is added value in measuring AS factors in children and adolescents.

18.
Background: The suitability of driving simulators for the prediction of driving behaviour in road traffic has been confirmed in respect of individual assessment parameters. However, overarching approaches that take into account the interaction between various influencing factors are needed in order to establish proof of validity. The aim of this study was to explore the validity of our driving simulator in respect of its ability to predict driving behaviour based on participants' observed driving errors and drivers' individual characteristics.

Method: 41 healthy participants were assessed both in a Smart-Realo-Simulator and on the road. By means of linear modelling, the correlation between observed driving errors was investigated. In addition, the influence of self-reported and externally assessed driving behaviour as well as individual parameters (education and training; driving history) was analysed.

Results: By including these factors, 58% of the variance could be explained. For observed driving errors, a relative validity was established. For self-reported and externally assessed driving behaviour, an absolute to relative validity emerged. The amount of time spent in education and training proved to have a significant influence on driving performance in the simulator, but not on the road.

Discussion: In general, our results confirmed the validity of our driving simulator with regard to observed and self-reported driving behaviour. Education and training, as potential indicators of cognitive resources, played a differential role across the study conditions. Since real road driving is considerably automated in experienced drivers, this result suggests that behavioural regulation in the simulator is challenged by additional cognitive demands beyond those of real road driving. The source of these additional cognitive demands remains elusive and may form the subject of future research.

19.
Despite recent interest in the practice of allowing job applicants to retest, surprisingly little is known about how retesting affects 2 of the most critical factors on which staffing procedures are evaluated: subgroup differences and criterion-related validity. We examined these important issues in a sample of internal candidates who completed a job-knowledge test for a within-job promotion. This was a useful context for these questions because we had job-performance data on all candidates (N = 403), regardless of whether they passed or failed the promotion test (i.e., there was no direct range restriction). We found that retest effects varied by subgroup, such that females and younger candidates improved more upon retesting than did males and older candidates. There also was some evidence that Black candidates did not improve as much as did candidates from other racial groups. In addition, among candidates who retested, their retest scores were somewhat better predictors of subsequent job performance than were their initial test scores (rs = .38 vs. .27). The overall results suggest that retesting does not negatively affect criterion-related validity and may even enhance it. Furthermore, retesting may reduce the likelihood of adverse impact against some subgroups (e.g., female candidates) but increase the likelihood of adverse impact against other subgroups (e.g., older candidates).

20.
The present study replicated and extended research concerning a recently suggested conceptual model of the underlying factors of dimension ratings in assessment centers (ACs) proposed by Hoffman, Melchers, Blair, Kleinmann, and Ladd that includes broad dimension factors, exercise factors, and a general performance factor. We evaluated the criterion-related validity of these different components and expanded their nomological network. Results showed that all components (i.e., broad dimensions, exercises, general performance) were significant predictors of training performance. Furthermore, broad dimensions showed incremental validity beyond exercises and general performance. Finally, relationships between the AC factors and individual difference constructs (e.g., Big Five, core self-evaluations, positive and negative affectivity) supported the construct-related validity of broad dimensions and provided further insights into the nature of the different AC components.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号