20 similar documents found (search time: 15 ms)
1.
Construct validity in psychological tests    Total citations: 68 (self-citations: 0; citations by others: 68)
4.
Cross-validation of the Halstead-Reitan tests for brain damage    Total citations: 2 (self-citations: 0; citations by others: 2)
6.
Goldstein, Rennick, Welch, and Shelly (1973) developed a visual searching task (VST) that achieved a hit rate of 94.1% correct classifications when comparing brain-damaged and normal subjects, and a 79.4% hit rate when comparing brain-damaged and psychiatric subjects. Goldstein and Kyc (1978) reported 92.5% correct classifications for the brain-damaged vs. normal comparison and 82.5% correct classifications for the brain-damaged vs. schizophrenic comparison. We computerized the administration of the VST and found 85.7% correct classifications for the brain-damaged vs. normal groups and 71.4% correct classifications for the brain-damaged vs. psychiatric group. These results suggest that the computerized VST (CVST) is also a potentially valid indicator of brain damage.
7.
The detection of malingering or symptom exaggeration has become an essential component of forensic neuropsychological evaluations, particularly in cases involving personal injury claims. Symptom Validity Tests are measures that can be used to detect test performance so poor that it falls below the level of probability, often even among brain-damaged populations. This article outlines legal standards for expert testimony in forensic neuropsychological personal injury evaluations. It describes specific Symptom Validity Tests and Indicators and reviews the literature supporting their sensitivity and validity. In addition, the use of symptom checklists and questionnaires is discussed, as well as the appropriate use of Symptom Validity Tests and Indicators to establish the presence or absence of malingering or symptom exaggeration.
15.
Van Iddekinge, C. H., Roth, P. L., Raymark, P. H., & Odle-Dusseau, H. N. (2012). The Journal of Applied Psychology, 97(3), 499-530.
Integrity tests have become a prominent predictor within the selection literature over the past few decades. However, some researchers have expressed concerns about the criterion-related validity evidence for such tests because of a perceived lack of methodological rigor within this literature, as well as a heavy reliance on unpublished data from test publishers. In response to these concerns, we meta-analyzed 104 studies (representing 134 independent samples), which were authored by a similar proportion of test publishers and non-publishers, whose conduct was consistent with professional standards for test validation, and whose results were relevant to the validity of integrity-specific scales for predicting individual work behavior. Overall mean observed validity estimates and validity estimates corrected for unreliability in the criterion (respectively) were .12 and .15 for job performance, .13 and .16 for training performance, .26 and .32 for counterproductive work behavior, and .07 and .09 for turnover. Although data on restriction of range were sparse, illustrative corrections for indirect range restriction did increase validities slightly (e.g., from .15 to .18 for job performance). Several variables appeared to moderate relations between integrity tests and the criteria. For example, corrected validities for job performance criteria were larger when based on studies authored by integrity test publishers (.27) than when based on studies from non-publishers (.12). In addition, corrected validities for counterproductive work behavior criteria were larger when based on self-reports (.42) than when based on other-reports (.11) or employee records (.15).
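The "corrected for unreliability in the criterion" figures above come from the classical correction for attenuation, which divides the observed validity by the square root of the criterion reliability. A minimal sketch follows; the reliability value used here (r_yy = .64) is a hypothetical illustration chosen so the numbers line up with the abstract's .12 observed / .15 corrected pair, not a figure reported in the meta-analysis.

```python
# Classical correction for attenuation due to unreliability
# in the criterion: r_corrected = r_xy / sqrt(r_yy).
import math

def correct_for_criterion_unreliability(r_xy: float, r_yy: float) -> float:
    """Disattenuate an observed validity r_xy given criterion reliability r_yy."""
    return r_xy / math.sqrt(r_yy)

# Hypothetical criterion reliability of .64: an observed validity
# of .12 disattenuates to .15, matching the reported pair.
corrected = correct_for_criterion_unreliability(0.12, 0.64)
print(round(corrected, 2))  # 0.15
```

Note that this corrects only for criterion unreliability; the separate range-restriction corrections mentioned in the abstract use a different formula.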
19.
The validity of cognitive ability tests is often interpreted solely as a function of the cognitive abilities that these tests are supposed to measure, but other factors may be at play. The effect of test anxiety on the criterion-related validity (CRV) of tests was the topic of a recent study (Reeve, C. L., Heggestad, E. D., & Lievens, F. (2009). Modeling the impact of test anxiety and test familiarity on the criterion-related validity of cognitive ability tests. Intelligence, 37, 34-41). They proposed a model based on classical test theory and concluded, on the basis of data simulations, that test anxiety typically decreases the CRV. In this paper, we view the effects of test anxiety on cognitive ability test scores, and their implications for validity coefficients, from the perspective of confirmatory factor analysis. We argue that CRV will be increased above the effect of the targeted constructs if test anxiety affects both predictor and criterion performance. This prediction is tested empirically by considering the convergent validity of subtests in five experimental studies of the effect of stereotype threat on test performance. Results show that the effects of test anxiety on cognitive test performance may actually enhance the validity of tests.