Retrieved 20 similar documents (search time: 15 ms)
1.
Lennart Sjöberg 《Scandinavian journal of psychology》2015,56(5):582-591
Faking is a common problem in testing with self-report personality tests, especially in high-stakes situations. A possible way to correct for it is statistical control on the basis of social desirability scales. Two such scales were developed and applied in the present paper. It was stressed that statistical models of faking need to be adapted to the properties of the personality scales, since such scales correlate with faking to different extents. Correction for faking was investigated in four empirical studies of self-report personality tests. One study was experimental, asking participants to fake or to be honest; the other studies investigated job or school applicants. The approach advocated in the paper removed a large share of the effects of faking, about 90%. One study found that faking varied with how consequential the test results were expected to be, with higher-stakes situations associated with more faking. The latter finding is incompatible with the claim that social desirability scales measure a general personality trait. It is concluded that faking can be measured and that correction based on such measures can be expected to remove about 90% of its effects.
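The statistical-control approach described above can be illustrated with a minimal sketch: residualize observed personality scores on a social desirability (SD) scale so that SD-linked variance is removed. The data and the simple linear model below are hypothetical assumptions for illustration, not the paper's actual scales or models.

```python
import numpy as np

def correct_for_faking(personality, social_desirability):
    """Residualize personality scores on a social desirability (SD) scale.

    Fits personality = a + b * SD by least squares and returns the
    residuals plus the grand mean, so corrected scores stay on the
    original metric. A linear-control sketch only; the paper stresses
    that the model must be adapted to each personality scale.
    """
    personality = np.asarray(personality, dtype=float)
    sd = np.asarray(social_desirability, dtype=float)
    b, a = np.polyfit(sd, personality, 1)  # slope, intercept
    residuals = personality - (a + b * sd)
    return residuals + personality.mean()

# Hypothetical data: true scores plus a faking component tied to SD
rng = np.random.default_rng(0)
true_score = rng.normal(50, 10, 200)
sd_scale = rng.normal(0, 1, 200)
observed = true_score + 6.0 * sd_scale  # faking inflates observed scores
corrected = correct_for_faking(observed, sd_scale)

# OLS residuals are orthogonal to the predictor, so the corrected
# scores no longer carry SD-linked variance:
print(abs(np.corrcoef(corrected, sd_scale)[0, 1]) < 1e-8)  # True
```

Because the residuals have mean zero, adding back the grand mean keeps the corrected scores interpretable on the original score metric.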
2.
3.
Stark S Chernyshenko OS Chan KY Lee WC Drasgow F 《The Journal of applied psychology》2001,86(5):943-953
The effects of faking on personality test scores have been studied previously by comparing (a) experimental groups instructed to fake or answer honestly, (b) subgroups created from a single sample of applicants or nonapplicants by using impression management scores, and (c) job applicants and nonapplicants. In this investigation, the latter 2 methods were used to study the effects of faking on the functioning of the items and scales of the Sixteen Personality Factor Questionnaire. A variety of item response theory methods were used to detect differential item/test functioning, interpreted as evidence of faking. The presence of differential item/test functioning across testing situations suggests that faking adversely affects the construct validity of personality scales and that it is problematic to study faking by comparing groups defined by impression management scores.
4.
Mark N. Bing Don Kluemper H. Kristl Davison Shannon Taylor Milorad Novicevic 《Organizational behavior and human decision processes》2011,116(1):148-162
Researchers have recently asserted that popular measures of response distortion (i.e., socially desirable responding scales) lack construct validity (i.e., they measure traits rather than test faking) and that applicant faking on personality tests remains a serious concern ([Griffith and Peterson, 2008] and [Holden, 2008]). Thus, although researchers and human resource (HR) selection specialists have been attempting to find measures that readily capture individual differences in faking and thereby increase personality test validity, such attempts have rarely, if ever, succeeded. The current study, however, finds that the overclaiming technique captures individual differences in faking and subsequently increases personality test score validity by suppressing unwanted error variance in personality test scores. Implications of the overclaiming technique for improving HR selection decisions are illustrated and discussed.
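The suppression mechanism the study describes, removing criterion-irrelevant faking variance to raise validity, can be sketched with simulated data. The overclaiming index, effect sizes, and linear model below are illustrative assumptions, not the study's actual measures.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
true_trait = rng.normal(size=n)                        # what we want to measure
faking = rng.normal(size=n)                            # response distortion
criterion = true_trait + rng.normal(size=n)            # job performance proxy
observed = true_trait + faking                         # contaminated test score
overclaiming = faking + rng.normal(scale=0.5, size=n)  # faking index

# Zero-order validity of the contaminated personality score
r_simple = np.corrcoef(observed, criterion)[0, 1]

# Suppression: residualize the personality score on the faking index,
# removing criterion-irrelevant (faking) variance
b, a = np.polyfit(overclaiming, observed, 1)
suppressed = observed - (a + b * overclaiming)
r_suppressed = np.corrcoef(suppressed, criterion)[0, 1]

print(r_suppressed > r_simple)  # validity rises once faking variance is out
```

The key design point: the faking index correlates with the predictor's error but not with the criterion, which is exactly the classical suppressor configuration.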
5.
《Journal of personality assessment》2013,95(2):264-277
This study examined the extent to which the validity scales of the Minnesota Multiphasic Personality Inventory-2 (MMPI-2) could identify subjects who were faking bad or faking good and differentiate between psychiatric patients and normal subjects who were faking bad. Subjects were 106 undergraduate college students and 50 psychiatric patients. Results indicate that the mean profiles and optimal cutoff scores resembled those previously reported for the original MMPI. Accurate identification of persons who were faking bad or faking good was achieved. It was possible to differentiate between the psychiatric patients and normal persons who were faking bad, but different cutoff scores were needed to differentiate between normals taking the test under standard instructions and those instructed to fake bad. Optimal cutoff scores were suggested.
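The idea of deriving optimal cutoff scores on a validity scale can be sketched as a simple search for the cutoff maximizing classification accuracy between honest and faking groups. The score distributions below are hypothetical, not MMPI-2 norms.

```python
import numpy as np

def optimal_cutoff(honest_scores, faking_scores):
    """Scan candidate cutoffs on a validity scale and return the one
    maximizing balanced classification accuracy (assumes faking elevates
    the scale, as with fake-bad profiles)."""
    honest = np.asarray(honest_scores, float)
    faking = np.asarray(faking_scores, float)
    best_cut, best_acc = None, -1.0
    for c in np.unique(np.concatenate([honest, faking])):
        # classify score >= c as "faking"
        acc = ((faking >= c).mean() + (honest < c).mean()) / 2
        if acc > best_acc:
            best_cut, best_acc = c, acc
    return best_cut, best_acc

# Hypothetical T-score-like distributions
rng = np.random.default_rng(2)
honest = rng.normal(50, 10, 300)   # standard instructions
fakers = rng.normal(80, 10, 300)   # fake-bad instructions
cut, acc = optimal_cutoff(honest, fakers)
print(acc > 0.9)  # a 3-SD mean separation classifies most respondents
```

Balanced accuracy is used so the result does not depend on the relative sizes of the two groups; with unequal base rates, one would weight hit rates accordingly.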
6.
The present research tested a model that integrated the theory of planned behavior (TPB) with a model of faking presented by McFarland and Ryan (2000) to predict faking on a personality test. In Study 1, the TPB explained sizable variance in the intention to fake. In Study 2, the TPB explained both the intention to fake and actual faking behavior. Different faking measures (i.e., difference scores and social desirability scales) tended to yield similar conclusions, but the difference scores were more strongly related to the variables in the model. These results provide support for a model that may increase understanding of applicant faking behavior and suggest reasons for the discrepancies in past research regarding the prevalence and consequences of faking.
7.
8.
Recent years have shown increased awareness of the importance of personality tests in educational, clinical, and occupational settings, and developing faking-resistant personality tests is a practical necessity for more precise measurement. Inspired by Stark (2002) and Stark, Chernyshenko, and Drasgow (2005), we develop a pairwise preference-based personality test that measures multidimensional personality traits using a large-scale statement bank. An experiment compares the faking resistance of the developed test with that of rating scale-based personality tests within an item response theory framework. Results show that latent traits estimated from the rating scale-based test are severely biased, whereas the faking effect on the pairwise preference-based test is small enough to be ignored for practical purposes.
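One common way to build pairwise preference items from a statement bank (the general approach this line of work draws on, not the authors' actual procedure) is to pair statements from different trait dimensions that are matched on rated social desirability, so neither option is obviously the "better" answer. A toy sketch with hypothetical statements and ratings:

```python
# Toy statement bank: (statement, trait dimension, rated social desirability)
statements = [
    ("I finish tasks on time", "conscientiousness", 4.1),
    ("I enjoy meeting new people", "extraversion", 4.0),
    ("I stay calm under pressure", "stability", 3.2),
    ("I like working alone", "introversion", 3.1),
]

def pair_statements(bank, max_desirability_gap=0.2):
    """Greedily pair statements from different dimensions whose social
    desirability ratings are close, so a faker gains little by picking
    the 'nicer' option."""
    pairs, used = [], set()
    for i, (s1, dim1, sd1) in enumerate(bank):
        for j, (s2, dim2, sd2) in enumerate(bank):
            if j <= i or i in used or j in used:
                continue
            if dim1 != dim2 and abs(sd1 - sd2) <= max_desirability_gap:
                pairs.append((s1, s2))
                used.update((i, j))
    return pairs

print(len(pair_statements(statements)))  # 2 matched pairs
```

A production test would of course draw on calibrated desirability ratings and an IRT model for scoring; this sketch shows only the matching constraint that underlies faking resistance.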
9.
10.
BEYOND THE MEAN BIAS: THE EFFECT OF WARNING AGAINST FAKING ON BIODATA ITEM VARIANCES
We studied the effects of faking biodata test items by randomly warning 214 of 429 applicants for a nurse's assistant position against faking. While the warning mitigated the propensity to fake, the specific warning effects depended on item transparency. For transparent items, warning reduced the extremeness of item means and increased item variances. For nontransparent items, warning did not have an effect on item means and reduced item variances. These faking effects were best predicted when transparency was operationalized in terms of item-specific job desirability in addition to the item-general social desirability. We also demonstrated a psychometric principle: The effect of warning on means at the item level is preserved in scales constructed from those items, but the effect on variances at the item level is masked at the scale level. These results raise new questions regarding the attenuating effects of faking on validity, and regarding the benefit of warning applicants against faking.
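The psychometric principle stated above (item-level mean shifts carry through to the scale score, while item-level variance changes are diluted by the covariance terms that dominate the variance of a sum) can be demonstrated with a small simulation. The factor model and effect sizes below are illustrative assumptions, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 20_000, 10                      # respondents, items per scale
factor = rng.normal(size=(n, 1))       # common trait factor

# Assumed item model: warning lowers item means and shrinks unique
# (item-specific) variance, but leaves the common-factor loading intact.
unwarned = 0.5 * factor + np.sqrt(0.75) * rng.normal(size=(n, k)) + 4.0
warned = 0.5 * factor + 0.5 * rng.normal(size=(n, k)) + 3.8

# Means: item-level shifts add up, so they are preserved in the scale sum
item_mean_shift = (unwarned.mean(axis=0) - warned.mean(axis=0)).mean()
scale_mean_shift = unwarned.sum(axis=1).mean() - warned.sum(axis=1).mean()
print(round(scale_mean_shift / item_mean_shift))  # 10, i.e. exactly k

# Variances: Var(sum) = sum of item variances + 2 * sum of covariances.
# The unchanged covariance terms dominate, so a ~50% item-variance drop
# shows up as a much smaller relative drop in the scale variance.
item_drop = 1 - warned.var(axis=0).mean() / unwarned.var(axis=0).mean()
scale_drop = 1 - warned.sum(axis=1).var() / unwarned.sum(axis=1).var()
print(item_drop > 0.4 and scale_drop < 0.25)  # True
```

The masking follows directly from the variance-of-a-sum identity: with k items, there are k variance terms but k(k-1) covariance terms, so scale variance is mostly covariance.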
11.
To examine the impact of Internet-based information about how to simulate being mentally healthy on the Rorschach (Exner, 2003) and the MMPI–2 (Butcher, Dahlstrom, Graham, Tellegen, & Kaemmer, 1989), 87 psychiatric outpatients completed the tests under 4 conditions: uncoached and Internet-coached outpatients under faking healthy instructions (faking patients and Internet-faking patients) and patients and nonpatients under standard instructions (standard patients and standard nonpatients). On the Rorschach, faking patients and Internet-faking patients did not manage to portray healthy test performance and, like standard patients, revealed a significantly greater number of perceptual and cognitive disturbances than standard nonpatients. Faking patients scored in the psychopathological direction on most variables. Internet-faking patients produced constricted protocols with significantly higher F% (57%) and lower use of provoking and aggressive contents than the other groups. On the MMPI–2, faking patients and Internet-faking patients were able to conceal symptoms and, like standard nonpatients, scored in the normal range on the clinical scales. The validity scale L successfully detected the faking patients and the Internet-faking patients, whereas the F scale only distinguished the Internet-faking patients and K only the faking patients. We conclude that Internet-based information could threaten test validity.
12.
Faking on personality tests in personnel selection settings
In personnel selection settings, test takers can easily fake on personality tests, which limits the use of such tests in organizations. Many researchers have worked to solve the faking problem, examining in depth whether applicants fake, the negative effects faking has on personality tests, how applicants fake, and how to counter faking. Over several decades of development, this research area has formed several distinct paradigms, including experimentally induced designs, known-groups designs, and scale-based designs. Results show that most applicants do fake, but the negative impact is not severe; faking differs from socially desirable responding and is better characterized as job-desirable responding. The current countermeasures against faking still have problems, and their effectiveness needs improvement. In short, faking on personality tests is a clear and consequential phenomenon that is difficult to study, and innovative theories and methods are still awaited.
13.
Issues of reliability, item latent structure, and faking on the Holden Psychological Screening Inventory (HPSI), the Brief Symptom Inventory (BSI), and the Balanced Inventory of Desirable Responding (BIDR) were examined with a sample of 300 university undergraduates. Reliability analyses indicated that scales from all inventories had acceptable internal consistency. Confirmatory item principal component analyses supported the structures and scoring keys of the HPSI and the BIDR, but not the BSI. Although all inventories were susceptible to faking, validity indices of the HPSI and the BIDR could correctly classify over two-thirds of test respondents as either responding honestly or as faking.
14.
Georg Krammer Markus Sommer Martin E. Arendasy 《Journal of personality assessment》2017,99(5):510-523
This study examines the stability of the response process and the rank-order of respondents responding to 3 personality scales in 4 different response conditions. Applicants to the University College of Teacher Education Styria (N = 243) completed personality scales as part of their college admission process. Half a year later, they retook the same personality scales in 1 of 3 randomly assigned experimental response conditions: honest, faking-good, or reproduce. Longitudinal means and covariance structure analyses showed that applicants' response processes could be partially reproduced after half a year, and respondents seemed to rely on honest response behavior as a frame of reference. Additionally, applicants' faking behavior and instructed faking (faking-good) caused differences in the latent retest correlations and consistently affected measurement properties. The varying latent retest correlations indicated that faking can distort respondents' rank-order and thus the fairness of subsequent selection decisions, depending on the kind of faking behavior. Instructed faking (faking-good) even affected weak measurement invariance, whereas applicants' faking behavior did not. Consequently, correlations with personality scales (which can be utilized for predictive validity) may be readily interpreted for applicants. Faking behavior also introduced a uniform bias, implying that the classically observed mean raw score differences may not be readily interpreted.
15.
16.
Quasi-ipsative (QI) forced-choice response formats are often recommended over single-stimulus (SS) as a method to reduce applicant faking. Across three studies we developed and tested a QI version of the RIASEC occupational interests scale. The first study established acceptable reliability and validity of the QI version. The second and third studies tested the efficacy of the QI version for faking prevention in simulated job applicant scenarios. The results revealed that although the QI and SS formats were similarly fakable for the primary targeted interest, faking was limited for the secondary target on the QI version. Future research should identify the specific contexts in which QI prevents faking on various individual differences measures to allow for accurate recommendations in applied settings.
17.
Different models of lying on personality scales make discrepant predictions on the association between faking and item response time. The current research investigated response time restriction as a method for reducing the influence of faking on personality scale validity. In 3 assessment simulations involving 540 university undergraduates responding to 2 common, psychometrically strong personality inventories, no evidence emerged to indicate that limiting respondents' answering time can attenuate the effects of faking on validity. Results were interpreted as failing to support a simple model of personality test item response dissimulation that predicts that lying takes time. Findings were consistent with models implying that lying involves primitive cognitive processing or that lying may be associated with complex processing that includes both primitive responding and cognitive overrides.
18.
Mitchell H. Peterson Joshua A. Isaacson Matthew S. O'Connell Phillip M. Mangos 《Human Performance》2013,26(3):270-290
Recent studies have pointed to within-subjects designs as an especially effective tool for gauging the occurrence of faking behavior in applicant samples. The current study utilized a within-subjects design and data from a sample of job applicants to compare estimates of faking via within-subjects score change to estimates based on a social desirability scale. In addition, we examined the impact of faking on the relationship between Conscientiousness and counterproductive work behaviors (CWBs), as well as the direct linkage between faking and CWBs. Our results suggest that social desirability scales are poor indicators of within-subjects score change, and that applicant faking both relates to CWBs and reduces the criterion-related validity of Conscientiousness as a predictor of CWBs.
19.
Faking on personality assessments remains an unsolved issue, raising major concerns regarding their validity and fairness. Although there is a large body of quantitative research investigating the response process of faking on personality assessments, for both rating scales (RS) and multidimensional forced choice (MFC), only a few studies have yet qualitatively investigated faking cognitions when responding to MFC in a high-stakes context (e.g., Sass et al., 2020). Yet it could be argued that only when we have a process model that adequately describes response decisions in high-stakes settings can we begin to extract valid and useful information from assessments. Thus, this qualitative study investigated faking cognitions when responding to an MFC personality assessment in a high-stakes context. Through cognitive interviews with N = 32 participants, we explored and identified factors influencing test-takers' decisions regarding specific items and blocks, as well as factors influencing the willingness to engage in faking in general. Based on these findings, we propose a new response process model of faking forced-choice items, the Activate-Rank-Edit-Submit (A-R-E-S) model. We also make four recommendations for the practice of high-stakes assessment using MFC.
20.
Although personality tests are widely used to select applicants for a variety of jobs, there is concern that such measures are fakable. One procedure used to minimize faking has been to disguise the true intent of personality tests by randomizing items such that items measuring similar constructs are dispersed throughout the test. In this study, we examined whether item placement influences the fakability and psychometric properties of a personality measure. Study participants responded to 1 of 2 formats (random vs. grouped items) of a personality test honestly and also under instructions to fake or to behave like an applicant. Results indicate that the grouped item placement format was more fakable for the Neuroticism and Conscientiousness scales. The test with randomly placed items fit the data better within the honest and applicant conditions. These findings demonstrate that the issue of item placement should be seriously considered before administering personality measures because different item presentations may affect the incidence of faking and the psychometric properties of the measure.