Similar Articles
20 similar articles found.
1.
Faking is a common problem in testing with self-report personality tests, especially in high-stakes situations. A possible way to correct for it is statistical control on the basis of social desirability scales. Two such scales were developed and applied in the present paper. It was stressed that statistical models of faking need to be adapted to the properties of the personality scales, since such scales correlate with faking to different extents. Correction for faking was investigated in four empirical studies of self-report personality tests. One study was experimental and asked participants to fake or to answer honestly; the other studies investigated job or school applicants. The approach advocated in the paper for correcting the effects of faking on self-report personality tests was found to remove a large share of those effects, about 90%. One study also found that the amount of faking varied with how important the consequences of the test results were expected to be: the higher the stakes, the more faking. The latter finding is incompatible with the claim that social desirability scales measure a general personality trait. It is concluded that faking can be measured and that correction based on such measures can be expected to remove about 90% of its effects.
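A minimal sketch of the kind of statistical control described above: the observed personality score is regressed on the social desirability score and the residual is kept as the corrected score. All data, variable names, and effect sizes below are illustrative assumptions, not values from the studies.

```python
# Sketch: correcting a personality score for faking by partialling out a
# social desirability (SD) score.  Simulated, illustrative data only; the
# original studies' exact correction models may differ.
import numpy as np

rng = np.random.default_rng(0)
n = 500
trait = rng.normal(0, 1, n)                               # true trait standing
faking = rng.normal(0, 1, n)                              # individual faking tendency
sd_score = faking + rng.normal(0, 0.5, n)                 # SD scale mostly reflects faking
observed = trait + 0.8 * faking + rng.normal(0, 0.5, n)   # contaminated personality score

# Regress the observed score on the SD score and keep the residual.
slope, intercept = np.polyfit(sd_score, observed, 1)
corrected = observed - (intercept + slope * sd_score)

print("r(observed, trait)  =", round(np.corrcoef(observed, trait)[0, 1], 3))
print("r(corrected, trait) =", round(np.corrcoef(corrected, trait)[0, 1], 3))
```

In this simulated example the corrected score tracks the underlying trait more closely than the raw score does, which is the intended effect of the correction.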

2.
In corporate personnel selection, job applicants can easily fake on personality tests. Research on faking to date has covered its definition, sources, and detection, and several psychological models have been proposed to explain the mechanisms underlying faking, such as the interaction theory of faking motivation and faking ability, the theory of planned behavior applied to faking, the integrative model of faking, the general model of faking behavior, and the VIE (valence-instrumentality-expectancy) model of faking, pointing the way for subsequent theoretical research. In addition, faking on the emerging web-based personality tests has attracted attention in applied settings; differences in faking behavior and faking intention between web-based and paper-and-pencil personality tests are reviewed.

3.
Effects of the testing situation on item responding: cause for concern. Total citations: 6 (self-citations: 0, citations by others: 6)
The effects of faking on personality test scores have been studied previously by comparing (a) experimental groups instructed to fake or answer honestly, (b) subgroups created from a single sample of applicants or nonapplicants by using impression management scores, and (c) job applicants and nonapplicants. In this investigation, the latter 2 methods were used to study the effects of faking on the functioning of the items and scales of the Sixteen Personality Factor Questionnaire. A variety of item response theory methods were used to detect differential item/test functioning, interpreted as evidence of faking. The presence of differential item/test functioning across testing situations suggests that faking adversely affects the construct validity of personality scales and that it is problematic to study faking by comparing groups defined by impression management scores.
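The abstract refers to IRT-based differential item/test functioning (DIF) analyses. As a compact illustration of the general idea only, the sketch below uses the Mantel-Haenszel common odds ratio, a widely used DIF screen, on simulated applicant and non-applicant responses; it is neither the method nor the data of the original study.

```python
# Sketch: screening one item for differential item functioning (DIF) across
# testing situations with the Mantel-Haenszel common odds ratio, computed
# within total-score strata.  Simulated data; the paper used IRT methods.
import numpy as np

rng = np.random.default_rng(1)
n, n_items = 400, 10
group = np.repeat([0, 1], n // 2)                 # 0 = non-applicants, 1 = applicants
theta = rng.normal(0, 1, n)
difficulty = rng.normal(0, 1, n_items)
p = 1 / (1 + np.exp(-(theta[:, None] - difficulty)))
# Item 0 is easier for applicants at the same theta -> DIF on item 0.
p[:, 0] = 1 / (1 + np.exp(-(theta + np.where(group == 1, 1.0, 0.0))))
resp = (rng.random((n, n_items)) < p).astype(int)

def mh_odds_ratio(item, resp, group):
    rest = resp.sum(axis=1) - resp[:, item]       # matching score excluding the item
    num = den = 0.0
    for s in np.unique(rest):
        m = rest == s
        a = np.sum((group == 0) & m & (resp[:, item] == 1))
        b = np.sum((group == 0) & m & (resp[:, item] == 0))
        c = np.sum((group == 1) & m & (resp[:, item] == 1))
        d = np.sum((group == 1) & m & (resp[:, item] == 0))
        nk = a + b + c + d
        if nk:
            num += a * d / nk
            den += b * c / nk
    return num / den if den else np.nan

print("MH odds ratio, item 0 (DIF simulated):", round(mh_odds_ratio(0, resp, group), 2))
print("MH odds ratio, item 5 (no DIF):       ", round(mh_odds_ratio(5, resp, group), 2))
```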

4.
Researchers have recently asserted that popular measures of response distortion (i.e., socially desirable responding scales) lack construct validity (i.e., they measure traits rather than test faking) and that applicant faking on personality tests remains a serious concern (Griffith & Peterson, 2008; Holden, 2008). Thus, although researchers and human resource (HR) selection specialists have been attempting to find measures that capture individual differences in faking and thereby increase personality test validity, to date such attempts have rarely, if ever, succeeded. The current study, however, finds that the overclaiming technique captures individual differences in faking and subsequently increases personality test score validity by suppressing unwanted error variance in personality test scores. Implications of this research on the overclaiming technique for improving HR selection decisions are illustrated and discussed.
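A rough sketch of the suppression effect the abstract describes: a faking index (here a hypothetical overclaiming score) that is essentially unrelated to the criterion but captures the faking contamination in the personality score can raise that score's validity when it is partialled out. Simulated data; the names and effect sizes are assumptions, not the study's values.

```python
# Sketch: a faking index acting as a suppressor that raises the validity of a
# faked personality score.  Illustrative simulation, not the study's data.
import numpy as np

rng = np.random.default_rng(2)
n = 1000
trait = rng.normal(0, 1, n)
faking = rng.normal(0, 1, n)
criterion = trait + rng.normal(0, 1, n)             # e.g., job performance
personality = trait + faking                        # score contaminated by faking
overclaiming = faking + rng.normal(0, 0.5, n)       # captures faking, not the trait

r = lambda a, b: round(np.corrcoef(a, b)[0, 1], 3)
print("validity of raw personality score :", r(personality, criterion))

# Partial the overclaiming score out of the personality score (suppression).
slope, intercept = np.polyfit(overclaiming, personality, 1)
residual = personality - (intercept + slope * overclaiming)
print("validity after suppressing faking :", r(residual, criterion))
print("overclaiming-criterion correlation:", r(overclaiming, criterion))
```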

5.
This study examined the extent to which the validity scales of the Minnesota Multiphasic Personality Inventory-2 (MMPI-2) could identify subjects who were faking bad or faking good and differentiate between psychiatric patients and normal subjects who were faking bad. Subjects were 106 undergraduate college students and 50 psychiatric patients. Results indicate that the mean profiles and optimal cutoff scores resembled those previously reported for the original MMPI. Accurate identification of persons who were faking bad or faking good was achieved. It was possible to differentiate between the psychiatric patients and normal persons who were faking bad, but different cutoff scores were needed to differentiate between normals taking the test under standard instructions and those instructed to fake bad. Optimal cutoff scores were suggested.

6.
The present research tested a model that integrated the theory of planned behavior (TPB) with a model of faking presented by McFarland and Ryan (2000) to predict faking on a personality test. In Study 1, the TPB explained sizable variance in the intention to fake. In Study 2, the TPB explained both the intention to fake and actual faking behavior. Different faking measures (i.e., difference scores and social desirability scales) tended to yield similar conclusions, but the difference scores were more strongly related to the variables in the model. These results provide support for a model that may increase understanding of applicant faking behavior and suggest reasons for the discrepancies in past research regarding the prevalence and consequences of faking.

8.
Recent years have shown increased awareness of the importance of personality tests in educational, clinical, and occupational settings, and developing faking-resistant personality tests is of great practical value for achieving more precise measurement. Inspired by Stark (2002) and Stark, Chernyshenko, and Drasgow (2005), we developed a pairwise preference-based personality test that measures multidimensional personality traits using a large-scale statement bank. An experiment compared the faking resistance of the developed test with that of rating scale-based personality tests within an item response theory framework. Results show that latent trait estimates from the rating scale-based test are severely biased by faking, whereas the faking effect on the pairwise preference-based test is small enough to be ignored in practice.

9.
Faking is widespread at every stage of personnel selection and affects the final selection outcome. Researchers define faking quite differently, mainly because they understand its structure, its sources of variance, and its levels differently. Different definitions give rise to different measurement methods; the four commonly used classes are the baseline difference-score method, the cognitive-pattern method, the embedded-scale method, and the behavioral-pattern method. Analyzing these four classes in terms of measurement indicators, number of administrations, and content shows that they differ in their effectiveness at detecting faking and in their feasibility for measuring faking in selection. Future research should refine existing measurement methods, develop instruments for measuring faking motivation, strengthen research on process-based control of faking, and explore individual differences in faking in greater depth.

10.
We studied the effects of faking on biodata test items by randomly warning 214 of 429 applicants for a nurse's assistant position against faking. While the warning mitigated the propensity to fake, the specific warning effects depended on item transparency. For transparent items, warning reduced the extremeness of item means and increased item variances. For nontransparent items, warning did not have an effect on item means and reduced item variances. These faking effects were best predicted when transparency was operationalized in terms of item-specific job desirability in addition to item-general social desirability. We also demonstrated a psychometric principle: the effect of warning on means at the item level is preserved in scales constructed from those items, but the effect on variances at the item level is masked at the scale level. These results raise new questions regarding the attenuating effects of faking on validity and regarding the benefit of warning applicants against faking.
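The psychometric principle stated above can be demonstrated numerically: a scale mean is the sum of its item means, so shifts in item means carry through to the scale, whereas the scale variance is dominated by item covariances, so a change in item-specific variances is largely hidden. The simulation below is an illustration under assumed parallel items, not a reanalysis of the biodata study.

```python
# Sketch: item-mean shifts are preserved at the scale level, item-variance
# changes are masked there.  Illustrative parallel-item simulation only.
import numpy as np

rng = np.random.default_rng(3)
n, k = 2000, 10
common = rng.normal(0, 1, n)                  # shared (covariance-producing) factor

def make_items(mean_shift, unique_sd):
    # k parallel items: common factor plus an item-specific (unique) part.
    return common[:, None] + rng.normal(mean_shift, unique_sd, (n, k))

unwarned = make_items(mean_shift=0.5, unique_sd=0.6)   # more extreme item means
warned   = make_items(mean_shift=0.0, unique_sd=0.9)   # larger item variances

for name, items in [("unwarned", unwarned), ("warned  ", warned)]:
    total = items.sum(axis=1)
    print(f"{name}: mean item var = {items.var(axis=0).mean():.2f}, "
          f"scale mean = {total.mean():.1f}, scale var = {total.var():.1f}")
```

The item variances differ by roughly a third between conditions, yet the scale variances differ only slightly, while the 0.5-point item-mean shift shows up fully (about 5 points) in the scale mean.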

11.
To examine the impact of Internet-based information about how to simulate being mentally healthy on the Rorschach (Exner, 2003) and the MMPI–2 (Butcher, Dahlstrom, Graham, Tellegen, & Kaemmer, 1989), 87 psychiatric outpatients completed the tests under 4 conditions: uncoached and Internet-coached outpatients under faking healthy instructions (faking patients and Internet-faking patients) and patients and nonpatients under standard instructions (standard patients and standard nonpatients). On the Rorschach, faking patients and Internet-faking patients did not manage to portray healthy test performance and, like standard patients, revealed a significantly greater number of perceptual and cognitive disturbances than standard nonpatients. Faking patients scored in the psychopathological direction on most variables. Internet-faking patients produced constricted protocols with significantly higher F% (57%) and lower use of provoking and aggressive contents than the other groups. On the MMPI–2, faking patients and Internet-faking patients were able to conceal symptoms and, like standard nonpatients, scored in the normal range on the clinical scales. The validity scale L successfully detected the faking patients and the Internet-faking patients, whereas the F scale only distinguished the Internet-faking patients and K only the faking patients. We conclude that Internet-based information could threaten test validity.

12.
Faking on personality tests in occupational selection contexts. Total citations: 2 (self-citations: 0, citations by others: 2)
In occupational selection contexts, test takers can easily fake on personality tests, which limits the use of personality tests in organizations. Many researchers have worked on the faking problem, examining in depth whether applicants fake, the negative impact of faking on personality tests, how applicants fake, and how to counter faking. Over several decades the field has developed several specific research paradigms, including instructed-faking (experimental) designs, known-groups designs, and scale-based designs. The results show that most applicants do fake, but the negative impact is not severe; faking differs from socially desirable responding and is better characterized as job-desirable responding. The current methods for countering faking still have problems, and their effectiveness needs to be improved. In sum, faking on personality tests has a clear effect, is difficult to study, and awaits innovative theories and methods.

13.
Issues of reliability, item latent structure, and faking on the Holden Psychological Screening Inventory (HPSI), the Brief Symptom Inventory (BSI), and the Balanced Inventory of Desirable Responding (BIDR) were examined with a sample of 300 university undergraduates. Reliability analyses indicated that scales from all inventories had acceptable internal consistency. Confirmatory item principal component analyses supported the structures and scoring keys of the HPSI and the BIDR, but not the BSI. Although all inventories were susceptible to faking, validity indices of the HPSI and the BIDR could correctly classify over two-thirds of test respondents as either responding honestly or as faking.

14.
This study examines the stability of the response process and the rank-order of respondents responding to 3 personality scales in 4 different response conditions. Applicants to the University College of Teacher Education Styria (N = 243) completed personality scales as part of their college admission process. Half a year later, they retook the same personality scales in 1 of 3 randomly assigned experimental response conditions: honest, faking-good, or reproduce. Longitudinal means and covariance structure analyses showed that applicants' response processes could be partially reproduced after half a year, and respondents seemed to rely on an honest response behavior as a frame of reference. Additionally, applicants' faking behavior and instructed faking (faking-good) caused differences in the latent retest correlations and consistently affected measurement properties. The varying latent retest correlations indicated that faking can distort respondents' rank-order and thus the fairness of subsequent selection decisions, depending on the kind of faking behavior. Instructed faking (faking-good) even affected weak measurement invariance, whereas applicants' faking behavior did not. Consequently, correlations with personality scales (which can be utilized for predictive validity) may be readily interpreted for applicants. Faking behavior also introduced a uniform bias, implying that the classically observed mean raw score differences may not be readily interpreted.

15.
Development of a Faking Identification Scale for job application contexts. Total citations: 2 (self-citations: 0, citations by others: 2)
骆方, 刘红云, 张月. 《心理学报》(Acta Psychologica Sinica), 2010, 42(7): 791-801
In job application settings, test takers readily fake on personality tests. The usual way to deal with faking is to measure it directly with a social desirability scale and then use that measure to correct for and identify faking effects. Because measuring faking with social desirability scales has many problems, a Faking Identification Scale was developed based on the specific nature of faking. Exploratory factor analysis supported the unidimensionality of the scale, with 54.650% of the variance explained. A generalizability theory analysis indicated good reliability, with a G coefficient of 0.906 and a φ (dependability) coefficient of 0.902. Validity was examined in a real job application setting; the Faking Identification Scale was more sensitive to faking and measured it more adequately.
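As an illustration of the generalizability analysis reported above, the sketch below estimates a G coefficient (relative decisions) and a φ coefficient (absolute decisions) for a persons × items design from standard variance-component formulas. The simulated data, sample sizes, and variance components are assumptions and will not reproduce the reported values of 0.906 and 0.902.

```python
# Sketch: G and phi coefficients for a persons-x-items design, estimated from
# two-way ANOVA mean squares.  Simulated data; not the scale's responses.
import numpy as np

rng = np.random.default_rng(4)
n_p, n_i = 200, 12
person = rng.normal(0, 1.0, (n_p, 1))            # person (universe score) effect
item = rng.normal(0, 0.3, (1, n_i))              # item effect
scores = person + item + rng.normal(0, 0.7, (n_p, n_i))

grand = scores.mean()
ms_p = n_i * np.sum((scores.mean(axis=1) - grand) ** 2) / (n_p - 1)
ms_i = n_p * np.sum((scores.mean(axis=0) - grand) ** 2) / (n_i - 1)
ss_res = np.sum((scores - scores.mean(axis=1, keepdims=True)
                 - scores.mean(axis=0, keepdims=True) + grand) ** 2)
ms_res = ss_res / ((n_p - 1) * (n_i - 1))

var_p = (ms_p - ms_res) / n_i                    # person variance component
var_i = (ms_i - ms_res) / n_p                    # item variance component
var_res = ms_res                                 # residual (p x i, error)

g_coef = var_p / (var_p + var_res / n_i)                 # relative decisions
phi_coef = var_p / (var_p + (var_i + var_res) / n_i)     # absolute decisions
print(f"G = {g_coef:.3f}, phi = {phi_coef:.3f}")
```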

16.
Quasi-ipsative (QI) forced-choice response formats are often recommended over single-stimulus (SS) formats as a method to reduce applicant faking. Across three studies we developed and tested a QI version of the RIASEC occupational interests scale. The first study established acceptable reliability and validity of the QI version. The second and third studies tested the efficacy of the QI version for faking prevention in simulated job applicant scenarios. The results revealed that although the QI and SS formats were similarly fakable for the primary targeted interest, faking was limited for the secondary target on the QI version. Future research should identify the specific contexts in which QI prevents faking on various individual differences measures to allow for accurate recommendations in applied settings.

17.
Different models of lying on personality scales make discrepant predictions on the association between faking and item response time. The current research investigated response time restriction as a method for reducing the influence of faking on personality scale validity. In 3 assessment simulations involving 540 university undergraduates responding to 2 common, psychometrically strong personality inventories, no evidence emerged to indicate that limiting respondents' answering time can attenuate the effects of faking on validity. Results were interpreted as failing to support a simple model of personality test item response dissimulation that predicts that lying takes time. Findings were consistent with models implying that lying involves primitive cognitive processing or that lying may be associated with complex processing that includes both primitive responding and cognitive overrides.

18.
Recent studies have pointed to within-subjects designs as an especially effective tool for gauging the occurrence of faking behavior in applicant samples. The current study utilized a within-subjects design and data from a sample of job applicants to compare estimates of faking via within-subjects score change to estimates based on a social desirability scale. In addition, we examined the impact of faking on the relationship between Conscientiousness and counterproductive work behaviors (CWBs), as well as the direct linkage between faking and CWBs. Our results suggest that social desirability scales are poor indicators of within-subjects score change, and that applicant faking is both related to CWBs and detrimental to the criterion-related validity of Conscientiousness as a predictor of CWBs.
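A small simulation of the comparison the abstract makes: a within-subjects faking index (applicant score minus honest retest score) versus a social desirability scale, and their relations to a CWB criterion. The variable names, effect sizes, and data are illustrative assumptions, not the study's results.

```python
# Sketch: within-subjects score change as a faking index, contrasted with an
# SD scale, and its link to CWBs.  Illustrative simulated data only.
import numpy as np

rng = np.random.default_rng(5)
n = 600
trait = rng.normal(0, 1, n)                     # true Conscientiousness
faking = np.abs(rng.normal(0, 1, n))            # score inflation when applying
honest = trait + rng.normal(0, 0.4, n)          # honest (research) administration
applicant = trait + faking + rng.normal(0, 0.4, n)
sd_scale = 0.3 * faking + rng.normal(0, 1, n)   # weak marker of actual faking
cwb = -0.5 * trait + 0.3 * faking + rng.normal(0, 1, n)

change = applicant - honest                     # within-subjects faking index
r = lambda a, b: round(np.corrcoef(a, b)[0, 1], 2)
print("r(change, SD scale)        =", r(change, sd_scale))   # small: SD scale misses change
print("r(change, CWB)             =", r(change, cwb))        # faking index relates to CWB
print("validity: r(honest, CWB)   =", r(honest, cwb))
print("validity: r(applicant, CWB)=", r(applicant, cwb))     # attenuated by faking
```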

19.
Faking on personality assessments remains an unsolved issue, raising major concerns regarding their validity and fairness. Although there is a large body of quantitative research investigating the response process of faking on personality assessments, for both rating scales (RS) and multidimensional forced choice (MFC), only a few studies have qualitatively investigated faking cognitions when responding to MFC in a high-stakes context (e.g., Sass et al., 2020). Yet it could be argued that only when we have a process model that adequately describes response decisions in high-stakes settings can we begin to extract valid and useful information from assessments. Thus, this qualitative study investigated faking cognitions when responding to an MFC personality assessment in a high-stakes context. Through cognitive interviews with N = 32 participants, we explored and identified factors influencing test-takers' decisions regarding specific items and blocks, as well as factors influencing the willingness to engage in faking in general. Based on these findings, we propose a new response process model of faking forced-choice items, the Activate-Rank-Edit-Submit (A-R-E-S) model. We also make four recommendations for the practice of high-stakes assessment using MFC.

20.
Although personality tests are widely used to select applicants for a variety of jobs, there is concern that such measures are fakable. One procedure used to minimize faking has been to disguise the true intent of personality tests by randomizing items such that items measuring similar constructs are dispersed throughout the test. In this study, we examined if item placement does influence the fakability and psychometric properties of a personality measure. Study participants responded to 1 of 2 formats (random vs. grouped items) of a personality test honestly and also under instructions to fake or to behave like an applicant. Results indicate that the grouped item placement format was more fakable for the Neuroticism and Conscientiousness scales. The test with items randomly placed fit the data better within the honest and applicant conditions. These findings demonstrate that the issue of item placement should be seriously considered before administering personality measures because different item presentations may affect the incidence of faking and the psychometric properties of the measure.
