Similar Documents
20 similar documents found
1.
Faking on personality assessments remains an unsolved issue, raising major concerns about their validity and fairness. Although there is a large body of quantitative research investigating the response process of faking on personality assessments, for both rating scales (RS) and multidimensional forced choice (MFC), only a few studies have qualitatively investigated faking cognitions when responding to MFC in a high-stakes context (e.g., Sass et al., 2020). Yet, it could be argued that only when we have a process model that adequately describes response decisions in high-stakes settings can we begin to extract valid and useful information from assessments. Thus, this qualitative study investigated faking cognitions when responding to an MFC personality assessment in a high-stakes context. Through cognitive interviews with N = 32 participants, we explored and identified factors influencing test-takers' decisions regarding specific items and blocks, as well as factors influencing the willingness to engage in faking in general. Based on these findings, we propose a new response process model of faking forced-choice items, the Activate-Rank-Edit-Submit (A-R-E-S) model. We also make four recommendations for the practice of high-stakes assessment using MFC.

2.
To reduce faking on personality tests, applicants may be warned that a social desirability scale is embedded in the test. Although this procedure has been shown to substantially reduce faking, there are no data addressing how such a warning may influence applicant reactions toward the selection procedure or the relationships among personality constructs. Using an organizational justice framework, this study examines the effect of warning on procedural justice perceptions. Additionally, the extent to which warning changes the relationships among personality variables, socially desirable responding, and organizational justice variables was explored. The results suggest that warning did not negatively affect test-taker reactions. However, the relationships among the justice measures, the personality variables, and socially desirable responding differed across the warned and unwarned groups. The organizational justice model fit best, and there was less multicollinearity among the personality variables, in the warned condition compared to the unwarned condition. Thus, providing a warning appears to have positive consequences when using personality measures.

3.
Development of a faking identification scale for job-application contexts
骆方, 刘红云, 张月. 《心理学报》, 2010, 42(7): 791-801
In job-application contexts, test-takers readily fake on personality tests. The usual remedy is to measure faking directly with a social desirability scale and then use it to correct for and identify faking effects. Because measuring faking with social desirability scales is problematic in many respects, a Faking Identification Scale was instead developed based on the specific nature of faking. Exploratory factor analysis supported the scale's unidimensionality, with 54.65% of the variance explained. A generalizability theory analysis indicated good reliability, with a G coefficient of 0.906 and a Φ coefficient of 0.902. Validity was examined in a real job-application setting: the Faking Identification Scale proved more sensitive to faking and measured it relatively fully.

4.
Different models of lying on personality scales make discrepant predictions on the association between faking and item response time. The current research investigated response time restriction as a method for reducing the influence of faking on personality scale validity. In 3 assessment simulations involving 540 university undergraduates responding to 2 common, psychometrically strong personality inventories, no evidence emerged to indicate that limiting respondents' answering time can attenuate the effects of faking on validity. Results were interpreted as failing to support a simple model of personality test item response dissimulation that predicts that lying takes time. Findings were consistent with models implying that lying involves primitive cognitive processing or that lying may be associated with complex processing that includes both primitive responding and cognitive overrides.

5.
Faking is a common problem in testing with self-report personality tests, especially in high-stakes situations. A possible way to correct for it is statistical control on the basis of social desirability scales. Two such scales were developed and applied in the present paper. It was stressed that statistical models of faking need to be adapted to the properties of the personality scales, since such scales correlate with faking to different extents. Correction for faking was investigated in four empirical studies of self-report personality tests. One study was experimental and asked participants to fake or to be honest; the other studies investigated job or school applicants. The approach advocated in the paper for correcting the effects of faking in self-report personality tests was found to remove a large share of those effects, about 90%. One study found that faking varied with how important the consequences of the test results were expected to be, with higher-stakes situations associated with more faking. The latter finding is incompatible with the claim that social desirability scales measure a general personality trait. It is concluded that faking can be measured and that correction for faking, based on such measures, can be expected to remove about 90% of its effects.
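The statistical control described in this abstract — removing the variance in an observed personality score that is predictable from a social desirability scale — can be sketched on simulated data. The variable names, effect sizes, and data-generating model below are illustrative assumptions, not the paper's actual model:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Illustrative simulation (assumed, not from the study): a true trait,
# an individual faking tendency, and observed scores contaminated by faking.
trait = rng.normal(size=n)                                # true trait level
faking = rng.normal(size=n)                               # faking tendency
sd_scale = faking + rng.normal(scale=0.5, size=n)         # social desirability score
observed = trait + 0.8 * faking + rng.normal(scale=0.3, size=n)

# Statistical control: regress the observed score on the social
# desirability scale and keep only the residual as the corrected score.
slope, intercept = np.polyfit(sd_scale, observed, 1)
corrected = observed - (slope * sd_scale + intercept)

# Under this model, the corrected score tracks the true trait more
# closely than the raw observed score does.
r_raw = np.corrcoef(observed, trait)[0, 1]
r_corrected = np.corrcoef(corrected, trait)[0, 1]
```

In this simulation the residualized score correlates more strongly with the faking-free trait than the raw score, which is the sense in which the correction "removes" faking effects; how much it removes in practice depends on how well the social desirability scale captures faking.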

6.
Intentional response distortion or faking among job applicants completing measures such as personality and integrity tests is a concern in personnel selection. The present study aimed to investigate whether eye-tracking technology can improve our understanding of the response process when faking. In an experimental within-participants design, a Big Five personality test and an integrity measure were administered to 129 university students in 2 conditions: a respond-honestly and a fake-good instruction. Item responses, response latencies, and eye movements were measured. Results demonstrated that all personality dimensions were fakeable. In support of the theoretical position that faking involves a less cognitively demanding process than responding honestly, we found that response times were on average 0.25 s faster and participants had fewer eye fixations in the fake-good condition. However, in the fake-good condition, participants had more fixations on the 2 extreme response options of the 5-point answering scale, and they fixated on these more directly after having read the question. These findings support the idea that faking leads to semantic rather than self-referenced item interpretations. Eye-tracking was demonstrated to be potentially useful in detecting faking behavior, improving detection rates over and beyond response extremity and latency metrics.

7.
Personality measures continue to be criticized for their susceptibility to faking and socially desirable responding. The present study examined the effects of warning applicants against faking on convergent validity of self-observer ratings. Four hundred sixty-four participants completed personality inventories in either a warned or unwarned condition. Results indicated that warning statements resulted in lower mean scores for some personality dimensions but did not improve convergent validity for any of these dimensions. Implications of these findings are discussed in relation to employment selection and future research.

8.
This paper presents the results of three interrelated studies investigating the occurrence of response distortion on personality questionnaires within selection and the success of applicants in faking situations. In Study 1, comparison of the Big Five personality scores obtained from applicants in a military pilot cadet selection procedure with participants responding honestly, faking good, and faking an ideal candidate revealed that applicants responded more desirably than participants responding honestly but less desirably than respondents under fake instructions. The occurrence of faking within the military pilot selection process was replicated in Study 2 using the Eysenck Personality Questionnaire and another comparison group. Finally, in Study 3, comparison of personality profiles obtained in selection and 'fake job' situations with experts' estimates indicated that participants were partially successful in faking the desirable profile.

9.
We conducted two experimental studies with between-subjects and within-subjects designs to investigate the item response process for personality measures administered in high- versus low-stakes situations. Apart from assessing measurement validity of the item response process, we examined predictive validity; that is, whether or not different response models entail differential selection outcomes. We found that ideal point response models fit slightly better than dominance response models across high- versus low-stakes situations in both studies. Additionally, fitting ideal point models to the data led to fewer items displaying differential item functioning compared to fitting dominance models. We also identified several items that functioned as intermediate items in both the faking and honest conditions when ideal point models were fitted, suggesting that the ideal point model is "theoretically" more suitable across these contexts for personality inventories. However, the use of different response models (dominance vs. ideal point) did not have any substantial impact on the validity of personality measures in high-stakes situations, or on the effectiveness of selection decisions such as mean performance or the percentage of fakers selected. These findings are significant in that although prior research supports the importance and use of ideal point models for measuring personality, we find that in the case of personality faking, though ideal point models seem to have slightly better measurement validity, the use of dominance models may be adequate with no loss to predictive validity.

10.
Researchers have recently asserted that popular measures of response distortion (i.e., socially desirable responding scales) lack construct validity (i.e., measure traits rather than test faking) and that applicant faking on personality tests remains a serious concern (Griffith & Peterson, 2008; Holden, 2008). Thus, although researchers and human resource (HR) selection specialists have been attempting to find measures that readily capture individual differences in faking and thereby increase personality test validity, to date such attempts have rarely, if ever, succeeded. The current study, however, finds that the overclaiming technique captures individual differences in faking and subsequently increases personality test score validity by suppressing unwanted error variance in personality test scores. Implications of this research on the overclaiming technique for improving HR selection decisions are illustrated and discussed.
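The suppression effect this abstract describes — an overclaiming index (e.g., endorsement of nonexistent "foil" items) soaking up faking variance so that the personality score predicts a criterion better — can be illustrated with a small simulation. All variable names, coefficients, and the data-generating model here are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Hypothetical data-generating model (not the study's actual data):
trait = rng.normal(size=n)                               # true personality trait
faking = rng.normal(size=n)                              # faking tendency
overclaiming = faking + rng.normal(scale=0.4, size=n)    # foil-endorsement index
test_score = trait + 0.7 * faking + rng.normal(scale=0.3, size=n)
criterion = trait + rng.normal(scale=0.5, size=n)        # e.g., job performance

def r_squared(X, y):
    """R^2 of an OLS fit of y on X (intercept added automatically)."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

# Validity of the personality score alone vs. with the overclaiming
# index added as a suppressor variable.
r2_test_only = r_squared(test_score, criterion)
r2_with_suppressor = r_squared(np.column_stack([test_score, criterion * 0 + overclaiming]), criterion)
```

The suppressor carries no criterion-relevant variance of its own here; it improves prediction only by removing the faking contamination from the test score, which is the classic suppression pattern.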

11.
Effects of the testing situation on item responding: cause for concern
The effects of faking on personality test scores have been studied previously by comparing (a) experimental groups instructed to fake or answer honestly, (b) subgroups created from a single sample of applicants or nonapplicants by using impression management scores, and (c) job applicants and nonapplicants. In this investigation, the latter 2 methods were used to study the effects of faking on the functioning of the items and scales of the Sixteen Personality Factor Questionnaire. A variety of item response theory methods were used to detect differential item/test functioning, interpreted as evidence of faking. The presence of differential item/test functioning across testing situations suggests that faking adversely affects the construct validity of personality scales and that it is problematic to study faking by comparing groups defined by impression management scores.

12.
Evidence suggests that job applicants often "fake" on pre-employment personality tests by attempting to portray an exceedingly desirable impression in order to improve the likelihood of being selected. In the current research we shed light on the personality characteristics of those individuals who seem most likely to engage in faking. We refer to these personality variables as non-targeted traits when they are not directly targeted by the organization's pre-employment personality test. These traits, however, may have an influence on targeted scores used for employment decision making through their effect on faking. Findings suggest that individuals will be more likely to be hired if they are low on non-targeted traits including Honesty–Humility, Integrity, and Morality, and high on Risk Taking. Such individuals also reported higher levels of workplace deviance in their current jobs. Thus, it seems that individuals low on Honesty–Humility, Integrity, and Morality, and individuals high on Risk Taking, may be most likely to engage in personality test faking, be hired, and participate in workplace deviant behaviors if these traits are not directly targeted in selection.

13.
An experiment was conducted to investigate the effects of item order and questionnaire content on faking good or intentional response distortion. It was hypothesized that intentional response distortion would either increase towards the end of a long questionnaire, as learning effects might make it easier to adjust responses to a faking-good schema, or decrease because applicants' will to distort responses is reduced if the questionnaire lasts long enough. Furthermore, it was hypothesized that certain types of questionnaire content are especially vulnerable to response distortion. Eighty-four pre-selected pilot applicants filled out a questionnaire consisting of 516 items, including items from the NEO Five-Factor Inventory (NEO-FFI), the NEO Personality Inventory-Revised (NEO PI-R), and the Business-focused Inventory of Personality (BIP). The positions of the items were varied within the applicant sample to test whether responses are affected by item order, and applicants' response behaviour was additionally compared to that of volunteers. Applicants reported significantly higher mean scores than volunteers, and results provide some evidence of decreased faking tendencies towards the end of the questionnaire. Furthermore, it could be demonstrated that lower variances or standard deviations in combination with appropriate (often higher) mean scores can serve as an indicator of faking tendencies in group comparisons, even if effects are not significant.

14.
This paper calls into question traditional methods of measuring the social desirability of items and their use in scale construction. First, we make explicit that the proper focus for desirability studies of items and traits are the rated desirabilities of the alternative item responses indicating different trait levels. Second, the results from our first study show that the relation between degree of endorsement of an item and its judged desirability level is often nonlinear and varies across items such that no general model of item desirability can be adopted that will accurately represent the relations across all items, traits, and trait levels. In addition, the nature of these relationships can vary depending on whether desirability is considered in a work or general context. Third, results from a second study indicate specifically that people when instructed to self-present in a maximally desirable manner will choose for some attributes a moderate level of endorsement (e.g., "agree") rather than a more extreme response option (e.g., "strongly agree"). Subjects offer several different reasons for viewing the less extreme response options, which yield more moderate trait level scores, as more desirable. These reasons are linked to perceptions of the more extreme response option as being associated with negative behaviors and concerns about how others will view a more extreme response to the item. Both studies indicate that desirable responding to personality items is more complex than previously believed.

15.
This study examines the stability of the response process and the rank-order of respondents responding to 3 personality scales in 4 different response conditions. Applicants to the University College of Teacher Education Styria (N = 243) completed personality scales as part of their college admission process. Half a year later, they retook the same personality scales in 1 of 3 randomly assigned experimental response conditions: honest, faking-good, or reproduce. Longitudinal means and covariance structure analyses showed that applicants' response processes could be partially reproduced after half a year, and respondents seemed to rely on an honest response behavior as a frame of reference. Additionally, applicants' faking behavior and instructed faking (faking-good) caused differences in the latent retest correlations and consistently affected measurement properties. The varying latent retest correlations indicated that faking can distort respondents' rank-order and thus the fairness of subsequent selection decisions, depending on the kind of faking behavior. Instructed faking (faking-good) even affected weak measurement invariance, whereas applicants' faking behavior did not. Consequently, correlations with personality scales—which can be utilized for predictive validity—may be readily interpreted for applicants. Faking behavior also introduced a uniform bias, implying that the classically observed mean raw score differences may not be readily interpreted.

16.
In corporate personnel selection contexts, job applicants easily fake on personality tests. Research on faking to date has covered its definition, sources, and detection, and several psychological models have been proposed to explain the mechanisms underlying faking, such as the interaction theory of faking motivation and faking ability, the theory of planned faking behavior, the integrated model of faking, the general model of faking behavior, and the VIE model of faking, pointing the way for subsequent theoretical research. In addition, faking on emerging web-based personality tests has attracted attention in applied settings; differences in faking behavior and faking intention between web-based and paper-and-pencil personality tests are reviewed here.

17.
Although there has been a steady growth in research and use of self-report measures of personality in the last 20 years, faking in personality testing remains a major concern. Blatant extreme responding (BER), which includes endorsing desirable extreme responses (i.e., 1s and 5s), has recently been identified as a potential faking detection technique. In a large-scale (N = 358,033), high-stakes selection context, we investigate the construct validity of BER, the extent to which BER relates to general mental ability, and the extent to which BER differs across jobs, gender, and ethnic groups. We find that BER reflects applicant faking by showing that BER relates to a more established measure of faking, an unlikely virtue (UV) scale, and that applicants score higher than incumbents on BER. BER is (slightly) positively related to general mental ability, whereas UV is negatively related to it. Applicants for managerial positions score slightly higher on BER than applicants for nonmanagerial positions. In addition, there were no gender or ethnic differences on BER. The implications of these findings for detecting faking in personnel selection are delineated.
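A blatant extreme responding index of the kind described above — the proportion of items answered with the desirable extreme of the response scale — could be computed along these lines. The keying scheme and example data are illustrative assumptions, not the study's scoring rule:

```python
import numpy as np

def ber_score(responses, keys, low=1, high=5):
    """Proportion of items answered with the desirable extreme:
    `high` on positively keyed items, `low` on negatively keyed items.
    responses: (n_respondents, n_items) array on a `low`..`high` scale.
    keys: length n_items array of +1 (positive) / -1 (negative) keying."""
    responses = np.asarray(responses)
    keys = np.asarray(keys)
    # For each item, the desirable extreme depends on its keying direction.
    desirable_extreme = np.where(keys == 1, responses == high, responses == low)
    return desirable_extreme.mean(axis=1)

# Hypothetical example: a mid-scale profile vs. an extreme-heavy profile.
keys = np.array([1, 1, -1, -1])
responses = np.array([
    [4, 3, 2, 3],   # no desirable extremes
    [5, 5, 1, 3],   # three of four items at the desirable extreme
])
scores = ber_score(responses, keys)  # -> [0.0, 0.75]
```

In practice such an index would be standardized within an applicant pool and combined with other indicators (e.g., an unlikely-virtues scale) before flagging anyone, since extreme responding can also be an honest response style.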

18.
Research on faking in personality tests in occupational selection contexts
In occupational selection contexts, test-takers easily fake on personality tests, which constrains the use of personality tests in organizations. Many researchers have worked to solve the faking problem, examining in depth whether applicants fake, the negative effects faking has on personality tests, how applicants fake, and how to cope with faking. Over several decades of development, the field has formed several distinct research paradigms, including instructed-faking (experimentally induced) designs, known-group designs, and scale-based designs. Findings indicate that most applicants do fake, but the negative effects are not severe; faking differs from socially desirable responding and is instead a form of job-desirable responding. The current methods for coping with faking still have problems, and their effectiveness needs improvement. In sum, faking clearly affects personality tests, it is difficult to study, and innovative theories and methods are needed.

19.
Because faking poses a threat to the validity of personality measures, research has focused on ways of detecting faking, including the use of response times. However, the applicability and validity of these approaches are dependent upon the actual cognitive process underlying faking. This study tested three competing cognitive models in order to identify the process underlying faking and to determine whether response time patterns are a viable method of detecting faking. Specifically, we used a within-subjects manipulation of instructions (respond honestly, make a good impression, make a specific impression) to examine whether the distribution of response times across response scale options (e.g., disagree, agree) could be used to identify faking on the NEO PI-R. Our results suggest that individuals reference a schema of an ideal respondent when faking. As a result, response time patterns such as the well-known inverted-U cannot be used to identify faking.

20.
Although personality tests are widely used to select applicants for a variety of jobs, there is concern that such measures are fakable. One procedure used to minimize faking has been to disguise the true intent of personality tests by randomizing items such that items measuring similar constructs are dispersed throughout the test. In this study, we examined whether item placement influences the fakability and psychometric properties of a personality measure. Study participants responded to 1 of 2 formats (random vs. grouped items) of a personality test honestly and also under instructions to fake or to behave like an applicant. Results indicate that the grouped item placement format was more fakable for the Neuroticism and Conscientiousness scales. The test with items randomly placed fit the data better within the honest and applicant conditions. These findings demonstrate that the issue of item placement should be seriously considered before administering personality measures because different item presentations may affect the incidence of faking and the psychometric properties of the measure.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号