Similar Literature
20 similar documents found.
1.
The effects of correcting a personality measure for faking were evaluated within an organizational context. Two possible repercussions of score correction were studied using the 16PF personality inventory: the effect on criterion-related validity and the effect on individual hiring decisions (i.e., which applicants would or would not be hired). Results indicated that correction for faking had little effect on criterion-related validity. However, depending on the selection ratio, correction of scores would have resulted in different hiring decisions than those that would have been made on the basis of uncorrected scores. Implications for organizations using personality measures for selection and recommendations regarding the use of faking corrections are discussed.
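Corrections of this kind are typically score adjustments keyed to a faking or impression-management scale. As a minimal sketch (not the 16PF's actual correction procedure; all names and data are illustrative), the snippet below residualizes a trait score on a faking-scale score and shows how a fixed selection ratio can yield different hiring decisions before and after correction:

```python
import numpy as np

def correct_for_faking(trait_scores, faking_scores):
    """Residualize trait scores on a faking scale (illustrative only;
    not the 16PF's proprietary correction procedure)."""
    trait = np.asarray(trait_scores, dtype=float)
    faking = np.asarray(faking_scores, dtype=float)
    # Regression slope of trait scores on faking scores.
    slope = np.cov(trait, faking)[0, 1] / np.var(faking, ddof=1)
    # Remove the faking-related component; the original mean is preserved.
    return trait - slope * (faking - faking.mean())

# Whether the correction changes hiring depends on the selection ratio:
# here a top-20-of-200 cut (10% selection ratio) is compared before/after.
rng = np.random.default_rng(0)
faking = rng.normal(size=200)
trait = 0.5 * faking + rng.normal(size=200)   # trait scores inflated by faking
corrected = correct_for_faking(trait, faking)
top_raw = set(np.argsort(trait)[-20:])
top_corrected = set(np.argsort(corrected)[-20:])
print(f"{len(top_raw ^ top_corrected)} applicants' hiring decisions change")
```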

2.
There are discrepant findings in the literature regarding the effects of applicant faking on the validity of noncognitive measures. One explanation for these mixed results may be the failure of some studies to consider individual differences in faking. This study demonstrates that there is considerable variance across individuals in the extent of faking 3 types of noncognitive measures (i.e., personality test, biodata inventory, and integrity test). Participants completed measures honestly and with instructions to fake. Results indicated some measures were more difficult to fake than others. The authors found that integrity, conscientiousness, and neuroticism were related to faking. In addition, individuals faked fairly consistently across the measures. Implications of these results and a model of faking that includes factors that may influence faking behavior are provided.

3.
Although there has been a steady growth in research and use of self‐report measures of personality in the last 20 years, faking in personality testing remains a major concern. Blatant extreme responding (BER), which includes endorsing desirable extreme responses (i.e., 1s and 5s), has recently been identified as a potential faking detection technique. In a large‐scale (N = 358,033), high‐stakes selection context, we investigate the construct validity of BER, the extent to which BER relates to general mental ability, and the extent to which BER differs across jobs, gender, and ethnic groups. We find that BER reflects applicant faking by showing that BER relates to a more established measure of faking, an unlikely virtue (UV) scale, and that applicants score higher than incumbents on BER. BER is (slightly) positively related to general mental ability, whereas UV is negatively related to it. Applicants for managerial positions score slightly higher on BER than applicants for nonmanagerial positions. In addition, there were no gender or racial differences on BER. The implications of these findings for detecting faking in personnel selection are delineated.
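BER scoring is essentially a count: the proportion of items answered with the socially desirable extreme of the scale. A minimal sketch, assuming a 5-point Likert scale and a known desirable pole per item (the published BER scoring key is not reproduced here; all names are illustrative):

```python
import numpy as np

def ber_index(responses, desirable_pole):
    """Proportion of items answered with the desirable extreme (1 or 5).

    responses:      (n_respondents, n_items) array of 1-5 Likert answers.
    desirable_pole: length-n_items array of 1s and 5s marking which extreme
                    is socially desirable for each item (an assumed key).
    """
    responses = np.asarray(responses)
    pole = np.asarray(desirable_pole)
    # A response counts toward BER only if it hits the desirable extreme.
    return (responses == pole).mean(axis=1)

# Respondent 0 endorses every desirable extreme; respondent 1 endorses none.
answers = np.array([[5, 1, 5],
                    [3, 3, 2]])
print(ber_index(answers, desirable_pole=[5, 1, 5]))  # -> [1.0, 0.0]
```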

4.
The potential for applicant response distortion on personality measures remains a major concern in high‐stakes testing situations. Many approaches to understanding response distortion are either too transparent (e.g., instructed faking studies) or too subtle (e.g., correlations with social desirability measures as indices of faking). Recent research reveals two more promising methods: using forced‐choice (FC) personality test items and warning against faking. The present study examined the effects of these two methods on criterion‐related validity and test‐taker reactions. Results supported incremental validity for both an FC and a Likert‐scale measure, in warning and no‐warning conditions, above and beyond cognitive ability. No clear differences emerged between the FC vs. Likert measures or warning vs. no‐warning conditions in terms of validity. However, some evidence suggested that FC measures and warnings may produce negative test‐taker reactions. We conclude with implications for implementation in selection settings.

5.
Despite widespread and growing acceptance that published personality tests are valid predictors of job performance, Morgeson et al. (2007) propose they be abandoned in personnel selection because average validity estimates are low. Our review of the literature shows that Morgeson et al.'s skepticism is unfounded. Meta-analyses have demonstrated that published personality tests, in fact, yield useful validity estimates when validation is based on confirmatory research using job analysis and taking into account the bidirectionality of trait–performance linkages. Further gains are likely by use of narrow over broad measures, multivariate prediction, and theory attuned to the complexities of trait expression and evaluation at work. Morgeson et al. also suggest that faking has little, if any, impact on personality test validity and that it may even contribute positively to job performance. Job applicant research suggests that faking under true hiring conditions attenuates personality test validity but that validity is still sufficiently strong to warrant personality test use in hiring. Contrary to Morgeson et al., we argue that the full value of published personality tests in organizations has yet to be realized, calling for programmatic theory-driven research.

6.
This study investigated the effectiveness of two recently developed measures of psychopathology, the Basic Personality Inventory (BPI) and the Millon Clinical Multiaxial Inventory-II (MCMI-II), in detecting dissimulation (i.e., faking good and faking bad). Both personality measures include special ‘validity scales’ to discern dissimulating responses. Ninety-one undergraduate students completed the two personality scales under one of three instructional sets: fake good, fake bad, or honest. In general, the results indicated that both scales were effective in distinguishing the groups from one another. The MCMI-II was better at detecting fake-bad responding, while the BPI appeared to be more effective in detecting fake-good responding. These differences in identifying fake-good and fake-bad response styles can be attributed to the way in which the scales were constructed.

7.
Although there is an emerging consensus that social desirability does not meaningfully affect criterion-related validity, several researchers have reaffirmed the argument that social desirability degrades the construct validity of personality measures. Yet, most research demonstrating the adverse consequences of faking for construct validity uses a fake-good instruction set. The consequence of such a manipulation is to exacerbate the effects of response distortion beyond what would be expected under realistic circumstances (e.g., an applicant setting). The research reported in this article was designed to assess these issues by using real-world contexts not influenced by artificial instructions. Results suggest that response distortion has little impact on the construct validity of personality measures used in selection contexts.

8.
Although self‐report personality tests are a comparatively cheap and easy‐to‐administer personnel selection tool, researchers have criticized them for not predicting enough criterion‐related variance. Researchers have suggested using observer‐ratings of personality (e.g., as part of a reference check from a supervisor) because observer‐ratings have been reported to be more predictive. However, it is theoretically and empirically unclear whether supervisors also engage in faking (the intentional distortion of responses). Study 1 explored faking among managers who were first asked to imagine that a subordinate had to leave his/her job for private reasons and then to rate the personality of the subordinate. A week later, managers rated their subordinates honestly. A repeated‐measures MANOVA indicated that managers did fake. Study 2 (among supervisors of working students) replicated the above findings but also showed that there is less faking in supervisor‐ratings than in self‐ratings. Furthermore, we found no evidence that the validity of personality scales for predicting academic performance depends on self‐ versus observer‐ratings or on an applicant versus an honest condition. These two studies thus show that practitioners should not equate personality ratings obtained from observers in a selection context with honest personality ratings.

9.
We conducted two experimental studies with between-subjects and within-subjects designs to investigate the item response process for personality measures administered in high- versus low-stakes situations. Apart from assessing measurement validity of the item response process, we examined predictive validity; that is, whether or not different response models entail differential selection outcomes. We found that ideal point response models fit slightly better than dominance response models across high- versus low-stakes situations in both studies. Additionally, fitting ideal point models to the data led to fewer items displaying differential item functioning compared to fitting dominance models. We also identified several items that functioned as intermediate items in both the faking and honest conditions when ideal point models were fitted, suggesting that the ideal point model is "theoretically" more suitable across these contexts for personality inventories. However, the use of different response models (dominance vs. ideal point) did not have any substantial impact on the validity of personality measures in high-stakes situations, or on the effectiveness of selection decisions such as mean performance or percent of fakers selected. These findings are significant: although prior research supports the importance and use of ideal point models for measuring personality, in the case of personality faking ideal point models show only slightly better measurement validity, and dominance models may be adequate with no loss to predictive validity.
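The two model families differ in the shape of the item response function: dominance models assume endorsement probability rises monotonically with the trait, while ideal point models assume it peaks where the person's trait level matches the item's location. The toy sketch below contrasts a 2PL logistic function with a Gaussian-kernel unfolding function; this is an illustrative simplification, not the exact models fitted in the study:

```python
import numpy as np

def dominance_2pl(theta, a, b):
    """Dominance model (2PL logistic): endorsement probability rises
    monotonically with trait level theta."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def ideal_point(theta, a, b):
    """Toy ideal point (unfolding) model: endorsement peaks where theta
    matches the item location b and falls off on both sides. A Gaussian
    kernel stands in for formal unfolding models such as the GGUM."""
    return np.exp(-a * (theta - b) ** 2)

theta = np.linspace(-3, 3, 7)
# An "intermediate" item located at b = 0: under the ideal point model,
# both very low and very high trait levels disagree with it, a pattern
# a monotone dominance model cannot reproduce.
print(dominance_2pl(theta, a=1.5, b=0.0).round(2))
print(ideal_point(theta, a=0.8, b=0.0).round(2))
```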

10.
Research has suggested the importance of applicants' expectations of forthcoming selection procedures in predicting how applicants react to selection procedures. Validated measures of selection expectations are still scarce, however. This study reports on the validation of the Applicant Expectation Survey (AES), intended to measure applicants' expectations of forthcoming selection procedures. The AES was validated using three military applicant samples and showed sound psychometric properties (i.e., reliability, measurement invariance, discriminant validity) for a five‐factorial oblique structure consisting of 26 items. The five factors (i.e., Warmth/respect, Chance to demonstrate potential, Difficulty of faking, Unbiased assessment, Feedback) were positively related to several organizational outcome measures and to applicants' perceptions of the selection procedure, providing evidence for the predictive validity of the AES.

11.
In a globalised world, more and more organisations have to select from pools of applicants from different cultures, often by using personality tests. If applicants from different cultures differ in the amount of faking on personality tests, this could threaten the tests' validity: applicants who engage in faking gain an advantage and put those who do not fake at a disadvantage. This is the first study to systematically examine and explain cross‐cultural differences in actual faking behavior. In N = 3,678 employees from 43 countries, a scenario‐based repeated measures design (faking vs. honest condition) was applied. Results showed that faking differed significantly across countries and that it was systematically related to countries' cultural characteristics (e.g., GLOBE's uncertainty avoidance, future orientation, humane orientation, and in‐group collectivism), but in an unexpected way. The study discusses these findings and their implications for research and practitioners.

12.
Three measures of response distortion (i.e., social desirability, a covariance index, and implausible answers) were examined in both applicant and incumbent samples. Performance data, including supervisor ratings of task and contextual performance as well as objective performance criteria such as tardiness, work‐related accidents, and a customized work simulation, were obtained for the incumbent sample. Results provided further support for the existence of applicant faking behavior and shed light on the relationship between faking and job performance, which depends largely on how one defines and measures faking as well as on the performance criteria evaluated. Implications for future research and practice in personality assessment for selection purposes are discussed.
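These indices can be operationalized in several ways; the sketch below shows one plausible reading of each. The covariance index in particular is an assumption (the within-person covariance between item responses and rated item desirability), not necessarily the paper's exact formula:

```python
import numpy as np

def social_desirability_score(sd_items):
    """Sum score on a social desirability / impression management scale."""
    return np.asarray(sd_items).sum(axis=1)

def covariance_index(responses, item_desirability):
    """Assumed definition (not necessarily the paper's exact formula):
    within-person covariance between a respondent's item responses and
    rated item desirability. High values suggest answers track how
    desirable each item is rather than the respondent's true standing."""
    responses = np.asarray(responses, dtype=float)
    d = np.asarray(item_desirability, dtype=float)
    centered = responses - responses.mean(axis=1, keepdims=True)
    return centered @ (d - d.mean()) / (d.size - 1)

def implausible_answers(bogus_items):
    """Count of endorsed bogus statements (e.g., claiming experience
    with nonexistent tasks); any endorsement is implausible."""
    return np.asarray(bogus_items).sum(axis=1)

# Two respondents, three items rated for desirability on a 1-5 scale:
# respondent 0 tracks item desirability, respondent 1 does not.
resp = np.array([[5, 4, 5], [2, 4, 3]])
print(covariance_index(resp, item_desirability=[4.8, 3.1, 4.5]))
```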

13.
The present research tested a model that integrated the theory of planned behavior (TPB) with a model of faking presented by McFarland and Ryan (2000) to predict faking on a personality test. In Study 1, the TPB explained sizable variance in the intention to fake. In Study 2, the TPB explained both the intention to fake and actual faking behavior. Different faking measures (i.e., difference scores and social desirability scales) tended to yield similar conclusions, but the difference scores were more strongly related to the variables in the model. These results provide support for a model that may increase understanding of applicant faking behavior and suggest reasons for the discrepancies in past research regarding the prevalence and consequences of faking.

14.
Because faking poses a threat to the validity of personality measures, research has focused on ways of detecting faking, including the use of response times. However, the applicability and validity of these approaches are dependent upon the actual cognitive process underlying faking. This study tested three competing cognitive models in order to identify the process underlying faking and to determine whether response time patterns are a viable method of detecting faking. Specifically, we used a within-subjects manipulation of instructions (respond honestly, make a good impression, make a specific impression) to examine whether the distribution of response times across response scale options (e.g., disagree, agree) could be used to identify faking on the NEO PI-R. Our results suggest that individuals reference a schema of an ideal respondent when faking. As a result, response time patterns such as the well-known inverted-U cannot be used to identify faking.
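The diagnostic at issue is the profile of response times across scale options: under honest responding, extreme options tend to be endorsed faster than intermediate ones, producing an inverted U. A minimal sketch of computing that profile, with hypothetical data:

```python
import numpy as np

def mean_rt_by_option(options, rts, n_options=5):
    """Mean response time for each response scale option (1..n_options).
    Under honest responding, extreme options are typically endorsed
    faster than intermediate ones, yielding an inverted-U profile."""
    options = np.asarray(options)
    rts = np.asarray(rts, dtype=float)
    return np.array([rts[options == k].mean()
                     for k in range(1, n_options + 1)])

# Hypothetical data: if fakers consult an "ideal respondent" schema, the
# honest and faked RT profiles can look alike, so the inverted U fails
# to separate the two conditions (the study's conclusion).
profile = mean_rt_by_option(options=[1, 2, 3, 4, 5, 3],
                            rts=[0.9, 1.4, 1.8, 1.5, 1.0, 1.7])
print(profile.round(2))  # fast at the extremes, slow in the middle
```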

15.
IN SUPPORT OF PERSONALITY ASSESSMENT IN ORGANIZATIONAL SETTINGS
Personality constructs have been demonstrated to be useful for explaining and predicting attitudes, behaviors, performance, and outcomes in organizational settings. Many professionally developed measures of personality constructs display useful levels of criterion-related validity for job performance and its facets. In this response to Morgeson et al. (2007), we comprehensively summarize previously published meta-analyses on (a) the optimal and unit-weighted multiple correlations between the Big Five personality dimensions and behaviors in organizations, including job performance; (b) generalizable bivariate relationships of Conscientiousness and its facets (e.g., achievement orientation, dependability, cautiousness) with job performance constructs; (c) the validity of compound personality measures; and (d) the incremental validity of personality measures over cognitive ability. Hundreds of primary studies and dozens of meta-analyses conducted and published since the mid-1980s indicate strong support for using personality measures in staffing decisions. Moreover, there is little evidence that response distortion among job applicants ruins the psychometric properties, including criterion-related validity, of personality measures. We also provide a brief evaluation of the merits of alternatives that have been offered in place of traditional self-report personality measures for organizational decision making. Given the cumulative data, writing off the whole domain of individual differences in personality or all self-report measures of personality from personnel selection and organizational decision making is counterproductive for the science and practice of I-O psychology.

16.
This study investigated the psychological processes underlying interview faking and linking personality to interview faking. In a sample of 198 recent interviewees, surveyed across three time points, we examined the mediating role of three constructs from the theory of planned behavior (TPB; i.e., attitudes, subjective norms, and perceived behavioral control) in explaining the relationship between the traits of Honesty–Humility and Conscientiousness and one form of interview faking (i.e., extensive image creation). Results indicated that all three TPB constructs correlated with interview faking, although only attitudes and subjective norms predicted faking incrementally. Attitudes and norms mediated the relationships between Honesty–Humility and Conscientiousness and interview faking. This study thus provides insight into interview faking and into the link between personality and faking behavior.

17.
This research assessed whether warning subjects that faked responses could be detected would reduce the amount of faking that might occur when using a personality test for selection of police officers. The personality test subscales that best differentiated honest from dissimulated responses were also determined. Subjects (N = 120) were randomly assigned to a straight-take (that is, respond honestly), fake good, or modified fake good group. Both fake good groups were instructed to respond to the test so as to appear favourably for the job; additionally, the modified fake good group was warned that faking could be detected and could reduce hiring chances. Multivariate analyses revealed significant differences on the Denial and Deviation subscales between the three conditions (p < 0.01). The pattern of differences suggested that the threat of faking detection reduced faking. Potential applications of these findings in personnel selection are discussed.

18.
Faking on personality assessments remains an unsolved issue, raising major concerns regarding their validity and fairness. Although there is a large body of quantitative research investigating the response process of faking on personality assessments, for both rating scales (RS) and multidimensional forced choice (MFC), only a few studies have yet qualitatively investigated faking cognitions when responding to MFC in a high-stakes context (e.g., Sass et al., 2020). Yet it could be argued that only when we have a process model that adequately describes response decisions in high-stakes settings can we begin to extract valid and useful information from assessments. Thus, this qualitative study investigated faking cognitions when responding to an MFC personality assessment in a high-stakes context. Through cognitive interviews with N = 32 participants, we explored and identified factors influencing test-takers' decisions regarding specific items and blocks, and factors influencing the willingness to engage in faking in general. Based on these findings, we propose a new response process model of faking forced-choice items, the Activate-Rank-Edit-Submit (A-R-E-S) model. We also make four recommendations for the practice of high-stakes assessments using MFC.

19.
Personality measures continue to be criticized for their susceptibility to faking and socially desirable responding. The present study examined the effects of warning applicants against faking on convergent validity of self-observer ratings. Four hundred sixty-four participants completed personality inventories in either a warned or unwarned condition. Results indicated that warning statements resulted in lower mean scores for some personality dimensions but did not improve convergent validity for any of these dimensions. Implications of these findings are discussed in relation to employment selection and future research.

20.
Although change scores in a measure administered under neutral versus faking-motivating conditions have become a main choice for operationalizing faking, some issues with the results they provide remain unresolved. The present study uses a two-wave two-group design with a control group to assess three of these issues: (a) the role of individual differences in the amount of faking-induced change, (b) the relation between Impression Management (IM) scores under neutral conditions and change scores, and (c) the convergent validity of change scores as a requisite to view them as measures of an individual-difference variable. A Spanish translation of the Eysenck Personality Questionnaire Revised was administered twice to 489 undergraduate students under standard-standard instructions (N = 215) and under standard-faking-good instructions (N = 274). For the P, N, and Lie scales, the results showed that the role of individual differences was very relevant and that the only common variable underlying the scores was a general factor of faking-induced change. However, the IM scores were unable to predict effective change.
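As a minimal sketch of the change-score operationalization under the paper's two-wave two-group design (simulated data; the group sizes mirror the abstract, everything else is illustrative):

```python
import numpy as np

def change_scores(time1, time2):
    """Faking operationalized as the time-2 minus time-1 change when the
    second administration is under fake-good instructions."""
    return np.asarray(time2, dtype=float) - np.asarray(time1, dtype=float)

rng = np.random.default_rng(1)
# Control group (standard-standard): change is mostly retest noise.
ctrl_t1 = rng.normal(50, 10, 215)
ctrl_t2 = ctrl_t1 + rng.normal(0, 3, 215)
# Experimental group (standard-faking-good): a mean shift plus large
# individual differences in how much each person changes.
exp_t1 = rng.normal(50, 10, 274)
exp_t2 = exp_t1 + rng.normal(8, 6, 274)

exp_change = change_scores(exp_t1, exp_t2)
ctrl_change = change_scores(ctrl_t1, ctrl_t2)
# Excess change-score variance over the control group indexes individual
# differences in faking rather than mere retest noise.
print(exp_change.var(ddof=1).round(1), ctrl_change.var(ddof=1).round(1))
```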
