Similar Documents
20 similar documents retrieved (search time: 78 ms)
1.
Recent research has highlighted competitive worldviews as a key predictor of faking—the intentional distortion of answers by candidates in the selection context. According to theoretical assumptions, applicants' abilities, and especially their cognitive abilities, should influence whether faking motivation, triggered by competitive worldviews, can be turned into successful faking behavior. Therefore, we examined the influence of competitive worldviews on faking in personality tests and investigated a possible moderation of this relationship by cognitive abilities in three independent high school and university student samples (N1 = 133, N2 = 137, N3 = 268). Our data showed no influence of either variable, or of their interaction, on faking behavior. We discuss possible reasons for these findings and give suggestions for further research.

2.
The present study evaluated the ability of item-level bifactor models (a) to provide an alternative explanation to current theories of higher order factors of personality and (b) to explain socially desirable responding in both job applicant and non-applicant contexts. Participants (46% male; mean age = 42 years, SD = 11) completed the 200-item HEXACO Personality Inventory-Revised either as part of a job application (n = 1613) or as part of low-stakes research (n = 1613). A comprehensive set of invariance tests was performed. Applicants scored higher than non-applicants on honesty-humility (d = 0.86), extraversion (d = 0.73), agreeableness (d = 1.06), and conscientiousness (d = 0.77). The bifactor model provided improved model fit relative to a standard correlated factor model, and loadings on the evaluative factor of the bifactor model were highly correlated with other indicators of item social desirability. The bifactor model explained approximately two-thirds of the differences between applicants and non-applicants. Results suggest that rather than being a higher order construct, the general factor of personality may be caused by an item-level evaluative process. Results highlight the importance of modelling data at the item level. Implications for conceptualizing social desirability, higher order structures in personality, test development, and job applicant faking are discussed.

3.
We examined the occurrence of faking on a rating situational judgment test (SJT) by comparing SJT scores and response styles of the same individuals across two naturally occurring situations. An SJT for medical school selection was administered twice to the same group of applicants (N = 317) under low-stakes (T1) and high-stakes (T2) circumstances. The SJT was scored using three different methods that were differentially affected by response tendencies. Applicants used significantly more extreme responding on T2 than T1. Faking (higher SJT score on T2) was only observed for scoring methods that controlled for response tendencies. Scoring methods that do not control for response tendencies introduce systematic error into the SJT score, which may lead to inaccurate conclusions about the existence of faking.

4.
We developed a supervised machine learning classifier to identify faking good by analyzing item response patterns of a Big Five personality self-report. We used a between-subject design, dividing participants (N = 548) into two groups and manipulating their faking behavior via instructions given prior to administering the self-report. We implemented a simple classifier based on the Lie scale's cutoff score and several machine learning models fitted either to the personality scale scores or to the item response patterns. Results showed that the best machine learning classifier—based on the XGBoost algorithm and fitted to the item responses—was better at detecting faked profiles than the Lie scale classifier.
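A minimal sketch of this kind of detection pipeline is shown below, assuming a matrix X of item responses and labels y distinguishing the honest from the instructed-faking group. The data are simulated, and the hyperparameters, the 10-item Lie scale, and the 90th-percentile cutoff are illustrative assumptions rather than details taken from the study.

```python
# Sketch: detecting instructed faking from item-level responses with XGBoost,
# compared against a simple Lie-scale cutoff rule. Data here are simulated
# placeholders; in the study, X would hold the Big Five item responses of
# N = 548 participants and y the honest (0) vs. fake-good (1) group labels.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
n, n_items = 548, 120                      # assumed questionnaire length
y = rng.integers(0, 2, size=n)             # 0 = honest, 1 = instructed faking
# Fakers shift their answers toward the desirable pole (crude simulation).
X = rng.normal(size=(n, n_items)) + 0.6 * y[:, None]
lie_scale = X[:, :10].mean(axis=1)         # pretend the first 10 items form a Lie scale

X_tr, X_te, y_tr, y_te, lie_tr, lie_te = train_test_split(
    X, y, lie_scale, test_size=0.3, random_state=0, stratify=y)

# Machine-learning classifier fitted to the item response patterns.
clf = XGBClassifier(n_estimators=300, max_depth=3, learning_rate=0.05,
                    eval_metric="logloss")
clf.fit(X_tr, y_tr)
auc_xgb = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])

# Baseline: classify as "faking" whenever the Lie scale exceeds a cutoff
# derived from the honest group (the 90th percentile is an arbitrary choice here).
cutoff = np.percentile(lie_tr[y_tr == 0], 90)
auc_lie = roc_auc_score(y_te, (lie_te >= cutoff).astype(int))

print(f"item-level XGBoost AUC: {auc_xgb:.2f}  vs.  Lie-scale cutoff AUC: {auc_lie:.2f}")
```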

5.
We evaluated the validity of the Overclaiming Questionnaire (OCQ) as a measure of job applicants' faking of personality tests. We assessed whether the OCQ (a) converged with an established measure of applicant faking, Residualized Individual Change Scores (RICSs); (b) predicted admission of faking and faking tendencies (Faking Frequency, Minimizing Weaknesses, Exaggerating Strengths, and Complete Misrepresentation); and (c) predicted the aforementioned measures as strongly as RICSs did. First, 261 participants were instructed to respond honestly to an extraversion measure. Next, in a mock job application, they filled out the extraversion measure again, as well as the OCQ. The OCQ only weakly predicted RICSs (r = .17), Faking Admission (r = .18), and Faking Frequency (r = .15), and it failed to correlate significantly with Minimizing Weaknesses, Exaggerating Strengths, and Complete Misrepresentation. Moreover, the OCQ performed significantly worse than RICSs in predicting Faking Admission, Faking Frequency, Minimizing Weaknesses, Exaggerating Strengths, and Complete Misrepresentation. We urge caution in using the current version of the OCQ to measure faking, but speculate that the innovative approach taken in the OCQ might be more effectively exploited if the OCQ content were tailored to the specific job that applicants are being tested for.
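Residualized individual change scores are commonly computed by regressing the high-stakes score on the honest baseline and keeping the residuals; whether the study used exactly this procedure is an assumption. A minimal sketch with simulated data:

```python
# Sketch: residualized individual change scores (RICS) as commonly computed --
# regress the high-stakes (mock-application) score on the honest baseline and
# keep the residuals, so that positive residuals indicate scoring higher than
# the honest score would predict (a faking indicator). The data are simulated.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
honest = rng.normal(3.0, 0.6, size=261)                  # honest extraversion scores
applicant = honest + rng.normal(0.4, 0.5, size=261)      # mock-application scores

model = LinearRegression().fit(honest.reshape(-1, 1), applicant)
rics = applicant - model.predict(honest.reshape(-1, 1))  # residualized change

# Correlate RICS with any other faking indicator, e.g. an OCQ score (placeholder values).
ocq = rng.normal(size=261)
r = np.corrcoef(rics, ocq)[0, 1]
print(f"r(RICS, OCQ) = {r:.2f}")
```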

6.
Multiple frameworks and models postulate an effect of job interview preparation on faking. Two studies were conducted to examine whether applicants' interview preparation is associated with more faking. Besides analyzing the general extent of preparation, we also distinguished between different preparation categories. In Study 1 (N = 237), presenting a preparation video increased intentions to engage in image protection but did not increase overall faking intentions. Study 2 (N = 206) focused on past preparation and impression management (IM). The total time spent on preparation was positively correlated with faking. Preparation via online videos and professional interview preparation were both correlated with higher deceptive and honest IM. Preparation via online videos was additionally correlated with higher perceived interview difficulty.

7.
Effects of the testing situation on item responding: cause for concern
The effects of faking on personality test scores have been studied previously by comparing (a) experimental groups instructed to fake or answer honestly, (b) subgroups created from a single sample of applicants or nonapplicants by using impression management scores, and (c) job applicants and nonapplicants. In this investigation, the latter 2 methods were used to study the effects of faking on the functioning of the items and scales of the Sixteen Personality Factor Questionnaire. A variety of item response theory methods were used to detect differential item/test functioning, interpreted as evidence of faking. The presence of differential item/test functioning across testing situations suggests that faking adversely affects the construct validity of personality scales and that it is problematic to study faking by comparing groups defined by impression management scores.
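The abstract does not specify which IRT procedures were applied. As one accessible stand-in, the sketch below runs a logistic-regression screen for uniform differential item functioning (in the spirit of Swaminathan and Rogers), which is not the specific method used in the study; all data and variable names are illustrative.

```python
# Sketch: a logistic-regression DIF screen on a simulated dichotomous item.
# `item` is the item response, `total` a rest/total score serving as the matching
# variable, and `group` codes applicants (1) vs. nonapplicants (0); uniform DIF
# shows up as a significant group effect after conditioning on the total score.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 2000
group = rng.integers(0, 2, size=n)
theta = rng.normal(size=n)
total = theta + rng.normal(scale=0.5, size=n)
# Simulate uniform DIF: the item is endorsed more often by applicants at equal theta.
p = 1 / (1 + np.exp(-(theta + 0.8 * group)))
item = rng.binomial(1, p)

df = pd.DataFrame({"item": item, "total": total, "group": group})
m0 = smf.logit("item ~ total", data=df).fit(disp=0)           # no DIF
m1 = smf.logit("item ~ total + group", data=df).fit(disp=0)   # uniform DIF
lr = 2 * (m1.llf - m0.llf)                                     # likelihood-ratio statistic, df = 1
print(f"LR chi-square for uniform DIF: {lr:.1f}")
```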

8.
The impact of response distortion (faking) on selection decisions was investigated. Participants (N = 224) completed the NEO-PI-R under instructions to “make the most favorable impression” and/or “answer honestly.” Those instructed to fake were often over-represented at the top of the score distributions as instructions to fake resulted in higher scores both between and within groups in a test–retest situation. There was significantly lower correspondence between participants’ honest scores and their faked scores as well as multiple instances where participants with unfavorable honest scores subsequently produced the most favorable scores when faking. Response distortion may remain a serious threat to the use of personality test scores in selection.

9.
This study examines the stability of the response process and the rank-order of respondents responding to 3 personality scales in 4 different response conditions. Applicants to the University College of Teacher Education Styria (N = 243) completed personality scales as part of their college admission process. Half a year later, they retook the same personality scales in 1 of 3 randomly assigned experimental response conditions: honest, faking-good, or reproduce. Longitudinal means and covariance structure analyses showed that applicants' response processes could be partially reproduced after half a year, and respondents seemed to rely on an honest response behavior as a frame of reference. Additionally, applicants' faking behavior and instructed faking (faking-good) caused differences in the latent retest correlations and consistently affected measurement properties. The varying latent retest correlations indicated that faking can distort respondents' rank-order and thus the fairness of subsequent selection decisions, depending on the kind of faking behavior. Instructed faking (faking-good) even affected weak measurement invariance, whereas applicants' faking behavior did not. Consequently, correlations with personality scales—which can be utilized for predictive validity—may be readily interpreted for applicants. Faking behavior also introduced a uniform bias, implying that the classically observed mean raw score differences may not be readily interpreted.

10.
Companies and organizations use integrity tests to screen job applicants, and the fakability of these tests remains a concern. The present study uses two separate designs to analyze the fakability of the Personnel Reaction Blank (PRB) and the personality constructs related to integrity test scores. The results demonstrate that the PRB can be successfully faked. Moreover, a within-participants design resulted in significantly greater faking than the between-participants design. The personality constructs conscientiousness, agreeableness, and neuroticism were significantly correlated with honest scores on the PRB, and there was a significant negative correlation between conscientiousness and magnitude of faking.

11.
Many companies recruit employees from different parts of the globe, and faking behavior by potential employees is a ubiquitous phenomenon. It seems that applicants from some countries are more prone to faking than others, but the reasons for these differences are largely unexplored. This study relates country-level economic variables to faking behavior in hiring processes. In a cross-national study across 20 countries, participants (N = 3,839) reported their faking behavior in their last job interview. The study used the randomized response technique (RRT) to ensure participants' anonymity and to foster honest answers regarding faking behavior. Results indicate that general economic indicators (gross domestic product per capita and unemployment rate) show negligible correlations with faking across countries, whereas economic inequality is substantially and positively related to the extent of applicant faking. These findings imply that people are sensitive to inequality within countries and that inequality relates to faking because it might actuate other psychological processes (e.g., envy), which in turn increase the probability of unethical behavior in many forms.
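For readers unfamiliar with the randomized response technique, the sketch below shows how a prevalence estimate can be recovered from forced-response RRT data. The design probabilities and the yes-count are illustrative assumptions; the abstract does not report which RRT variant or parameters were used.

```python
# Sketch: estimating the prevalence of a sensitive behavior (here, faking in the
# last job interview) from forced-response RRT data. With probability p_truth a
# respondent answers the sensitive question truthfully, with probability p_yes
# they are forced to answer "yes", and with the remaining probability forced to
# answer "no". All parameter values below are assumptions for illustration.
def rrt_prevalence(n_yes: int, n_total: int,
                   p_truth: float = 0.75, p_yes: float = 0.125) -> tuple[float, float]:
    """Return the estimated true prevalence and its standard error."""
    lam = n_yes / n_total                       # observed proportion of "yes" answers
    pi_hat = (lam - p_yes) / p_truth            # since E[lam] = p_truth * pi + p_yes
    se = (lam * (1 - lam) / n_total) ** 0.5 / p_truth
    return pi_hat, se

# Example: 1,400 "yes" answers out of 3,839 respondents (the yes-count is made up).
pi, se = rrt_prevalence(1400, 3839)
print(f"estimated faking prevalence: {pi:.2%} (SE = {se:.2%})")
```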

12.
Two studies sought to determine personality and cognitive ability correlates of proof-reading. In both studies, candidates were given 5 min to identify up to 55 errors in a 920-word, two-page document. In Study 1, which tested 240 school children, fluid intelligence (as measured by the Baddeley Reasoning Test) was the highest correlate of proof-reading (r = .30). Eleven percent of the variance in total attempted scores was accounted for by intelligence, Introversion, and low Conscientiousness. In the second study, 70 undergraduates completed the same proof-reading test along with two intelligence tests (Baddeley Reasoning Test; Wonderlic Personnel Test) and a more robust personality measure (NEO-FFI). Proof-reading was correlated with both intelligence tests (Baddeley r = .45; Wonderlic r = .40). More of the variance was accounted for in the total number of errors attempted than in the number of errors correctly detected. When the proof-reading score was regressed onto the two intelligence and five personality trait scores, over a quarter of the variance (adjusted R² = .28) was accounted for, but only the Baddeley test was a significant predictor (β = .39).
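A sketch of the kind of regression reported in Study 2, with the proof-reading score as outcome and the two intelligence tests plus the five NEO-FFI traits as predictors, appears below. The data are simulated and the column names are placeholders; only the model structure follows the abstract.

```python
# Sketch: multiple regression of a proof-reading score on two intelligence tests
# and the five NEO-FFI traits. Simulated placeholder data; coefficients will not
# reproduce the reported values (adjusted R^2 = .28, Baddeley beta = .39).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 70
df = pd.DataFrame({
    "baddeley": rng.normal(size=n),
    "wonderlic": rng.normal(size=n),
    "neuro": rng.normal(size=n), "extrav": rng.normal(size=n), "openn": rng.normal(size=n),
    "agree": rng.normal(size=n), "consc": rng.normal(size=n),
})
df["proofreading"] = 0.4 * df["baddeley"] + 0.2 * df["wonderlic"] + rng.normal(scale=0.9, size=n)

model = smf.ols("proofreading ~ baddeley + wonderlic + neuro + extrav + openn + agree + consc",
                data=df).fit()
print(model.rsquared_adj)        # adjusted R^2
print(model.params["baddeley"])  # unstandardized slope; the abstract reports a standardized beta
```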

13.
In a globalised world, more and more organisations have to select from pools of applicants from different cultures, often by using personality tests. If applicants from different cultures were to differ in the amount of faking on personality tests, this could threaten the tests' validity: applicants who engage in faking will have an advantage and will put those who do not fake at a disadvantage. This is the first study to systematically examine and explain cross-cultural differences in actual faking behavior. In N = 3,678 employees from 43 countries, a scenario-based repeated measures design (faking vs. honest condition) was applied. Results showed that faking differed significantly across countries and that it was systematically related to countries' cultural characteristics (e.g., GLOBE's uncertainty avoidance, future orientation, humane orientation, and in-group collectivism), but in an unexpected way. The study discusses these findings and their implications for research and practitioners.

14.
Evidence suggests that job applicants often "fake" on pre-employment personality tests by attempting to portray an exceedingly desirable impression in order to improve the likelihood of being selected. In the current research we shed light on the personality characteristics of those individuals who seem most likely to engage in faking. We refer to these personality variables as non-targeted traits when they are not directly targeted by the organization's pre-employment personality test. These traits, however, may have an influence on targeted scores used for employment decision making through their effect on faking. Findings suggest that individuals will be more likely to be hired if they are low on non-targeted traits including Honesty–Humility, Integrity, and Morality, and high on Risk Taking. Such individuals also reported higher levels of workplace deviance in their current jobs. Thus, it seems that individuals low on Honesty–Humility, Integrity, and Morality, and individuals high on Risk Taking, may be most likely to engage in personality test faking, be hired, and participate in workplace deviant behaviors if these traits are not directly targeted in selection.

15.
Research on the role of intelligence in the capacity to fake personality tests tends to use the Big Five and g-factor models of personality and intelligence. The current study (N = 185 university students) examines instructed faking on the HEXACO and separately considers the role of fluid intelligence (Gf) and crystallized intelligence (Gc). Results demonstrate that: (a) participants can fake the HEXACO domains and facets to about the same extent as the Big Five; (b) faking has little effect on domain reliability but reduces facet reliability; and (c) faking involves Gc to a much greater degree than Gf. Results are discussed in terms of practical applications of facet scores, and process models of faking.

16.
This paper presents the results of three interrelated studies investigating the occurrence of response distortion on personality questionnaires within selection and the success of applicants in faking situations. In Study 1, comparison of the Big Five personality scores obtained from applicants in a military pilot cadet selection procedure with participants responding honestly, faking good, and faking an ideal candidate revealed that applicants responded more desirably than participants responding honestly but less desirably than respondents under fake instructions. The occurrence of faking within the military pilot selection process was replicated in Study 2 using the Eysenck Personality Questionnaire and another comparison group. Finally, in Study 3, comparison of personality profiles obtained in selection and 'fake job' situations with experts' estimates indicated that participants were partially successful in faking the desirable profile.

17.
Most faking research has examined the use of personality measures when using top-down selection. We used simulation to examine the use of personality measures in selection systems using cut scores and outlined a number of issues unique to these situations. In particular, we compared the use of 2 methods of setting cut scores on personality measures: applicant-data-derived (ADD) and nonapplicant-data-derived (NADD) cut-score strategies. We demonstrated that the ADD strategy maximized mean performance resulting from the selection system in the face of applicant faking but that this strategy also resulted in the displacement of deserving applicants by fakers (which has fairness implications). On the other hand, the NADD strategy minimized displacement of deserving applicants but at the cost of some mean performance. Therefore, the use of the ADD versus NADD strategies can be viewed as a strategic decision to be made by the organization, as there is a tradeoff between the 2 strategies in effects on performance versus fairness to applicants. We quantitatively outlined these tradeoffs at various selection ratios, levels of validity, and amounts of faking in the applicant pool.
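A toy Monte Carlo illustrating the ADD versus NADD tradeoff is sketched below. All distributional choices (normal scores, a 30% faking rate, a top-30% cut, validity of .30) are illustrative assumptions rather than the parameters used in the study.

```python
# Sketch: toy Monte Carlo of the ADD vs. NADD cut-score tradeoff. Honest trait
# scores predict performance; a fraction of applicants fake and inflate their
# observed scores. The ADD cut is set from the (partly faked) applicant
# distribution, the NADD cut from honest (non-applicant-like) data.
import numpy as np

rng = np.random.default_rng(3)
n, validity, faking_rate, pass_rate = 10_000, 0.30, 0.30, 0.30

true_score = rng.normal(size=n)
performance = validity * true_score + np.sqrt(1 - validity**2) * rng.normal(size=n)
faker = rng.random(n) < faking_rate
observed = true_score + faker * rng.uniform(0.5, 1.5, size=n)   # fakers inflate scores

add_cut = np.quantile(observed, 1 - pass_rate)       # cut derived from applicant data
nadd_cut = np.quantile(true_score, 1 - pass_rate)    # cut derived from honest data

for label, cut in [("ADD", add_cut), ("NADD", nadd_cut)]:
    selected = observed >= cut
    deserving = true_score >= nadd_cut               # would pass on honest scores
    displaced = np.mean(deserving & ~selected)       # deserving applicants screened out
    print(f"{label}: mean performance of selected = {performance[selected].mean():.3f}, "
          f"share of deserving applicants displaced = {displaced:.3f}")
```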

18.
Indirect methods such as the implicit association test (IAT) could complement traditional self-report questionnaires of personality traits. However, it is unclear whether IAT scores and self-report scores of nominally the same personality trait measure the same construct or overlapping but distinct constructs. To investigate how IAT and self-report personality scores relate to each other, we conducted a web-based data collection where participants completed self-report personality questionnaires (n = 432) and IATs for extraversion (n = 393) and neuroticism (n = 385). We found that extraversion self-report and IAT scores were more strongly correlated with each other than corresponding neuroticism scores. Overall, our findings suggest that although extraversion and neuroticism self-report and implicit measures are related, they do measure distinct constructs.

19.
Although self-rated or self-scored selection measures are commonly used in selection contexts, they are potentially susceptible to applicant response distortion or faking. The response elaboration technique (RET), which requires job applicants to provide supporting information to justify their responses, has been identified as a potential way to minimize applicant response distortion. In a large-scale, high-stakes selection context (N = 16,304), we investigate the extent to which RET affects responding on a biodata test as well as the underlying reasons for any potential effect. We find that asking job applicants to elaborate their responses leads to overall lower scores on a biodata test. Item verifiability affects the extent to which RET decreases faking, which we suggest is due to increased accountability. In addition, verbal ability was more strongly related to biodata item scores when items required elaboration, although the effect of verbal ability was small. The implications of these findings for reducing faking in personnel selection are delineated.

20.
Many practitioners fear that applicants will fake if they are asked to fill out a personality test. Although this fear has inspired much research, it remains unknown what applicants think when they fill out a questionnaire. Thus, we conducted a qualitative interview study that was guided by grounded theory principles. We interviewed (a) real applicants directly after filling out a personality test; (b) real applicants who had filled out a personality test in their past job hunt; (c) hypothetical job applicants whom we asked to imagine being an applicant and to fill out a personality test; and (d) hypothetical applicants who had much experience with personality tests. Theoretical saturation was achieved after interviewing 23 people. A content analysis showed that much is going on in applicants' minds – that which is typically subsumed under the expression 'faking' actually consists of many facets. In particular, participants assumed that the interpretation of their responses could be based on (a) the consistency of their responses; (b) the endorsement of middle versus extreme answers; and (c) a certain profile, and these assumptions resulted in corresponding self-presentation strategies. However, these strategies were not used by all participants. Some answered honestly, for different reasons ranging from honesty as a personality trait to the (false) belief that test administrators can catch fakers. All in all, this study questions whether measuring mean changes in classical faking studies captures all important facets.
