Similar Articles
20 similar articles found
1.
The potential for faking on noncognitive measures in high stakes testing situations remains a concern for many selection researchers and practitioners. However, the majority of previous research examining the practical effects of faking on noncognitive assessments has focused on these measures in isolation, rather than the more common situation in which they are used in combination with other predictors. The present simulation examined the effects of faking on a conscientiousness measure on criterion-related validity, mean performance of those selected, and selection decision consistency when hiring decisions were based on this measure alone vs. in combination with two other predictors across a range of likely selection scenarios. Overall, results indicated that including additional predictors substantially reduced – but did not eliminate – the negative effects of faking. Faking effects varied across outcomes and selection scenarios, with effects ranging from trivial to noteworthy even for multiple-predictor selection. Implications for future research and practice are discussed.

2.
Intentional response distortion or faking among job applicants completing measures such as personality and integrity tests is a concern in personnel selection. The present study aimed to investigate whether eye-tracking technology can improve our understanding of the response process when faking. In an experimental within-participants design, a Big Five personality test and an integrity measure were administered to 129 university students in 2 conditions: a respond-honestly and a fake-good instruction. Item responses, response latencies, and eye movements were measured. Results demonstrated that all personality dimensions were fakeable. In support of the theoretical position that faking involves a less cognitively demanding process than responding honestly, we found that response times were on average 0.25 s faster and participants had fewer eye fixations in the fake good condition. However, in the fake good condition, participants had more fixations on the 2 extreme response options of the 5-point answering scale, and they fixated on these more directly after having read the question. These findings support the idea that faking leads to semantic rather than self-referenced item interpretations. Eye-tracking was demonstrated to be potentially useful in detecting faking behavior, improving detection rates over and beyond response extremity and latency metrics.

3.
Although previous research has indicated that faking can affect integrity test scores, the effects of coaching on integrity test scores have never been examined. We conducted a between-subjects experiment to assess the effects of coaching and faking instructions on an overt and a covert integrity test. Coaching provided simple rules to follow when answering test items and instructions on how to avoid elevated validity scale scores. There were five instruction conditions: "just take," "fake good," "coach overt," "coach covert," and "coach both." All subjects completed both overt and covert tests and a measure of intelligence. Results provided strong evidence for the coachability of the overt integrity test, over and above the much smaller elevation in the faking condition. The covert test apparently could be neither coached nor faked successfully. Scores on both integrity tests tended to be positively correlated with intelligence in the coaching and faking conditions. We discuss the generalizability of these results to other samples and other integrity tests, and the relevance of the coachability of integrity tests to the ongoing debate concerning the prediction of counterproductive behavior.

4.
This study provides a partial test of the model of faking proposed by McFarland and Ryan (2000) by examining the degree to which self-monitoring, knowledge of the constructs being measured, job familiarity, and openness to ideas account for variance in ability to fake on a personality measure. Undergraduates (N = 342) completed a modified version of the NEO Personality Inventory–Revised under both honest and faking instructions. In addition, some participants were asked to "fake good" and others were asked to fake toward the requirements of a specific job (i.e., accountant). Consistent with prior research, the fake good manipulation was found to increase scores on 8 of the 9 personality variables. In contrast, the fake accountant manipulation resulted in a personality profile consistent with a priori hypotheses in which some personality scores were significantly increased whereas others were significantly decreased. Results also revealed that the individual difference variables explained a significant portion of the variance in the ability to fake and that the two faking conditions produced different relations with these individual difference variables. As hypothesized, the participants' openness to ideas was a significant predictor of the ability to fake like an accountant (β = .22) but not the ability to fake good. In contrast, the participants' knowledge of the constructs being measured was a significant predictor of the ability to fake good (β = .18) but not the ability to fake like an accountant. We conclude by elaborating on the importance of using a hypothesis testing approach in the assessment of the relations between personality variables and job performance.

5.
In selection research and practice, there have been many attempts to correct scores on noncognitive measures for applicants who may have faked their responses somehow. A related approach with more impact would be identifying and removing faking applicants from consideration for employment entirely, replacing them with high-scoring alternatives. The current study demonstrates that under typical conditions found in selection, even this latter approach has minimal impact on mean performance levels. Results indicate about .1 SD change in mean performance across a range of typical correlations between a faking measure and the criterion. Where trait scores were corrected only for suspected faking, and applicants not removed or replaced, the minimal impact the authors found on mean performance was reduced even further. By comparison, the impact of selection ratio and test validity is much larger across a range of realistic levels of selection ratios and validities. If selection researchers are interested only in maximizing predicted performance or validity, the use of faking measures to correct scores or remove applicants from further employment consideration will produce minimal effects.
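To make the mechanism behind this result concrete, the sketch below is a minimal Monte Carlo illustration, not the authors' simulation: every parameter value (validity, selection ratio, faking prevalence, size of the faking boost) is an assumption chosen for illustration, and it simplifies the study's design by treating fakers as perfectly identifiable.

import numpy as np

rng = np.random.default_rng(0)

n_applicants = 10_000
validity = 0.30         # assumed test-criterion correlation
selection_ratio = 0.15  # assumed proportion of applicants hired
faking_rate = 0.20      # assumed proportion of applicants who fake
faking_shift = 1.0      # assumed score inflation for fakers, in SD units

# Latent trait and job performance, correlated at `validity`
trait = rng.standard_normal(n_applicants)
performance = validity * trait + np.sqrt(1 - validity**2) * rng.standard_normal(n_applicants)

# Observed test scores: fakers receive a constant boost
fakers = rng.random(n_applicants) < faking_rate
observed = trait + faking_shift * fakers

n_hired = int(selection_ratio * n_applicants)

# Strategy 1: ignore faking and hire the top scorers on the observed score
top_down = np.argsort(observed)[::-1][:n_hired]

# Strategy 2: remove everyone flagged as a faker, then hire the top honest scorers
honest = np.where(~fakers)[0]
fakers_removed = honest[np.argsort(observed[honest])[::-1][:n_hired]]

print("Mean performance, fakers ignored:", round(float(performance[top_down].mean()), 3))
print("Mean performance, fakers removed:", round(float(performance[fakers_removed].mean()), 3))

With these assumed values the two hiring strategies typically differ by only a small fraction of a standard deviation in mean performance, whereas changing the selection ratio or the test's validity in the same sketch shifts mean performance far more, which is the paper's central point.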

6.
The present research tested a model that integrated the theory of planned behavior (TPB) with a model of faking presented by McFarland and Ryan (2000) to predict faking on a personality test. In Study 1, the TPB explained sizable variance in the intention to fake. In Study 2, the TPB explained both the intention to fake and actual faking behavior. Different faking measures (i.e., difference scores and social desirability scales) tended to yield similar conclusions, but the difference scores were more strongly related to the variables in the model. These results provide support for a model that may increase understanding of applicant faking behavior and suggest reasons for the discrepancies in past research regarding the prevalence and consequences of faking.  相似文献   

7.
This research assessed whether warning subjects that faked responses could be detected would reduce the amount of faking that might occur when using a personality test for selection of police officers. Also, personality test subscales which best differentiated honest from dissimulated responses were determined. Subjects (N = 120) were randomly assigned to a straight-take (that is, respond honestly), fake good, or modified fake good group. Both fake good groups were instructed to respond to the test so as to appear favourably for the job; additionally, the modified fake good group was warned that faking could be detected and could reduce hiring chances. Multivariate analyses revealed significant differences on the Denial and Deviation subscales between the three conditions (p < 0.01). The pattern of differences suggested that the threat of faking detection reduced faking. Potential application of these findings in personnel selection was discussed.

8.
Treatment acceptability refers to how acceptable various treatment alternatives are to individuals who are subjected to and who implement those treatments. Although treatment-acceptability research has grown in popularity, some have questioned its usefulness. In particular, Schwartz and Baer (1991) question whether staff might be telling us what we want to hear, analogous to the phenomenon of test-takers "faking good" while taking personality tests. In this study, we sought to investigate the possibility of such bias in treatment-acceptability ratings. Direct-care staff at a large residential facility were presented with a clinical vignette and five treatment options to rate. They also received three different types of instructions (standard, "fake good," and "prompted honesty") designed to determine whether biases in ratings would appear. Results indicate that, under these conditions, staff do not fake good, i.e., there were no differences across instructional conditions. Collapsing across conditions, staff did differ in their ratings on the five treatment alternatives. Reasons for current results and suggestions for further research are discussed.

9.
A concern about personality inventories in diagnostic and decision-making contexts is that individuals will fake. Although there is extensive research on faking, little research has focused on how perceptions of personality items change when individuals are faking or responding honestly. This research demonstrates how the delta parameter from the generalized graded unfolding item response theory model can be used to examine how individuals' perceptions about personality items might change when responding honestly or when faking. The results indicate that perceptions changed from honest to faking conditions for several neuroticism items. The direction of the change varied, indicating that faking can operate to increase or decrease scores within a personality factor.
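As a reference sketch for readers unfamiliar with the model (not taken from the study itself), one standard presentation of the generalized graded unfolding model (GGUM) response function, in which the delta parameter appears as the item location, is

\[
P(Z_i = z \mid \theta_j) = \frac{\exp\{\alpha_i[z(\theta_j - \delta_i) - \sum_{k=0}^{z}\tau_{ik}]\} + \exp\{\alpha_i[(M - z)(\theta_j - \delta_i) - \sum_{k=0}^{z}\tau_{ik}]\}}{\sum_{w=0}^{C}\left(\exp\{\alpha_i[w(\theta_j - \delta_i) - \sum_{k=0}^{w}\tau_{ik}]\} + \exp\{\alpha_i[(M - w)(\theta_j - \delta_i) - \sum_{k=0}^{w}\tau_{ik}]\}\right)}
\]

where z = 0, ..., C indexes the response categories, M = 2C + 1, \alpha_i is the item discrimination, \delta_i is the item's location on the trait continuum, and \tau_{ik} are subjective response category thresholds (with \tau_{i0} = 0). A shift in the estimated \delta_i between honest and faking conditions is what the study interprets as a change in how respondents perceive the item.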

10.
This study investigated the effectiveness of two recently developed measures of psychopathology, the Basic Personality Inventory (BPI) and the Millon Clinical Multiaxial Inventory-II (MCMI-II), in detecting dissimulation (i.e., faking good and faking bad). Both personality measures have developed special 'validity scales' to discern dissimulating responses. Ninety-one undergraduate students completed the two personality scales under one of three instructional sets: fake good, fake bad, and honest. In general, the results indicated that both scales were effective in distinguishing the groups from one another. The MCMI-II was better at detecting fake bad responding, while the BPI appeared to be more effective in detecting fake good responding. These differences in identifying fake good and fake bad response styles can be attributed to the method in which the scales were constructed.

11.
Effects of the testing situation on item responding: cause for concern
The effects of faking on personality test scores have been studied previously by comparing (a) experimental groups instructed to fake or answer honestly, (b) subgroups created from a single sample of applicants or nonapplicants by using impression management scores, and (c) job applicants and nonapplicants. In this investigation, the latter 2 methods were used to study the effects of faking on the functioning of the items and scales of the Sixteen Personality Factor Questionnaire. A variety of item response theory methods were used to detect differential item/test functioning, interpreted as evidence of faking. The presence of differential item/test functioning across testing situations suggests that faking adversely affects the construct validity of personality scales and that it is problematic to study faking by comparing groups defined by impression management scores.

12.
In corporate personnel selection settings, job applicants can easily fake on personality tests. Research on faking to date has covered many aspects, including its nature, sources, and detection, and several psychological models have been proposed to explain the mechanisms underlying faking, such as the interaction theory of faking motivation and faking ability, the theory of planned behavior applied to faking, the integrative model of faking, the general model of faking behavior, and the VIE (valence-instrumentality-expectancy) model of faking, pointing the way for subsequent theoretical research. In addition, faking on emerging web-based personality tests has attracted attention in applied settings; here we describe how faking behavior and faking intentions differ between online and paper-and-pencil personality tests.

13.
In a globalised world, more and more organisations have to select from pools of applicants from different cultures, often by using personality tests. If applicants from different cultures were to differ in the amount of faking on personality tests, this could threaten their validity: Applicants who engage in faking will have an advantage, and will put those who do not fake at a disadvantage. This is the first study to systematically examine and explain cross-cultural differences in actual faking behavior. In N = 3,678 employees from 43 countries, a scenario-based repeated measures design (faking vs. honest condition) was applied. Results showed that faking differed significantly across countries, and that it was systematically related to countries' cultural characteristics (e.g. GLOBE's uncertainty avoidance, future orientation, humane orientation, and in-group collectivism), but in an unexpected way. The study discusses these findings and their implications for research and practitioners.

14.
The faking-detection validity and incremental validity of response latencies to Minnesota Multiphasic Personality Inventory (MMPI) items was investigated using an analog research design. One hundred undergraduates were assigned at random to five groups: each group received different faking instructions (standard, fake good, fake bad, fake good with incentive, fake bad with incentive). All subjects completed a computer-administered version of the MMPI. Content-determined response deviance scores and latencies of responses to Subtle and Obvious scale items were determined for each subject. The principal findings suggest that response latencies may have greater faking-good detection ability than response deviance scores, and that response latencies have statistically significant incremental validity for both the detection of faking good and faking bad, when latencies are used with response deviance scores obtained from Subtle and Obvious scales.

15.
Researchers are focusing on developing implicit measures of personality to address concerns related to the faking of self-report measures. The present study examined the validity and fakeability of Implicit Association Test (IAT) measures of personality self-concept in a repeated-measures design (N = 33). People's predictions about how they represented themselves on the measures were also assessed. Results indicated that participants were able to fake self-report measures when instructed to do so and that they could accurately predict how they represented themselves on these measures. Participants were also able to fake an IAT measure of Extraversion, but were unable to fake an IAT measure of Conscientiousness or predict how they represented themselves on either IAT measure.

16.
Because faking poses a threat to the validity of personality measures, research has focused on ways of detecting faking, including the use of response times. However, the applicability and validity of these approaches are dependent upon the actual cognitive process underlying faking. This study tested three competing cognitive models in order to identify the process underlying faking and to determine whether response time patterns are a viable method of detecting faking. Specifically, we used a within-subjects manipulation of instructions (respond honestly, make a good impression, make a specific impression) to examine whether the distribution of response times across response scale options (e.g., disagree, agree) could be used to identify faking on the NEO PI-R. Our results suggest that individuals reference a schema of an ideal respondent when faking. As a result, response time patterns such as the well-known inverted-U cannot be used to identify faking.

17.
Applicants may be willing to fake in job interviews with the aim of creating a positive impression. In two vignette-based experiments, we examined if a competitive versus noncompetitive climate (Study 1) and hiring situation (Study 2) increased participants' willingness to fake. We also examined if Honesty–Humility and Competitive Worldviews moderated the relation between willingness to fake and how competitive participants believed they must be in order to secure the job. Results demonstrated that a competitive climate and hiring situation increased willingness to fake. Honesty–Humility and Competitive Worldviews were related to willingness to fake, but these relations did not change substantially at different levels of perceived need for competitiveness. Overall, results lend some theoretical support to propositions about applicant faking.

18.
Although personality tests are widely used to select applicants for a variety of jobs, there is concern that such measures are fakable. One procedure used to minimize faking has been to disguise the true intent of personality tests by randomizing items such that items measuring similar constructs are dispersed throughout the test. In this study, we examined whether item placement influences the fakability and psychometric properties of a personality measure. Study participants responded to 1 of 2 formats (random vs. grouped items) of a personality test honestly and also under instructions to fake or to behave like an applicant. Results indicate that the grouped item placement format was more fakable for the Neuroticism and Conscientiousness scales. The test with items randomly placed fit the data better within the honest and applicant conditions. These findings demonstrate that the issue of item placement should be seriously considered before administering personality measures because different item presentations may affect the incidence of faking and the psychometric properties of the measure.

19.
Two experiments examined recent claims of uncontrollability of the evaluative-priming effect. According to these claims, imposing an adaptive 600 ms response deadline prevents successful faking (Degner, 2009). Furthermore, strategic control attempts have been argued not to reduce the priming measure's sensitivity to spontaneous evaluations so that correlations of evaluative-priming effects with external criteria are not affected by attempts to fake (Bar-Anan, 2010). Here, we show that faking is possible even with an adaptive 600 ms response deadline when faking instructions do not conflict with speed pressures imposed thereby (Experiments 1 and 2). In addition, suitable faking instructions substantially affect the predictive validity of priming effects in terms of their correlations with (non-faked) self-report measures and the Implicit Association Test (Experiment 2). The previous claims about the uncontrollability of the evaluative-priming effect may thus have been premature.

20.
Faking on personality tests in occupational selection contexts
In occupational selection settings, test takers can easily fake on personality tests, which limits the use of personality tests in organizations. Many researchers have worked to address the faking problem, examining in depth whether applicants fake, the negative effects of faking on personality tests, how applicants fake, and how to counter faking. Over several decades of development, this research area has produced several distinct research paradigms, including experimentally induced-faking designs, known-groups designs, and scale-based designs. Findings indicate that most applicants do fake, but the negative effects are not severe; faking differs from socially desirable responding and is better understood as job-desirable responding. The current methods for countering faking still have shortcomings, and their effectiveness needs to be improved. In sum, faking on personality tests has clear effects, is difficult to study, and awaits innovative theories and methods.
