Similar Articles
20 similar articles found.
1.
To identify faking, bifactor models were applied to Big Five personality data in three studies of laboratory and applicant samples using within-subjects designs. The models were applied to homogeneous data sets from separate honest, instructed-faking, and applicant conditions, and to simulated applicant data sets containing random individual responses from the honest and faking conditions. Factor scores on the general factor in a bifactor model were most highly related to response condition in both types of data sets. Across studies, domain factor scores from the faking conditions were less affected by faking in measuring the Big Five domains than were summated scale scores. We conclude that bifactor models are efficacious in assessing the Big Five domains while controlling for faking.
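As a point of reference for the bifactor approach described above, the standard bifactor measurement model (a generic sketch, not necessarily the authors' exact specification) lets each personality item load on one general factor and one of several orthogonal domain factors:

$$ x_{ij} \;=\; \lambda^{g}_{j}\,\eta_{g,i} \;+\; \lambda^{d}_{j}\,\eta_{d(j),i} \;+\; \varepsilon_{ij}, \qquad \operatorname{Cov}\!\left(\eta_{g},\eta_{d}\right)=0, $$

where $x_{ij}$ is person $i$'s response to item $j$, $\eta_{g}$ is the general factor, $\eta_{d(j)}$ is the Big Five domain factor to which item $j$ belongs, and the $\lambda$'s are loadings. In the studies above, scores on the general factor tracked the honest-versus-faking response condition, while the domain factor scores were less affected by faking.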

2.
Job applicant faking, that is, consciously misrepresenting information during the selection process, is ubiquitous and is a threat to the usefulness of various selection tools. Understanding antecedents of faking is thus of utmost importance. Recent theories of faking highlight the central role of various forms of competition for understanding why faking occurs. Drawing on these theories, we suggest that the more applicants adhere to competitive worldviews (CWs), that is, the more they believe that the social world is a competitive, Darwinian-type struggle over scarce resources, the more likely they are to fake in employment interviews. We tested our hypothesis in three independent studies that were conducted in five different countries. Results show that CWs are strongly associated with faking, independently of job applicants' cultural and economic context. More specifically, applicants' CWs explain faking intentions and self-reported past faking above and beyond the Dark Triad of personality (Study 1), and above and beyond competitiveness and the six facets of conscientiousness (Study 2). Also, when faking is measured using a response randomisation technique to control for social desirability, faking is more prevalent among applicants with strong vs. less strong CWs (Study 3). Taken together, this research demonstrates that competition is indeed strongly associated with undesirable applicant behaviors.

3.
Recent studies have pointed to within-subjects designs as an especially effective tool for gauging the occurrence of faking behavior in applicant samples. The current study utilized a within-subjects design and data from a sample of job applicants to compare estimates of faking based on within-subjects score change to estimates based on a social desirability scale. In addition, we examined the impact of faking on the relationship between Conscientiousness and counterproductive work behaviors (CWBs), as well as the direct linkage between faking and CWBs. Our results suggest that social desirability scales are poor indicators of within-subjects score change, and that applicant faking is related to CWBs and has a negative impact on the criterion-related validity of Conscientiousness as a predictor of CWBs.

4.
The present research tested a model that integrated the theory of planned behavior (TPB) with a model of faking presented by McFarland and Ryan (2000) to predict faking on a personality test. In Study 1, the TPB explained sizable variance in the intention to fake. In Study 2, the TPB explained both the intention to fake and actual faking behavior. Different faking measures (i.e., difference scores and social desirability scales) tended to yield similar conclusions, but the difference scores were more strongly related to the variables in the model. These results provide support for a model that may increase understanding of applicant faking behavior and suggest reasons for the discrepancies in past research regarding the prevalence and consequences of faking.
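For reference, the difference-score faking index mentioned above is typically a simple per-respondent change between conditions (a generic formulation; the exact scoring used in these studies may differ):

$$ \Delta_i \;=\; x_i^{\text{faking}} - x_i^{\text{honest}}, $$

with larger positive values indicating greater score inflation under faking or applicant instructions.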

5.
Although personality tests are widely used to select applicants for a variety of jobs, there is concern that such measures are fakable. One procedure used to minimize faking has been to disguise the true intent of personality tests by randomizing items such that items measuring similar constructs are dispersed throughout the test. In this study, we examined whether item placement influences the fakability and psychometric properties of a personality measure. Study participants responded to 1 of 2 formats (random vs. grouped items) of a personality test honestly and also under instructions to fake or to behave like an applicant. Results indicate that the grouped item placement format was more fakable for the Neuroticism and Conscientiousness scales. The test with items randomly placed fit the data better within the honest and applicant conditions. These findings demonstrate that the issue of item placement should be seriously considered before administering personality measures because different item presentations may affect the incidence of faking and the psychometric properties of the measure.

6.

Purpose

The purpose of this study was to examine the prevalence of applicant faking and its impact on the psychometric properties of the selection measure, the quality of hiring decisions, and employee performance.

Design/Methodology/Approach

This study utilized a within-subjects design where responses on a self-report measure were obtained for 162 individuals both when they applied for a pharmaceutical sales position, and after they were hired. Training performance data was collected at the completion of sales training and sales data was collected 5 months later.

Findings

Applicant faking was a common occurrence, with approximately half of the individuals being classified as fakers on at least one of the dimensions contained in the self-report measure. In addition, faking was found to negatively impact the psychometric properties of the selection measure, as well as the quality of potential hiring decisions made by the organization. Further, fakers exhibited lower levels of performance than non-fakers.

Implications

These findings indicate that past conclusions that applicant faking is either uncommon or does not negatively impact the selection system and/or organizational performance may be unwarranted.

Originality/Value

Remarkably few studies have examined applicant faking using a within-subjects design with actual job applicants, which has limited our understanding of applicant faking. Even fewer studies have attempted to link faking to criterion data to evaluate the impact of faking on employee performance. By utilizing this design and setting, the present study provides a unique glimpse into both the prevalence of faking and the significant impact faking can have on organizations.

7.
Despite its scientific and practical importance, relatively few studies have investigated the relationship between job applicants' mental abilities and faking. Some studies suggest that more intelligent people fake less because they do not have to. Other studies suggest that more intelligent people fake more because they have increased capacity to fake. Based on a model of faking likelihood, we predicted that job candidates with a high level of mental abilities would be less likely to fake a biodata measure. However, for candidates who did exhibit faking on the biodata measure, we expected a strong positive relationship between mental abilities and faking, because mental abilities increase the capacity to fake. We found considerable support for our hypotheses in a large sample of job candidates (N = 17,368), using the bogus-item technique to detect faking.
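To illustrate the bogus-item technique mentioned above, here is a minimal sketch in Python; the item names, response coding, and one-endorsement threshold are assumptions for illustration, not the study's actual procedure.

```python
# Sketch of the bogus-item technique: bogus items describe nonexistent tasks,
# tools, or experiences, so endorsing them signals exaggeration.
import pandas as pd

BOGUS_ITEMS = ["operated_fictitious_scanner", "certified_in_nonexistent_method"]

def flag_bogus_endorsers(responses: pd.DataFrame, threshold: int = 1) -> pd.Series:
    """Return a boolean flag per respondent: True if the respondent endorsed
    at least `threshold` bogus biodata items (treated here as faking)."""
    endorsed = (responses[BOGUS_ITEMS] == 1).sum(axis=1)
    return endorsed >= threshold

# Toy usage (1 = claimed the experience, 0 = did not).
toy = pd.DataFrame({
    "operated_fictitious_scanner": [0, 1, 0],
    "certified_in_nonexistent_method": [0, 1, 1],
})
print(flag_bogus_endorsers(toy))  # False, True, True
```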

8.
《人类行为》, 2013, 26(4): 371-388
We evaluated the effects of faking on mean scores and correlations with self-reported counterproductive behavior of integrity-related personality items administered in single-stimulus and forced-choice formats. In laboratory studies, we found that respondents instructed to respond as if applying for a job scored higher than when given standard or "straight-take" instructions. The size of the mean shift was nearly a full standard deviation for the single-stimulus integrity measure, but less than one third of a standard deviation for the same items presented in a forced-choice format. The correlation between the personality questionnaire administered in the single-stimulus condition and self-reported workplace delinquency was much lower in the job applicant condition than in the straight-take condition, whereas the same items administered in the forced-choice condition maintained their substantial correlations with workplace delinquency.

9.
Many applicants fake in interviews to present themselves more favorably than they really are. There is widespread concern that this may affect interview validity. As previous research on countermeasures is sparse, we conducted an exploratory study to identify the most promising countermeasures. For technology-mediated interviews, these were warnings referring to a criterion-based content analysis and lie-detection algorithms focusing on nonverbal or paraverbal cues. For face-to-face interviews, these were objective questions and a personable interviewer. We then investigated the effects of these countermeasures on faking intentions in two experimental vignette studies and on faking in another simulated interview study. However, none of the countermeasures reduced faking intentions or faking. Additionally, in the vignette studies, warnings impaired applicant reactions.

10.
There are discrepant findings in the literature regarding the effects of applicant faking on the validity of noncognitive measures. One explanation for these mixed results may be the failure of some studies to consider individual differences in faking. This study demonstrates that there is considerable variance across individuals in the extent of faking on 3 types of noncognitive measures (i.e., a personality test, a biodata inventory, and an integrity test). Participants completed the measures honestly and with instructions to fake. Results indicated that some measures were more difficult to fake than others. The authors found that integrity, conscientiousness, and neuroticism were related to faking. In addition, individuals faked fairly consistently across the measures. Implications of these results and a model of faking that includes factors that may influence faking behavior are provided.

11.
Forced-choice format tests have been suggested as an alternative to Likert-scale measures for personnel selection because of their robustness to faking and response styles. This study compared the degree of faking occurring in Likert-scale and forced-choice five-factor personality tests between South Korea and the United States. We also examined whether the forced-choice format was effective at reducing faking in both countries. Data were collected from 396 incumbents participating in both honest and applicant conditions (South Korea: N = 179; United States: N = 217). Cohen's d values for within-subjects designs (d_within) between the two conditions were used to measure the magnitude of faking occurring in each format and country. In both countries, the degree of faking occurring in the Likert-scale format was larger than in the forced-choice format, and the magnitudes of faking across the five personality traits were larger in South Korea by 0.07 to 0.12 in d_within. The forced-choice format appeared to successfully reduce faking in both countries, as the average d_within decreased by 0.06 in both countries. However, the patterns of faking occurring in the forced-choice format varied between the two countries. In South Korea, the degree of faking in Openness and Conscientiousness increased, whereas faking in Extraversion and Agreeableness substantially decreased. Potential factors leading to trait-specific faking under the forced-choice format were discussed in relation to cultural influences on the perception of personality traits and score estimation in Thurstonian item response theory (IRT) models. Finally, the adverse impact of using forced-choice formats in multicultural selection settings was elaborated.
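For reference, a common formulation of the within-subjects effect size reported above standardizes the mean honest-to-applicant change by the standard deviation of the individual difference scores (the study's exact estimator may differ):

$$ d_{\text{within}} \;=\; \frac{\bar{X}_{\text{applicant}} - \bar{X}_{\text{honest}}}{SD_{D}}, \qquad D_i = X_{i,\text{applicant}} - X_{i,\text{honest}}, $$

so the values cited above are expressed in units of the standard deviation of within-person change.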

12.
Three measures of response distortion (i.e., social desirability, a covariance index, and implausible answers) were examined in both applicant and incumbent samples. Performance data, including supervisor ratings of task and contextual performance as well as objective performance criteria such as tardiness, work-related accidents, and a customized work simulation, were obtained for the incumbent sample. Results provided further support for the existence of applicant faking behavior and shed light on the relationship between faking and job performance, which depends largely on how one defines and measures faking as well as on the performance criteria evaluated. Implications for future research and practice in personality assessment for selection purposes were discussed.

13.
We evaluated the validity of the Overclaiming Questionnaire (OCQ) as a measure of job applicants' faking of personality tests. We assessed whether the OCQ (a) converged with an established measure of applicant faking, Residualized Individual Change Scores (RICSs); (b) predicted admission of faking and faking tendencies (Faking Frequency, Minimizing Weaknesses, Exaggerating Strengths, and Complete Misrepresentation); and (c) predicted the aforementioned measures as strongly as RICSs did. First, 261 participants were instructed to respond honestly to an extraversion measure. Next, in a mock job application, they filled out the extraversion measure again, as well as the OCQ. The OCQ only weakly predicted RICSs (r = .17), Faking Admission (r = .18), and Faking Frequency (r = .15), and it failed to correlate significantly with Minimizing Weaknesses, Exaggerating Strengths, and Complete Misrepresentation. Moreover, the OCQ performed significantly worse than RICSs in predicting Faking Admission, Faking Frequency, Minimizing Weaknesses, Exaggerating Strengths, and Complete Misrepresentation. We urge caution in using the current version of the OCQ to measure faking, but speculate that the innovative approach taken in the OCQ might be more effectively exploited if the OCQ content were tailored to the specific job that applicants are being tested for.
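For readers unfamiliar with residualized individual change scores, a minimal sketch follows; the simple OLS setup and variable names are illustrative assumptions, not the authors' exact procedure.

```python
# Residualized individual change scores (RICSs): regress the applicant-condition
# score on the honest-condition score and keep each person's residual as the
# faking estimate (positive = scored higher than the baseline would predict).
import numpy as np

def residualized_change(honest: np.ndarray, applicant: np.ndarray) -> np.ndarray:
    """Return per-person residuals from an OLS regression of applicant-condition
    scores on honest-condition scores."""
    X = np.column_stack([np.ones_like(honest, dtype=float), honest])
    beta, *_ = np.linalg.lstsq(X, applicant, rcond=None)
    return applicant - X @ beta

# Toy usage: the second respondent inflates their score most relative to baseline.
honest = np.array([3.0, 3.0, 4.0, 2.5])
applicant = np.array([3.2, 4.5, 4.1, 2.6])
print(residualized_change(honest, applicant))
```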

14.
The self-presentation tactics of candidates during job interviews and on personality inventories have been a focal topic in selection research. The current study investigated self-presentation across these two selection devices. Specifically, we examined whether candidates who use impression management (IM) tactics during an interview show more faking on a personality inventory and whether the relation to job performance is similar for both forms of self-presentation. Data were collected in a simulated selection process with an interview under applicant conditions and a personality inventory that was administered under applicant conditions and thereafter for research purposes. Because all participants were employed, we were also able to collect job performance ratings from their supervisors. Candidates who used IM in the interview also showed more faking in a personality inventory. Importantly, faking was positively related to supervisors' job performance ratings, but IM was unrelated. Hence, this study gives rise to arguments for a more balanced view of self-presentation.

15.
Faking may affect hiring decisions in personnel selection, and not all antecedents of faking are yet known. The present study investigates the association between applicants' reactions to the selection procedure and their tendency to fake. The subjects (N = 180) were real-life applicants for a fire and rescue personnel school. After completing the selection process, the applicants filled out a questionnaire about their test reactions (Chan, Schmitt, Sacco & DeShon, 1998b) and a faking scale, the Balanced Inventory of Desirable Responding (Paulhus, 1991). The results, based on structural equation modelling (SEM), indicated that the more positive the applicants' reactions to the selection procedure, the more impression management they showed. Applicant reactions were not associated with self-deception.

16.
The potential for applicant response distortion on personality measures remains a major concern in high-stakes testing situations. Many approaches to understanding response distortion are either too transparent (e.g., instructed faking studies) or too subtle (e.g., correlations with social desirability measures as indices of faking). Recent research reveals more promising approaches in two methods: using forced-choice (FC) personality test items and warning against faking. The present study examined the effects of these two methods on criterion-related validity and test-taker reactions. Results supported incremental validity for an FC and a Likert-scale measure in warning and no-warning conditions, above and beyond cognitive ability. No clear differences emerged between the FC and Likert measures or between the warning and no-warning conditions in terms of validity. However, some evidence suggested that FC measures and warnings may produce negative test-taker reactions. We conclude with implications for implementation in selection settings.

17.
For practitioners, the possibility of faking on personality tests has potential implications that are much broader than those captured by current theoretical debates over criterion-related validity, factor structure, or psychological processes. One unexplored potential impact of response distortion involves the pass rates associated with applying cutoff scores developed using a concurrent validation design to applicant samples. This practitioner-oriented paper compared applicant and incumbent scores on three personality dimensions and uncovered significant standardized group differences. These differences greatly influenced pass rates for three different selection models, which affected the expected utility of the selection system. Potential solutions for practitioners are provided, along with recommendations for future research in this area. An earlier version of this paper was presented at the annual meeting of the Society for Industrial and Organizational Psychology in Orlando, Florida.

18.
Researchers have recently asserted that popular measures of response distortion (i.e., socially desirable responding scales) lack construct validity (i.e., they measure traits rather than test faking) and that applicant faking on personality tests remains a serious concern (Griffith & Peterson, 2008; Holden, 2008). Thus, although researchers and human resource (HR) selection specialists have been attempting to find measures that readily capture individual differences in faking and thereby increase personality test validity, to date such attempts have rarely, if ever, succeeded. The current study, however, finds that the overclaiming technique captures individual differences in faking and subsequently increases personality test score validity by suppressing unwanted error variance in personality test scores. Implications of this research on the overclaiming technique for improving HR selection decisions are illustrated and discussed.

19.
Although self-rated or self-scored selection measures are commonly used in selection contexts, they are potentially susceptible to applicant response distortion or faking. The response elaboration technique (RET), which requires job applicants to provide supporting information to justify their responses, has been identified as a potential way to minimize applicant response distortion. In a large-scale, high-stakes selection context (N = 16,304), we investigate the extent to which RET affects responding on a biodata test, as well as the underlying reasons for any potential effect. We find that asking job applicants to elaborate on their responses leads to lower overall scores on a biodata test. Item verifiability affects the extent to which RET decreases faking, which we suggest is due to increased accountability. In addition, verbal ability was more strongly related to biodata item scores when items required elaboration, although the effect of verbal ability was small. The implications of these findings for reducing faking in personnel selection are delineated.

20.
Recent research suggests multidimensional forced-choice (MFC) response formats may provide resistance to purposeful response distortion on personality assessments. It remains unclear, however, whether these formats provide the normative trait information required for selection contexts. The current research evaluated score correspondences between an MFC format measure and 2 Likert-type measures in honest and instructed-faking conditions. In honest response conditions, scores from the MFC measure appeared to be valid indicators of normative trait standing. Under faking conditions, the MFC measure showed less score inflation than the Likert measure at the group level of analysis. In the individual-level analyses, however, the MFC measure was as affected by faking as was the Likert measure. Results suggest the MFC format is not a viable method to control faking.
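As context for the normative-versus-ipsative concern raised above, the short sketch below (an illustration under assumed block structure, not part of the study) shows why classically scored forced-choice measures yield ipsative scores: ranking statements within blocks forces every respondent's trait scores to sum to the same constant, so model-based scoring such as Thurstonian IRT is what attempts to recover normative trait information.

```python
# Classical "sum of ranks" scoring of multidimensional forced-choice blocks.
import numpy as np

rng = np.random.default_rng(0)
n_blocks, n_traits = 10, 5          # each block pits one statement per trait
# Simulated respondent: within each block the five statements are ranked 1..5.
ranks = np.array([rng.permutation(np.arange(1, n_traits + 1)) for _ in range(n_blocks)])
trait_scores = ranks.sum(axis=0)
print(trait_scores, trait_scores.sum())  # the total is always n_blocks * 15 = 150
```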
