Similar Articles (20 results)
1.
Methods for Controlling Faking in Personality Tests
Test takers can easily fake on personality tests, which seriously undermines the validity of these instruments. Assessment experts have proposed a number of methods to counter faking, which fall into two broad categories: prevention techniques applied before or during testing and detection techniques applied afterward. The former include forced-choice scales, warnings, and the bogus pipeline technique; the latter include faking-detection scales, IRT-based detection, and response-time-based detection. At present, embedding a faking-detection scale within the personality test and adding a warning to the test instructions are two of the more effective approaches, and the continued development of forced-choice scales is also promising. Because researchers still know relatively little about the internal mechanisms by which faking occurs, the development of IRT- and response-time-based detection techniques remains constrained.

2.
A situational judgment test (SJT) and a Big 5 personality test were administered to 203 participants under instructions to respond honestly and to fake good, using a within-subjects design. Participants indicated both the best and worst response (i.e., Knowledge) and the most likely and least likely response (i.e., Behavioral Tendency) to each situation. The faking effect size for the SJT Behavioral Tendency response format was d=.34 when participants responded first under honest instructions and d=.15 when they responded first under faking instructions. Effect sizes for the Big 5 dimensions ranged from d=.26 to d=1.0. For the Knowledge response format, results were inconsistent. Honest-condition Knowledge SJT scores were more highly correlated with cognitive ability (r=.56) than were Behavioral Tendency SJT scores (r=.38). Implications for researchers and practitioners are discussed.
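For readers who want to see how a within-subjects faking effect size such as d=.34 is typically obtained, the following is a minimal sketch and not the authors' analysis; the example scores and the choice of the pooled standard deviation as the standardizer are assumptions, since other standardizers (e.g., the honest-condition SD) are also common.

import numpy as np

def faking_effect_size(honest, faked):
    """Standardized mean difference between faking- and honest-condition
    scores for the same respondents (within-subjects design).

    Uses the pooled SD of the two conditions as the standardizer; this is
    one common choice, not necessarily the one used in the study above."""
    honest = np.asarray(honest, dtype=float)
    faked = np.asarray(faked, dtype=float)
    mean_diff = faked.mean() - honest.mean()
    pooled_sd = np.sqrt((honest.var(ddof=1) + faked.var(ddof=1)) / 2)
    return mean_diff / pooled_sd

# Hypothetical example: five respondents rated under both instruction sets.
honest_scores = [3.2, 2.8, 3.5, 3.0, 2.6]
faked_scores = [3.6, 3.1, 3.7, 3.4, 3.0]
print(round(faking_effect_size(honest_scores, faked_scores), 2))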

3.
Faking on personality assessments remains an unsolved issue, raising major concerns regarding their validity and fairness. Although there is a large body of quantitative research investigating the response process of faking on personality assessments, for both rating scales (RS) and multidimensional forced choice (MFC), so far only a few studies have qualitatively investigated faking cognitions when responding to MFC in a high-stakes context (e.g., Sass et al., 2020). Yet it could be argued that only when we have a process model that adequately describes response decisions in high-stakes settings can we begin to extract valid and useful information from assessments. Thus, this qualitative study investigated faking cognitions when responding to an MFC personality assessment in a high-stakes context. Through cognitive interviews with N = 32 participants, we explored and identified factors influencing test-takers' decisions regarding specific items and blocks, as well as factors influencing the willingness to engage in faking in general. Based on these findings, we propose a new response process model of faking forced-choice items, the Activate-Rank-Edit-Submit (A-R-E-S) model. We also make four recommendations for the practice of high-stakes assessments using MFC.

4.
Recent research suggests multidimensional forced-choice (MFC) response formats may provide resistance to purposeful response distortion on personality assessments. It remains unclear, however, whether these formats provide normative trait information required for selection contexts. The current research evaluated score correspondences between an MFC format measure and 2 Likert-type measures in honest and instructed-faking conditions. In honest response conditions, scores from the MFC measure appeared valid indicators of normative trait standing. Under faking conditions, the MFC measure showed less score inflation than the Likert measure at the group level of analysis. In the individual-level analyses, however, the MFC measure was as affected by faking as was the Likert measure. Results suggest the MFC format is not a viable method to control faking.

5.
Several faking theories have identified applicants' cognitive ability (CA) as a determinant of faking—the intentional distortion of answers by candidates—but the corresponding empirical findings in the area of personality tests are often ambiguous. Following the assumption that CA is important for faking, we expected applicants with high CA to show higher personality scores in selection situations, leading in this case to significant correlations between CA and personality scores, but not in nonselection situations. This meta-analysis (66 studies, k = 115 individual samples, N = 46,265) showed this pattern of results as well as moderation effects for the study design (laboratory vs. field), the response format of the personality test, and the type of CA test.

6.
The Narcissistic Personality Inventory (NPI) is one of the most popular measures of narcissism. However, its use of a forced-choice response set might negatively affect some of its psychometric properties. The purpose of this research was to compare a Likert version of the NPI, in which only the narcissistic response of each pair was given, to the original NPI, in 3 samples of participants (N = 1,109). To this end, we compared the nomological networks of the forced-choice and Likert formats of the NPI in relation to alternative measures of narcissism, narcissistic personality disorder, entitlement, self-esteem, general personality traits (reported by self and informants), interpersonal styles, and general pathological traits included in the DSM–5. The Likert format NPI—total and subscales—manifested similar construct validity to the original forced-choice format across all criteria with only minor differences that seem to be due mainly to the increased reliability and variability found in the Likert NPI Entitlement/Exploitativeness subscale. These results provide evidence that a version of the NPI that employs a Likert format can justifiably be used in place of the original.

7.
In a globalised world, more and more organisations have to select from pools of applicants from different cultures, often by using personality tests. If applicants from different cultures were to differ in the amount of faking on personality tests, this could threaten their validity: Applicants who engage in faking will have an advantage, and will put those who do not fake at a disadvantage. This is the first study to systematically examine and explain cross-cultural differences in actual faking behavior. In N = 3,678 employees from 43 countries, a scenario-based repeated measures design (faking vs. honest condition) was applied. Results showed that faking differed significantly across countries, and that it was systematically related to countries' cultural characteristics (e.g. GLOBE's uncertainty avoidance, future orientation, humane orientation, and in-group collectivism), but in an unexpected way. The study discusses these findings and their implications for research and practitioners.

8.

Purpose

Item response time (RT) latencies offer a potentially promising approach for measuring faking in personnel testing, but have been studied almost exclusively as either long or short RTs relative to group norms. As such, the ability to reliably assess faking RTs at the individual level remains a challenge. To address this issue, the present study set out to examine the usefulness of a within-person difference score index (DSI) method for measuring faking, in which “control question” (baseline) RTs were compared to “target question” RTs, within single test administrations.

Design/Methodology/Approach

Two hundred six participants were randomly assigned to simulated faking or honest testing conditions and were administered two types of integrity test items (overt and personality); group classification (faking/honest) served as the main dependent variable.

Findings

Faking-condition RTs were longer than honest-condition RTs for both item types (overt: d = .43; personality: d = .47), and overt item RTs were slightly shorter than personality item RTs in both testing conditions (honest: d = .34; faking: d = .41). Finally, using a sample cut score, the DSI correctly classified an average of 26% more cases of faking, and produced 53% fewer false positives, compared to the traditional normative method.

Implications

The results suggest that the DSI can be an advantageous method for identifying faking in personnel testing scenarios.

Originality/Value

This is one of the first studies to propose a practical method for identifying individual-level faking RTs within single test administrations.
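To make the difference score index (DSI) idea above concrete, here is a minimal illustrative sketch, not the authors' implementation: a respondent's baseline latencies on "control" items are compared with latencies on "target" items within the same administration, and the respondent is flagged when the difference exceeds a sample-derived cut score. The use of medians, the millisecond values, and the cut score are all assumptions introduced for the example.

import numpy as np

def rt_difference_score(control_rts, target_rts):
    """Within-person difference score: target-item RTs minus control-item
    RTs, each summarized by the median to dampen outlying latencies
    (an assumption, not necessarily the study's rule)."""
    return float(np.median(target_rts) - np.median(control_rts))

def flag_faking(control_rts, target_rts, cut_score):
    """Flag a respondent as a suspected faker when the difference score
    exceeds a sample-derived cut score supplied by the caller."""
    return rt_difference_score(control_rts, target_rts) > cut_score

# Hypothetical respondent: response times in milliseconds.
control = [820, 760, 910, 840, 790]      # baseline "control" items
target = [1150, 1240, 980, 1310, 1100]   # integrity "target" items
print(rt_difference_score(control, target))
print(flag_faking(control, target, cut_score=250.0))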

9.
《人类行为》2013,26(4):371-388
We evaluated the effects of faking on mean scores and correlations with self-reported counterproductive behavior of integrity-related personality items administered in single-stimulus and forced-choice formats. In laboratory studies, we found that respondents instructed to respond as if applying for a job scored higher than when given standard or "straight-take" instructions. The size of the mean shift was nearly a full standard deviation for the single-stimulus integrity measure, but less than one third of a standard deviation for the same items presented in a forced-choice format. The correlation between the personality questionnaire administered in the single-stimulus condition and self-reported workplace delinquency was much lower in the job applicant condition than in the straight-take condition, whereas the same items administered in the forced-choice condition maintained their substantial correlations with workplace delinquency.

10.
This article discusses 3 modern forced-choice personality tests developed for the U.S. Armed Services to provide resistance to faking and other forms of response distortion: the Assessment of Individual Motivation, the Navy Computerized Adaptive Personality Scales, and the Tailored Adaptive Personality Assessment System. These tests represent the transition from Likert to forced-choice formats and from static to computerized adaptive item selection to meet the challenges of large-scale, high-stakes testing environments. For each test, we briefly describe the personality constructs that are assessed, the response format and scoring methods, and selected ongoing research and development efforts. We also highlight the potential of these tests for personnel selection, classification, and diagnostic screening purposes.

11.
Recent research has highlighted competitive worldviews as a key predictor of faking—the intentional distortion of answers by candidates in the selection context. According to theoretical assumptions, applicants' abilities, and especially their cognitive abilities, should influence whether faking motivation, triggered by competitive worldviews, can be turned into successful faking behavior. Therefore, we examined the influence of competitive worldviews on faking in personality tests and investigated a possible moderation of this relationship by cognitive abilities in three independent high school and university student samples (N1 = 133, N2 = 137, N3 = 268). Our data showed neither an influence of the two variables nor of their interaction on faking behavior. We discuss possible reasons for these findings and give suggestions for further research.

12.
We examined the effects of coaching and speeding on personality scale scores in a faking context (N = 192). A completely crossed 2 × 2 experimental design was used in which instructions (no coaching or coaching) and speeding (with or without a time limit) were manipulated. No statistically significant effects on scale scores were evidenced for speeding. Coaching participants significantly elevated scores (average d = .76) for each of the Big Five personality factors but did not significantly elevate the scores on the Impression Management scale (d = .06). Cognitive ability was significantly positively related to impression management for uncoached participants but not for coached participants. An exploratory simulation suggests that coaching would have an effect on who would be selected for a job.

13.
This study presents a new method for developing faking detection scales based on idiosyncratic item-response patterns. Two scoring schemes based on this approach strongly differentiated between scores obtained under honest vs directed faking conditions in cross-validation samples (rpb=.45 and .67). This approach is shown to successfully classify between 20% and 37% of faked personality measures with only a 1% false positive rate in a sample comprised of 56% honest responses. Of equal importance, this method does not result in a scale that meaningfully correlates with personality or cognitive ability tests. This study raises many questions about both the source and generalizability of the effect. Key directions for future research and improved scale development that may limit or enhance the utility of the idiosyncratic item-response method are discussed.
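The classification results above (20-37% of fakers detected at a 1% false positive rate) rest on anchoring a cut score to the distribution of honest-condition scores. The sketch below illustrates only that general logic, under the assumption that higher scores on some detection scale indicate likelier faking; it is not the idiosyncratic item-response scoring scheme developed in the study, and the simulated score distributions are invented.

import numpy as np

def cutoff_for_false_positive_rate(honest_scores, fpr=0.01):
    """Choose the cut score as the (1 - fpr) quantile of honest-condition
    scores, so roughly fpr of honest respondents are flagged by design."""
    return float(np.quantile(honest_scores, 1 - fpr))

def detection_rate(faked_scores, cut):
    """Proportion of faking-condition respondents whose score exceeds the cut."""
    faked_scores = np.asarray(faked_scores, dtype=float)
    return float((faked_scores > cut).mean())

# Hypothetical simulated detection-scale scores: fakers shifted up by one SD.
rng = np.random.default_rng(0)
honest = rng.normal(0.0, 1.0, size=5000)
faked = rng.normal(1.0, 1.0, size=5000)
cut = cutoff_for_false_positive_rate(honest, fpr=0.01)
print(round(cut, 2), round(detection_rate(faked, cut), 2))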

14.
Extant research (e.g., Wilks et al. 2016; Williams et al. 2010) has shown personality to be a predictor of engagement in academic dishonesty. The current study seeks to determine whether the type of personality measure affects predictive efficacy by comparing single stimulus and forced-choice measures of personality using a sample of 278 undergraduate students in two U.S. universities. Students scoring high on conscientiousness reported engaging in fewer academic cheating behaviors than those scoring low on conscientiousness, regardless of whether conscientiousness was measured using the forced-choice or single stimulus scale format. In addition, the forced-choice and single stimulus measures each contributed significant unique variance to prediction of academic dishonesty. For agreeableness, scores on the single stimulus measure were negatively correlated with academic dishonesty whereas there was a positive relationship found for the forced-choice measure. Overall, the forced-choice format of the Occupational Personality Questionnaire 32r (OPQ32r) did not show higher validities than the single stimulus IPIP counterpart in predicting self-reported academic dishonesty. Implications for future research and management education are discussed.

15.
The present study evaluated the ability of item-level bifactor models (a) to provide an alternative explanation to current theories of higher order factors of personality and (b) to explain socially desirable responding in both job applicant and non-applicant contexts. Participants (46% male; mean age = 42 years, SD = 11) completed the 200-item HEXACO Personality Inventory-Revised either as part of a job application (n = 1613) or as part of low-stakes research (n = 1613). A comprehensive set of invariance tests was performed. Applicants scored higher than non-applicants on honesty-humility (d = 0.86), extraversion (d = 0.73), agreeableness (d = 1.06), and conscientiousness (d = 0.77). The bifactor model provided improved model fit relative to a standard correlated factor model, and loadings on the evaluative factor of the bifactor model were highly correlated with other indicators of item social desirability. The bifactor model explained approximately two-thirds of the differences between applicants and non-applicants. Results suggest that rather than being a higher order construct, the general factor of personality may be caused by an item-level evaluative process. Results highlight the importance of modelling data at the item level. Implications for conceptualizing social desirability, higher order structures in personality, test development, and job applicant faking are discussed.
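As a purely illustrative sketch of the bifactor account described above (not the authors' model, data, or estimation procedure), the simulation below lets every item load on a general evaluative factor plus its substantive trait factor and shifts applicants upward only on the evaluative factor; observed item scores then differ between groups even though the substantive traits do not. All loadings, sample sizes, and shift values are invented.

import numpy as np

rng = np.random.default_rng(1)
n, items_per_trait, n_traits = 1000, 5, 6   # invented dimensions
lam_general, lam_trait = 0.5, 0.6           # invented loadings

def simulate(evaluative_shift):
    """Generate item scores under an item-level bifactor structure:
    general evaluative factor + substantive trait factor + noise."""
    g = rng.normal(evaluative_shift, 1.0, size=(n, 1))     # evaluative factor
    traits = rng.normal(0.0, 1.0, size=(n, n_traits))      # substantive traits
    scores = []
    for t in range(n_traits):
        eps = rng.normal(0.0, 1.0, size=(n, items_per_trait))
        scores.append(lam_general * g + lam_trait * traits[:, [t]] + eps)
    return np.concatenate(scores, axis=1)

applicants = simulate(evaluative_shift=1.0)      # shifted only on the evaluative factor
non_applicants = simulate(evaluative_shift=0.0)

# Item-level mean difference in SD units, despite unshifted substantive traits.
d = (applicants.mean() - non_applicants.mean()) / non_applicants.std()
print(round(d, 2))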

16.
This research assessed whether warning subjects that faked responses could be detected would reduce the amount of faking that might occur when using a personality test for selection of police officers. Also, personality test subscales which best differentiated honest from dissimulated responses were determined. Subjects (N=120) were randomly assigned to a straight-take (that is, respond honestly), fake good, or modified fake good group. Both fake good groups were instructed to respond to the test so as to appear favourably for the job; additionally, the modified fake good group was warned that faking could be detected and could reduce hiring chances. Multivariate analyses revealed significant differences on the Denial and Deviation subscales between the three conditions (p < 0.01). The pattern of differences suggested that the threat of faking detection reduced faking. Potential application of these findings in personnel selection was discussed.

17.
《人类行为》2013,26(3):175-199
This study examined the effects of item format (single-stimulus vs. forced-choice) and response motivation (honest vs. applicant) on scores for personality scales measuring Conscientiousness and Openness to Experience. Consistent with the hypotheses, cognitive ability was related to forced-choice personality scores in the applicant condition but not in the honest condition. Cognitive ability was unrelated to single-stimulus personality scores in both the applicant and honest conditions. The results suggest that controlling for cognitive ability can reduce the incremental predictive validity of forced-choice personality scales in applicant settings. Findings are discussed in terms of the importance of considering how item format influences the construct and criterion-related validity of personality tests used to make selection decisions.

18.
A two-alternative forced-choice test, two putative malingering tests, and four neuropsychological tests were administered to 105 prison inmates (51 males and 54 females) and 108 university students (54 males and 54 females) in one of three conditions: naive faking, coached faking, and control. Six of the seven tests differentiated faking subjects from controls, but only the forced-choice test differentiated between naive and coached faking. Even though only 11% of the faking subjects performed below the level of chance on the forced-choice test, this test was more sensitive than other tests in distinguishing between faking subjects and controls. The putative malingering tests were the least sensitive measures. The most salient difference between inmates and students was that faking inmates did not respond to a bogus difficulty manipulation in the forced-choice test. The results indicate that the forced-choice method is a sensitive means of detecting dishonest performance even when scores do not fall below chance.

19.
Many companies recruit employees from different parts of the globe, and faking behavior by potential employees is a ubiquitous phenomenon. It seems that applicants from some countries are more prone to faking compared to others, but the reasons for these differences are largely unexplored. This study relates country-level economic variables to faking behavior in hiring processes. In a cross-national study across 20 countries, participants (N = 3,839) reported their faking behavior in their last job interview. This study used the randomized response technique (RRT) to ensure participants' anonymity and to foster honest answers regarding faking behavior. Results indicate that general economic indicators (gross domestic product per capita [GDP] and unemployment rate) show negligible correlations with faking across the countries, whereas economic inequality is positively and substantially related to the extent of applicant faking. These findings imply that people are sensitive to inequality within countries and that inequality relates to faking because inequality might actuate other psychological processes (e.g., envy), which in turn increase the probability of unethical behavior in many forms.
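Because the abstract does not spell out which randomized response design was used, the sketch below assumes a standard forced-response variant purely for illustration: with probability p the randomizing device instructs an honest answer, and otherwise forces a "yes" or "no" with equal probability, so the prevalence of faking can be back-calculated from the observed proportion of "yes" answers. The value p = 0.75 and the example proportion are assumptions, not the study's actual procedure.

def rrt_prevalence(yes_proportion, p_honest=0.75):
    """Estimate the true prevalence of a sensitive behavior (e.g., faking)
    under a forced-response randomized response design.

    p_honest: probability that the randomizing device tells the respondent
    to answer truthfully; the remaining probability is split evenly between
    forced 'yes' and forced 'no' answers."""
    forced_yes = (1.0 - p_honest) / 2.0
    return (yes_proportion - forced_yes) / p_honest

# Hypothetical: 40% of respondents in one country sample answered "yes".
print(round(rrt_prevalence(0.40), 2))   # estimated prevalence of faking ~ 0.37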

20.
Traditional scoring of forced-choice tests produces ipsative data, which cannot be subjected to conventional reliability and validity analyses, factor analysis, or analysis of variance. In recent years, researchers have proposed scoring models based on item response theory, such as the Thurstonian IRT model and the MUPP model, which avoid the drawbacks of ipsative data. The Thurstonian IRT model allows convenient parameter estimation and flexible model specification, whereas the MUPP model is less extensible and its parameter estimation methods still need improvement. On the other hand, researchers have already developed faking-resistant forced-choice tests based on the MUPP model, while the Thurstonian IRT model is still far from this kind of application. In addition, the applicability and validity of both models await further empirical testing.
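As a rough illustration of how a Thurstonian IRT model scores a single forced-choice comparison (a sketch of the general model form rather than of any particular test or of the models reviewed above), the probability of preferring item i over item k can be written as a normal-ogive function of the difference between the two items' latent utilities, scaled by their uniquenesses. All parameter values below are invented.

from math import erf, sqrt

def normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def prob_prefer_i_over_k(theta_i, theta_k, lam_i, lam_k, psi2_i, psi2_k, gamma_ik=0.0):
    """Probability of choosing item i over item k in a forced-choice pair under
    a Thurstonian-IRT-style model: the difference between the items' latent
    utilities (factor loading times trait score) is compared against a pair
    threshold gamma_ik and scaled by the items' uniquenesses (psi2)."""
    numerator = -gamma_ik + lam_i * theta_i - lam_k * theta_k
    denominator = sqrt(psi2_i + psi2_k)
    return normal_cdf(numerator / denominator)

# Hypothetical pair: item i taps Conscientiousness, item k taps Extraversion.
print(round(prob_prefer_i_over_k(theta_i=1.0, theta_k=0.0,
                                 lam_i=0.8, lam_k=0.7,
                                 psi2_i=0.36, psi2_k=0.51), 2))  # about 0.80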
