Similar Articles
 20 similar articles found (search time: 78 ms)
1.
Intentional response distortion, or faking, among job applicants completing measures such as personality and integrity tests is a concern in personnel selection. The present study aimed to investigate whether eye-tracking technology can improve our understanding of the response process when faking. In an experimental within-participants design, a Big Five personality test and an integrity measure were administered to 129 university students in 2 conditions: a respond-honestly and a fake-good instruction. Item responses, response latencies, and eye movements were measured. Results demonstrated that all personality dimensions were fakeable. In support of the theoretical position that faking involves a less cognitively demanding process than responding honestly, we found that response times were on average 0.25 s faster and participants had fewer eye fixations in the fake-good condition. However, in the fake-good condition, participants had more fixations on the 2 extreme response options of the 5-point answering scale, and they fixated on these more directly after having read the question. These findings support the idea that faking leads to semantic rather than self-referenced item interpretations. Eye-tracking was demonstrated to be potentially useful in detecting faking behavior, improving detection rates over and above response extremity and latency metrics.
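A minimal sketch (not from the study; all data and names below are simulated and hypothetical) of the two non-eye-tracking baselines the abstract mentions, response extremity on a 5-point scale and mean response latency:

```python
# Minimal sketch of response-extremity and latency metrics; all data simulated.
import numpy as np

rng = np.random.default_rng(0)
n_respondents, n_items = 129, 100

# Simulated 5-point Likert responses and per-item response latencies (seconds).
responses = rng.integers(1, 6, size=(n_respondents, n_items))
latencies = rng.gamma(shape=2.0, scale=1.5, size=(n_respondents, n_items))

# Response extremity: proportion of answers in the two endpoint categories (1 or 5).
extremity = np.mean((responses == 1) | (responses == 5), axis=1)

# Latency metric: mean response time per respondent.
mean_latency = latencies.mean(axis=1)

print(extremity[:5].round(2), mean_latency[:5].round(2))
```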

2.
This study employed the Basic Personality Inventory (BPI) to differentiate various types of dissimulation, including malingered psychopathology and faking good, by inmates. In particular, the role of intelligence in utilizing symptom information to successfully malinger was examined. On admission to a correctional facility, 161 inmates completed the BPI under standard instructions and then again under instructions to fake good (n = 55) or to malinger psychotic (n = 35), posttraumatic stress disorder (n = 36), or somatoform (n = 35) psychopathology. Unlike symptom information, intelligence evidenced some support for increasing inmates' effectiveness in malingering, although there was no relationship between higher intelligence and using symptom information to successfully evade detection. Overall, the BPI was more effective in detecting malingered psychopathology than faking good. Implications for the detection of dissimulation in correctional and forensic settings are discussed.

3.
This study set out to examine the susceptibility of five extensively used self-report measures to response set bias. Subjects were requested either to fake good (give a good impression), fake bad (give a bad impression), fake mad (give an impression of mental instability), or respond honestly. Subjects who faked good had significantly higher Extraversion, Lie, and Social Desirability scores but lower Neuroticism, Psychoticism, and Social Anxiety scores. Subjects who faked bad had significantly lower Extraversion and higher Psychoticism and Social Anxiety scores. Fake-mad subjects scored higher on Self-Monitoring and Locus of Control. Four of the eight scales showed significant differences between subjects faking bad and those faking mad. The results are discussed in terms of questionnaire design and respondents' motivation.

4.
The faking-detection validity and incremental validity of response latencies to Minnesota Multiphasic Personality Inventory (MMPI) items were investigated using an analog research design. One hundred undergraduates were assigned at random to five groups; each group received different faking instructions (standard, fake good, fake bad, fake good with incentive, fake bad with incentive). All subjects completed a computer-administered version of the MMPI. Content-determined response deviance scores and latencies of responses to Subtle and Obvious scale items were determined for each subject. The principal findings suggest that response latencies may have greater faking-good detection ability than response deviance scores, and that response latencies have statistically significant incremental validity for the detection of both faking good and faking bad when latencies are used with response deviance scores obtained from the Subtle and Obvious scales.
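A hedged sketch of how incremental validity of latencies over deviance scores could be tested with a likelihood-ratio test between nested logistic regressions; the data are simulated and the analysis is only illustrative, not the study's actual procedure:

```python
# Illustrative incremental-validity check: does latency add to deviance scores
# in predicting faking? Simulated data, nested logistic models, LR test.
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2

rng = np.random.default_rng(1)
n = 100
deviance = rng.normal(size=n)                  # content-based response deviance score
latency = rng.normal(size=n) + 0.5 * deviance  # mean response latency
p_fake = 1 / (1 + np.exp(-(0.8 * deviance + 0.6 * latency)))
faking = rng.binomial(1, p_fake)               # 1 = instructed to fake

base = sm.Logit(faking, sm.add_constant(deviance)).fit(disp=0)
full = sm.Logit(faking, sm.add_constant(np.column_stack([deviance, latency]))).fit(disp=0)

# Incremental validity: chi-square test on the change in log-likelihood (1 df).
lr_stat = 2 * (full.llf - base.llf)
print("LR =", round(lr_stat, 2), "p =", round(chi2.sf(lr_stat, df=1), 4))
```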

5.
This study examines the behavior of the Millon Clinical Multiaxial Inventory-II (MCMI-II) in the face of various response styles and biases. The profiles and validity configurations of eight different test-taking styles were analyzed. Four hundred MCMI-II inventories (50 for each of the following categories) were administered or generated to produce the following: (a) normal endorsement by subjects, (b) fake good for administrative reasons, (c) fake good for clinical reasons, (d) fake bad administratively, (e) fake bad clinically, (f) 50% true/50% false computer generated, (g) 95% true computer generated, and (h) 95% false computer generated. Good statistical and clinically relevant separation of the profiles was found for the normal, fake-good, fake-bad, and randomly generated profiles, with 44% of the variance predicted. The percentage of profiles identified by the validity scales, however, was modest.

7.
Faking is a common problem in testing with self-report personality tests, especially in high-stakes situations. A possible way to correct for it is statistical control on the basis of social desirability scales. Two such scales were developed and applied in the present paper. It was stressed that statistical models of faking need to be adapted to the properties of the personality scales involved, since such scales correlate with faking to different extents. Correction for faking was investigated in four empirical studies of self-report personality tests. One of the studies was experimental and asked participants to fake or to be honest; in the other studies, job or school applicants were investigated. The approach advocated in the paper for correcting the effects of faking in self-report personality tests was found to remove a large share of those effects, about 90%. One study found that faking varied with how important the consequences of the test results were expected to be, with higher-stakes situations associated with more faking. The latter finding is incompatible with the claim that social desirability scales measure a general personality trait. It is concluded that faking can be measured and that correction for faking, based on such measures, can be expected to remove about 90% of its effects.
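The abstract does not give the correction formula; the following is a minimal sketch, under the assumption that the correction works by partialling a social desirability (SD) scale out of the observed trait score via regression. All data and variable names are simulated and hypothetical:

```python
# Regression-based faking correction (illustrative): remove the part of the
# observed score predictable from an SD scale, keeping the original mean.
import numpy as np

rng = np.random.default_rng(2)
n = 500
true_trait = rng.normal(size=n)
faking = np.abs(rng.normal(size=n))            # amount of faking per respondent
sd_scale = faking + 0.3 * rng.normal(size=n)   # SD scale largely reflects faking
observed = true_trait + faking + 0.3 * rng.normal(size=n)

slope, intercept = np.polyfit(sd_scale, observed, 1)
corrected = observed - slope * (sd_scale - sd_scale.mean())

print("r(observed, true) =", round(np.corrcoef(observed, true_trait)[0, 1], 2))
print("r(corrected, true) =", round(np.corrcoef(corrected, true_trait)[0, 1], 2))
```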

8.
This paper presents the results of three interrelated studies investigating the occurrence of response distortion on personality questionnaires within selection and the success of applicants in faking situations. In Study 1, comparison of the Big Five personality scores obtained from applicants in a military pilot cadet selection procedure with those of participants responding honestly, faking good, and faking an ideal candidate revealed that applicants responded more desirably than participants responding honestly but less desirably than respondents under fake instructions. The occurrence of faking within the military pilot selection process was replicated in Study 2 using the Eysenck Personality Questionnaire and another comparison group. Finally, in Study 3, comparison of personality profiles obtained in selection and ‘fake job’ situations with experts' estimates indicated that participants were partially successful in faking the desirable profile.

9.
Validity scales indicate the extent to which the results of a self-report inventory are a valid indicator of the test taker's psychological functioning. Validity scales are generally designed to detect the common response sets of positive impression management (underreporting, or faking good), negative impression management (overreporting, or faking bad), and random responding. The Revised NEO Personality Inventory (NEO-PI-R; Costa & McCrae, 1992b) is a popular personality assessment tool based on the 5-factor model of personality and is used in a variety of settings. The NEO-PI-R does not include objective validity scales to screen for positive or negative impression management. The purpose of this study was to examine the utility of recently proposed validity scales for detecting these response sets on the NEO-PI-R (Schinka, Kinder, & Kremer, 1997) and to examine the effects of positive and negative impression management on correlations between the NEO-PI-R and external criteria (the Interpersonal Adjective Scale-Revised-B5 [Wiggins & Trapnell, 1997] and the NEO-PI-R Form R). The validity scales discriminated with reasonable accuracy between standard responding and the 2 response sets. Additionally, most correlations between the NEO-PI-R and external criteria were significantly lower when participants were dissimulating than when responding to standard instructions. It appears that the response sets of positive and negative impression management may pose a significant threat to the external validity of the NEO-PI-R and that validity scales for their detection might be a useful addition to the inventory.

10.
In this study, we sought to explore the diagnostic accuracy of the Personality Assessment Inventory (PAI; Morey, 1991) validity scales (Negative Impression Management [NIM] and Positive Impression Management [PIM]) and indexes (the Malingering index and Defensiveness index [DEF], Morey, 1993, 1996; the Cashel Discriminant Function, Cashel, Rogers, Sewell, & Martin-Cannici, 1995; and the Rogers Discriminant Function [RDF], Rogers, Sewell, Morey, & Ustad, 1996) in identifying differences in profiles completed by psychiatric inpatients under standardized instructions (Time 1) and after random assignment (Time 2) to a fake good (n=21), fake bad (n=20), or retest (n=21) scenario. Repeated measures analysis of variance revealed a significant interaction effect. Whereas the retest group did not show any significant changes on the PAI variables from Time 1 to Time 2, both faking groups showed changes in the expected directions. Discriminant function analyses revealed that NIM, RDF, and lower scores on DEF best differentiated between the faking-bad and retest groups. PIM was the only nonredundant significant score discriminating the faking-good and retest groups. Cutoffs for these scales and indexes established in prior research were supported using diagnostic efficiency statistics. Results suggest that NIM and RDF in faking-bad scenarios and PIM in faking-good scenarios are most sensitive to unsophisticated attempts to dissimulate by psychiatric inpatients.
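For readers unfamiliar with the diagnostic efficiency statistics mentioned above, here is a small sketch (simulated scores, hypothetical cutoff) of how sensitivity, specificity, and predictive power are computed for a validity-scale cutoff:

```python
# Diagnostic efficiency statistics for a single cutoff on a validity scale.
import numpy as np

def diagnostic_efficiency(scores, is_faking, cutoff):
    flagged = scores >= cutoff
    tp = np.sum(flagged & is_faking)      # faked profiles correctly flagged
    fp = np.sum(flagged & ~is_faking)     # genuine profiles wrongly flagged
    fn = np.sum(~flagged & is_faking)
    tn = np.sum(~flagged & ~is_faking)
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "positive predictive power": tp / (tp + fp),
        "negative predictive power": tn / (tn + fn),
        "hit rate": (tp + tn) / len(scores),
    }

rng = np.random.default_rng(3)
is_faking = np.repeat([True, False], 50)
scores = np.where(is_faking, rng.normal(80, 10, 100), rng.normal(55, 10, 100))
print(diagnostic_efficiency(scores, is_faking, cutoff=70))
```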

11.
This study examined the extent to which the validity scales of the Minnesota Multiphasic Personality Inventory-2 (MMPI-2) could identify subjects who were faking bad or faking good and could differentiate between psychiatric patients and normal subjects who were faking bad. Subjects were 106 undergraduate college students and 50 psychiatric patients. Results indicated that the mean profiles and optimal cutoff scores resembled those previously reported for the original MMPI. Accurate identification of persons who were faking bad or faking good was achieved. It was possible to differentiate between the psychiatric patients and normal persons who were faking bad, but different cutoff scores were needed to differentiate between normals taking the test under standard instructions and those instructed to fake bad. Optimal cutoff scores were suggested.

13.
The present research tested a model that integrated the theory of planned behavior (TPB) with a model of faking presented by McFarland and Ryan (2000) to predict faking on a personality test. In Study 1, the TPB explained sizable variance in the intention to fake. In Study 2, the TPB explained both the intention to fake and actual faking behavior. Different faking measures (i.e., difference scores and social desirability scales) tended to yield similar conclusions, but the difference scores were more strongly related to the variables in the model. These results provide support for a model that may increase understanding of applicant faking behavior and suggest reasons for the discrepancies in past research regarding the prevalence and consequences of faking.

14.
A concern about personality inventories in diagnostic and decision-making contexts is that individuals will fake. Although there is extensive research on faking, little research has focused on how perceptions of personality items change when individuals are faking versus responding honestly. This research demonstrates how the delta parameter from the generalized graded unfolding model, an item response theory model, can be used to examine how individuals' perceptions of personality items might change when responding honestly or when faking. The results indicate that perceptions changed from honest to faking conditions for several neuroticism items. The direction of the change varied, indicating that faking can operate to increase or decrease scores within a personality factor.
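As a rough illustration of the model referred to above (not the article's code; item parameters here are invented), the sketch below implements the GGUM category response function and shows how shifting the item location parameter delta between an honest and a faking condition changes the expected item score:

```python
# GGUM category response function (Roberts, Donoghue, & Laughlin, 2000 form);
# parameter values are hypothetical, chosen only for illustration.
import numpy as np

def ggum_probs(theta, alpha, delta, taus):
    """Category probabilities for one GGUM item.
    taus holds tau_1..tau_C (tau_0 = 0 is implicit); C + 1 observable categories."""
    C = len(taus)
    M = 2 * C + 1
    cum_tau = np.cumsum(np.concatenate(([0.0], taus)))  # sum_{k=0}^{z} tau_k
    z = np.arange(C + 1)
    num = (np.exp(alpha * (z * (theta - delta) - cum_tau)) +
           np.exp(alpha * ((M - z) * (theta - delta) - cum_tau)))
    return num / num.sum()

alpha = 1.2
taus = np.array([-1.8, -1.2, -0.6, -0.2])  # 5-point item (hypothetical thresholds)
theta = 0.0                                # a respondent at the trait mean
for label, delta in [("honest", -0.5), ("faking", -1.5)]:
    p = ggum_probs(theta, alpha, delta, taus)
    print(label, "expected item score:", round(float(np.dot(np.arange(len(p)), p)), 2))
```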

15.
This research assessed whether warning subjects that faked responses could be detected would reduce the amount of faking that might occur when using a personality test for the selection of police officers. The personality test subscales that best differentiated honest from dissimulated responses were also determined. Subjects (N=120) were randomly assigned to a straight-take (that is, respond honestly), fake good, or modified fake good group. Both fake good groups were instructed to respond to the test so as to appear favourable for the job; additionally, the modified fake good group was warned that faking could be detected and could reduce hiring chances. Multivariate analyses revealed significant differences on the Denial and Deviation subscales between the three conditions (p < 0.01). The pattern of differences suggested that the threat of faking detection reduced faking. Potential applications of these findings in personnel selection are discussed.

16.
Researchers have recently asserted that popular measures of response distortion (i.e., socially desirable responding scales) lack construct validity (i.e., they measure traits rather than test faking) and that applicant faking on personality tests remains a serious concern (Griffith & Peterson, 2008; Holden, 2008). Thus, although researchers and human resource (HR) selection specialists have been attempting to find measures that readily capture individual differences in faking and thereby increase personality test validity, to date such attempts have rarely, if ever, succeeded. The current study, however, finds that the overclaiming technique captures individual differences in faking and subsequently increases personality test score validity by suppressing unwanted error variance in personality test scores. Implications of this research on the overclaiming technique for improving HR selection decisions are illustrated and discussed.
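The abstract does not spell out how the overclaiming technique is scored; a brief sketch follows, assuming the standard signal-detection scoring in which familiarity claims on real items give a hit rate, claims on nonexistent foils give a false-alarm rate, and accuracy (d') and bias (c) indices are derived from both. Item counts below are hypothetical:

```python
# Overclaiming indices via signal detection; the bias index is the one that
# reflects overclaiming (and, by extension, faking-like responding).
from scipy.stats import norm

def overclaiming_indices(claimed_real, claimed_foil, n_real, n_foil):
    # Log-linear correction keeps the rates away from exactly 0 and 1.
    hit = (claimed_real + 0.5) / (n_real + 1)
    fa = (claimed_foil + 0.5) / (n_foil + 1)
    d_prime = norm.ppf(hit) - norm.ppf(fa)          # accuracy
    c_bias = -0.5 * (norm.ppf(hit) + norm.ppf(fa))  # response bias (overclaiming)
    return d_prime, c_bias

# A respondent claiming familiarity with 14 of 15 real items and 4 of 5 foils.
print(overclaiming_indices(claimed_real=14, claimed_foil=4, n_real=15, n_foil=5))
```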

17.
Effects of the testing situation on item responding: cause for concern
The effects of faking on personality test scores have been studied previously by comparing (a) experimental groups instructed to fake or answer honestly, (b) subgroups created from a single sample of applicants or nonapplicants by using impression management scores, and (c) job applicants and nonapplicants. In this investigation, the latter 2 methods were used to study the effects of faking on the functioning of the items and scales of the Sixteen Personality Factor Questionnaire. A variety of item response theory methods were used to detect differential item/test functioning, interpreted as evidence of faking. The presence of differential item/test functioning across testing situations suggests that faking adversely affects the construct validity of personality scales and that it is problematic to study faking by comparing groups defined by impression management scores.
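The study itself uses item response theory methods; as a simpler stand-in (not the paper's analysis), this sketch computes the Mantel-Haenszel common odds ratio, a standard non-IRT index of differential item functioning, for one dichotomous item across applicant and nonapplicant groups matched on total score. Data are simulated:

```python
# Mantel-Haenszel DIF index for one dichotomous item, stratified by total score.
import numpy as np

def mantel_haenszel_dif(item, total, group):
    """item: 0/1 endorsements; total: matching score; group: 0 = reference, 1 = focal."""
    num, den = 0.0, 0.0
    for s in np.unique(total):
        m = total == s
        a = np.sum(m & (group == 0) & (item == 1))
        b = np.sum(m & (group == 0) & (item == 0))
        c = np.sum(m & (group == 1) & (item == 1))
        d = np.sum(m & (group == 1) & (item == 0))
        n = a + b + c + d
        if n > 0:
            num += a * d / n
            den += b * c / n
    alpha_mh = num / den                       # common odds ratio
    return alpha_mh, -2.35 * np.log(alpha_mh)  # ETS delta-scale MH D-DIF

rng = np.random.default_rng(4)
group = rng.integers(0, 2, size=400)           # 0 = nonapplicants, 1 = applicants
total = rng.integers(0, 11, size=400)          # matching (total) score
item = rng.binomial(1, 1 / (1 + np.exp(-((total - 5) / 2 + 0.7 * group))))
print(mantel_haenszel_dif(item, total, group))
```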

18.
There are discrepant findings in the literature regarding the effects of applicant faking on the validity of noncognitive measures. One explanation for these mixed results may be the failure of some studies to consider individual differences in faking. This study demonstrates that there is considerable variance across individuals in the extent of faking on 3 types of noncognitive measures (i.e., a personality test, a biodata inventory, and an integrity test). Participants completed the measures honestly and with instructions to fake. Results indicated that some measures were more difficult to fake than others. The authors found that integrity, conscientiousness, and neuroticism were related to faking. In addition, individuals faked fairly consistently across the measures. Implications of these results and a model of faking that includes factors that may influence faking behavior are provided.

19.
20.
Although personality tests are widely used to select applicants for a variety of jobs, there is concern that such measures are fakable. One procedure used to minimize faking has been to disguise the true intent of personality tests by randomizing items such that items measuring similar constructs are dispersed throughout the test. In this study, we examined whether item placement influences the fakability and psychometric properties of a personality measure. Study participants responded to 1 of 2 formats (random vs. grouped items) of a personality test honestly and also under instructions to fake or to behave like an applicant. Results indicate that the grouped item placement format was more fakable for the Neuroticism and Conscientiousness scales. The test with items randomly placed fit the data better within the honest and applicant conditions. These findings demonstrate that the issue of item placement should be seriously considered before administering personality measures, because different item presentations may affect the incidence of faking and the psychometric properties of the measure.

