Similar Articles
20 similar articles found.
1.
A novel assessment center (AC) structure that models broad dimension factors, exercise factors, and a general performance factor is proposed and supported in 4 independent samples of AC ratings. Consistent with prior research, the variance attributable to dimension and exercise factors varied widely across ACs. To investigate the construct validity of these empirically supported components of AC ratings, the nomological network of broad dimensions, exercises, and general performance was examined. Results supported the criterion-related validity of broad dimensions and exercises as predictors of effectiveness and success criteria as well as the incremental validity of broad dimensions beyond exercises and general performance. Finally, the relationships between individual differences and AC factors supported the construct validity of broad dimension factors and provided initial insight into the meaning of exercise-specific variance and general AC performance.
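The incremental-validity claim in this abstract is usually quantified as the change in R² between nested regression models. The sketch below is purely illustrative and is not the authors' analysis: it uses simulated data, hypothetical variable names (general, exercise1, dim_interpersonal, and so on), and Python's statsmodels to show how the increment for broad dimensions beyond exercises and general performance would be computed.

```python
# Minimal sketch (not the authors' analysis): incremental validity of broad
# dimension scores beyond exercise and general-performance scores, estimated
# as the change in R^2 across nested OLS models. All variable names and the
# simulated data are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "general":   rng.normal(size=n),          # general AC performance factor score
    "exercise1": rng.normal(size=n),          # exercise factor scores
    "exercise2": rng.normal(size=n),
    "dim_interpersonal": rng.normal(size=n),  # broad dimension factor scores
    "dim_problem_solving": rng.normal(size=n),
})
# Hypothetical criterion (e.g., rated job effectiveness).
df["criterion"] = (0.3 * df["general"] + 0.2 * df["exercise1"]
                   + 0.25 * df["dim_interpersonal"] + rng.normal(scale=0.8, size=n))

base = smf.ols("criterion ~ general + exercise1 + exercise2", data=df).fit()
full = smf.ols("criterion ~ general + exercise1 + exercise2 + "
               "dim_interpersonal + dim_problem_solving", data=df).fit()

delta_r2 = full.rsquared - base.rsquared  # incremental validity of broad dimensions
print(f"R^2 base = {base.rsquared:.3f}, R^2 full = {full.rsquared:.3f}, "
      f"Delta R^2 = {delta_r2:.3f}")
# Nested-model F test for the increment: returns (F statistic, p value, df diff).
print("F, p, df_diff:", full.compare_f_test(base))
```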

2.
Recent Monte Carlo research has illustrated that the traditional method for assessing the construct-related validity of assessment center (AC) post-exercise dimension ratings (PEDRs), an application of confirmatory factor analysis (CFA) to a multitrait-multimethod matrix, produces inconsistent results [Lance, C. E., Woehr, D. J., & Meade, A. W. (2007). Case study: A Monte Carlo investigation of assessment center construct validity models. Organizational Research Methods, 10, 430-448]. To avoid this shortcoming, a variance partitioning procedure was applied to the examination of the PEDRs of 193 individuals. Overall, results indicated that the person, dimension, and person by dimension interaction effects together accounted for approximately 32% of the total variance in AC ratings. However, despite no apparent exercise effect, the person by exercise interaction accounted for approximately 28% of the total variance. Although these results are drawn from a single AC, they nevertheless provide general support for the overall functioning of ACs and encourage continued application of variance partitioning approaches to AC research. Implications for AC design and research are discussed.
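To illustrate what variance partitioning of post-exercise dimension ratings involves, the following sketch estimates random-effects (generalizability-theory style) variance components for a fully crossed person x dimension x exercise design with one rating per cell. It uses simulated data with arbitrary component magnitudes and is not the article's exact procedure.

```python
# Illustrative sketch only: classic random-effects (generalizability-theory)
# variance partitioning for a fully crossed person x dimension x exercise
# design with one rating per cell. Not the article's exact procedure; the
# data are simulated and all magnitudes are arbitrary.
import numpy as np

rng = np.random.default_rng(1)
n_p, n_d, n_e = 193, 4, 5                      # persons, dimensions, exercises
person = rng.normal(0, 0.6, (n_p, 1, 1))       # person main effect
pxd    = rng.normal(0, 0.3, (n_p, n_d, 1))     # person x dimension interaction
pxe    = rng.normal(0, 0.7, (n_p, 1, n_e))     # person x exercise interaction
error  = rng.normal(0, 0.5, (n_p, n_d, n_e))
X = person + pxd + pxe + error                 # ratings array, shape (p, d, e)

grand = X.mean()
m_p, m_d, m_e = X.mean((1, 2)), X.mean((0, 2)), X.mean((0, 1))
m_pd, m_pe, m_de = X.mean(2), X.mean(1), X.mean(0)

# Mean squares for each effect (one observation per cell).
ms_p  = n_d * n_e * ((m_p - grand) ** 2).sum() / (n_p - 1)
ms_d  = n_p * n_e * ((m_d - grand) ** 2).sum() / (n_d - 1)
ms_e  = n_p * n_d * ((m_e - grand) ** 2).sum() / (n_e - 1)
ms_pd = n_e * ((m_pd - m_p[:, None] - m_d[None, :] + grand) ** 2).sum() / ((n_p - 1) * (n_d - 1))
ms_pe = n_d * ((m_pe - m_p[:, None] - m_e[None, :] + grand) ** 2).sum() / ((n_p - 1) * (n_e - 1))
ms_de = n_p * ((m_de - m_d[:, None] - m_e[None, :] + grand) ** 2).sum() / ((n_d - 1) * (n_e - 1))
resid = (X - m_pd[:, :, None] - m_pe[:, None, :] - m_de[None, :, :]
         + m_p[:, None, None] + m_d[None, :, None] + m_e[None, None, :] - grand)
ms_res = (resid ** 2).sum() / ((n_p - 1) * (n_d - 1) * (n_e - 1))

# Expected-mean-square solutions for the variance components (floored at 0).
vc = {
    "person":               (ms_p - ms_pd - ms_pe + ms_res) / (n_d * n_e),
    "dimension":            (ms_d - ms_pd - ms_de + ms_res) / (n_p * n_e),
    "exercise":             (ms_e - ms_pe - ms_de + ms_res) / (n_p * n_d),
    "person x dimension":   (ms_pd - ms_res) / n_e,
    "person x exercise":    (ms_pe - ms_res) / n_d,
    "dimension x exercise": (ms_de - ms_res) / n_p,
    "residual":             ms_res,
}
vc = {k: max(v, 0.0) for k, v in vc.items()}
total = sum(vc.values())
for k, v in vc.items():
    print(f"{k:>22}: {100 * v / total:5.1f}% of total variance")
```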

3.
To examine the appropriateness of a multitrait-multimethod framework for testing the construct validity of assessment centers (ACs) and to derive practical implications for improved AC design, this study investigated the degree to which AC dimension-related performance behaviors manifest consistently across multiple AC rating situations. A large sample (N = 5,006) was used to apply a measurement invariance analysis. AC rating situations generally produced consistent factor loadings of items on AC dimensions, item residuals, dimension factor variances, and covariances between dimensions. The interview rating situation tended to produce higher ratings and smaller item residuals. These findings support the consistency of the constructs assessed across different AC rating situations, although some exercises may be better than others at teasing apart particular dimensions.
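The constraints referred to here follow the standard nested sequence used in multigroup confirmatory factor analysis. The notation below is illustrative and not taken from the article, with i indexing items and g indexing rating situations:

```latex
% Illustrative nested invariance constraints across rating situations g
% (notation is mine, not the article's); i indexes items, \xi_{g} a
% dimension factor in situation g.
\begin{aligned}
\text{configural:} \quad & x_{ig} = \tau_{ig} + \lambda_{ig}\,\xi_{g} + \varepsilon_{ig} && \text{(same factor pattern in every situation)}\\
\text{metric:}     \quad & \lambda_{ig} = \lambda_{i} \quad \forall g && \text{(equal loadings)}\\
\text{residual:}   \quad & \operatorname{Var}(\varepsilon_{ig}) = \theta_{i} \quad \forall g && \text{(equal item residuals)}\\
\text{structural:} \quad & \operatorname{Var}(\xi_{g}) = \phi, \quad \operatorname{Cov}(\xi_{g}, \xi'_{g}) = \phi' \quad \forall g && \text{(equal dimension variances and covariances)}
\end{aligned}
```

Higher mean ratings in the interview situation would appear as intercept (tau) differences rather than as violations of these constraints.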

4.
Why Assessment Centers Do Not Work the Way They Are Supposed To
Assessment centers (ACs) are often designed with the intent of measuring a number of dimensions as they are assessed in various exercises, but after 25 years of research, it is now clear that AC ratings that are completed at the end of each exercise (commonly known as postexercise dimension ratings) substantially reflect the effects of the exercises in which they were completed and not the dimensions they were designed to reflect. This is the crux of the long-standing "construct validity problem" for AC ratings. I review the existing research on AC construct validity and conclude that (a) contrary to previous notions, AC candidate behavior is inherently cross-situationally (i.e., cross-exercise) specific, not cross-situationally consistent as was once thought, (b) assessors rather accurately assess candidate behavior, and (c) these facts should be recognized in the redesign of ACs toward task- or role-based ACs and away from traditional dimension-based ACs.

5.
We reanalyzed assessment center (AC) multitrait-multimethod (MTMM) matrices containing correlations among postexercise dimension ratings (PEDRs) reported by F. Lievens and J. M. Conway (2001). Unlike F. Lievens and J. M. Conway, who used a correlated dimension-correlated uniqueness model, we used a different set of confirmatory-factor-analysis-based models (1-dimension-correlated exercise and 1-dimension-correlated uniqueness models) to estimate dimension and exercise variance components in AC PEDRs. Results of the reanalyses suggest that, consistent with previous narrative reviews, exercise variance components dominate over dimension variance components after all. Implications for AC construct validity and possible redirections of research on the validity of ACs are discussed.

6.
The present study replicated and extended research concerning a recently proposed conceptual model of the underlying factors of dimension ratings in assessment centers (ACs), suggested by Hoffman, Melchers, Blair, Kleinmann, and Ladd, that includes broad dimension factors, exercise factors, and a general performance factor. We evaluated the criterion-related validity of these different components and expanded their nomological network. Results showed that all components (i.e., broad dimensions, exercises, general performance) were significant predictors of training performance. Furthermore, broad dimensions showed incremental validity beyond exercises and general performance. Finally, relationships between the AC factors and individual difference constructs (e.g., Big Five, core self-evaluations, positive and negative affectivity) supported the construct-related validity of broad dimensions and provided further insights into the nature of the different AC components.

7.
This study presents a simultaneous examination of multiple evidential bases of the validity of assessment center (AC) ratings. In particular, we combine both construct-related and criterion-related validation strategies in the same sample to determine the relative importance of exercises and dimensions. We examine the underlying structure of ACs in terms of exercise and dimension factors while directly linking these factors to a work-related criterion (salary). Results from an AC (N = 753) showed that exercise factors not only explained more variance in AC ratings than dimension factors but also were more important in predicting salary. Dimension factors explained a smaller albeit significant portion of the variance in AC ratings and had lower validity for predicting salary. The implications of these findings for AC theory, practice, and research are discussed.

8.
This study addresses 3 questions regarding assessment center construct validity: (a) Are assessment center ratings best thought of as reflecting dimension constructs (dimension model), exercises (exercise model), or a combination? (b) To what extent do dimensions or exercises account for variance? (c) Which design characteristics increase dimension variance? To this end, a large set of multitrait-multimethod studies (N = 34) was analyzed, showing that assessment center ratings were best represented (i.e., in terms of fit and admissible solutions) by a model with correlated dimensions and exercises specified as correlated uniquenesses. In this model, dimension variance equals exercise variance. Significantly more dimension variance was found when fewer dimensions were used and when assessors were psychologists. Use of behavioral checklists, a lower dimension-exercise ratio, and similar exercises also increased dimension variance.
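The winning model described here, correlated dimensions with exercise effects captured as correlated uniquenesses (CDCU), can be summarized as follows. The notation is illustrative rather than taken from the article, with X_{de} denoting the rating of dimension d obtained in exercise e:

```latex
% Illustrative CDCU specification (notation is mine, not the article's).
\begin{aligned}
X_{de} &= \lambda_{de}\,\xi_{d} + \varepsilon_{de},\\
\operatorname{Cov}(\xi_{d}, \xi_{d'}) &= \phi_{dd'} \quad \text{(correlated dimension factors)},\\
\operatorname{Cov}(\varepsilon_{de}, \varepsilon_{d'e}) &\neq 0 \quad \text{(correlated uniquenesses within the same exercise } e\text{)},\\
\operatorname{Cov}(\varepsilon_{de}, \varepsilon_{d'e'}) &= 0 \quad (e \neq e'),\\
\operatorname{Var}(X_{de}) &= \lambda_{de}^{2}\,\phi_{dd} + \theta_{de}.
\end{aligned}
```

In this specification exercise effects are not modeled as separate method factors; they are absorbed into the covariances among the uniquenesses of ratings collected in the same exercise.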

9.
The assessment center (AC) is a high-fidelity situational simulation designed to measure multiple dimensions across a variety of job-related exercises. Three decades of research have shown that ACs possess good content validity and criterion-related validity, yet their construct validity has remained unsatisfactory: AC ratings consistently reflect effects attributable to the exercises rather than to the intended dimensions. This "construct validity puzzle" of the AC has attracted substantial research attention and has gradually given rise to three main perspectives, the dimension-centered approach, the exercise (task)-centered approach, and the interactionist approach, which respectively advocate controlling various sources of error to improve dimension measurement, abandoning dimensions in favor of exercises or tasks, and attending to the joint effects of dimensions and exercises. Future research should give the exercise-centered approach due attention alongside the traditional dimension-centered approach and should focus on developing the interactionist approach.

10.
The present study uses an alternate analytical framework to examine the degree to which performance is differentiated by dimensions in assessment center (AC) exercises and whether these performance dimensions are rated on the same scale across exercises. Confirmatory factor analysis likelihood ratio tests supported the presence of three broad latent performance dimensions in each of three AC exercises. Additional tests revealed that five of six manifest performance dimensions were rated on the same psychological scale across exercises. Taken together, our results support a multidimensional interpretation of AC exercises and provide empirical support for the notion that differences in AC performance across exercises reflect true performance, rather than a measurement artifact.

11.
Assessment centers (ACs) are popular selection devices in which assessees are assessed on several dimensions during different exercises. Surveys indicate that ACs vary with regard to the transparency of the targeted dimensions and that the number of transparent ACs has increased during recent years. Furthermore, research on this design feature has put forward conceptual arguments regarding the effects of transparency on criterion-related validity, impression management, and fairness perceptions. This study is the first to examine these effects using supervisor-rated job performance data as the criterion. We conducted simulated ACs with transparency as a between-subjects factor. The sample consisted of part-time employed participants who would soon be applying for a new job. In line with our hypothesis, results showed that ratings from an AC with nontransparent dimensions were more criterion valid than ratings from an AC with transparent dimensions. Concerning impression management, our results supported the hypothesis that transparency moderates the relationship between self-promotion and job performance, such that self-promotion in the nontransparent AC was more positively related to job performance than self-promotion in the transparent AC. The data lent no support to the hypothesis that participants' perceptions of their opportunity to perform are higher in the transparent AC.
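The moderation hypothesis described here is typically tested as a moderated regression with a transparency-by-self-promotion interaction term. The sketch below is illustrative only: it uses simulated data and hypothetical variable names, not the study's data or exact analysis.

```python
# Illustrative sketch (simulated data, hypothetical variable names): testing
# whether AC transparency moderates the self-promotion -> job performance
# relationship via an interaction term in OLS. Not the study's data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 200
transparent = rng.integers(0, 2, n)            # 0 = nontransparent AC, 1 = transparent AC
self_promotion = rng.normal(size=n)
# Simulate a stronger self-promotion/performance link in the nontransparent condition.
job_performance = (0.4 * self_promotion * (1 - transparent)
                   + 0.1 * self_promotion * transparent
                   + rng.normal(scale=1.0, size=n))
df = pd.DataFrame({"transparent": transparent,
                   "self_promotion": self_promotion,
                   "job_performance": job_performance})

model = smf.ols("job_performance ~ self_promotion * C(transparent)", data=df).fit()
print(model.summary().tables[1])   # the interaction term carries the moderation test
```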

12.
The application of item response theory (IRT) models requires the identification of the data's dimensionality. A popular method for determining the number of latent dimensions is the factor analysis of a correlation matrix. Unlike factor analysis, which is based on a linear model, IRT assumes a nonlinear relationship between item performance and ability. Because multidimensional scaling (MDS) assumes a monotonic relationship, this method may be useful for the assessment of a data set's dimensionality for use with IRT models. This study compared MDS with exploratory and confirmatory factor analysis (EFA and CFA, respectively) in the assessment of the dimensionality of data sets which had been generated to be either one- or two-dimensional. In addition, the data sets differed in the degree of interdimensional correlation and in the number of items defining a dimension. Results showed that MDS and CFA were able to correctly identify the number of latent dimensions for all data sets. In general, EFA was able to correctly identify the data's dimensionality, except for data whose interdimensional correlation was high.
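The general idea of comparing factor-analytic and MDS-based dimensionality checks can be sketched as follows. This is a minimal illustration with simulated two-dimensional item data, an eigenvalue (scree-type) inspection of the inter-item correlation matrix as a stand-in for the factor-analytic screen, and MDS stress computed with scikit-learn; it does not reproduce the study's design.

```python
# Minimal sketch (simulated data): comparing an eigenvalue (scree-type)
# inspection with MDS stress values as heuristics for the number of latent
# dimensions. Illustrates the general idea, not the study's exact design.
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(3)
n_persons, items_per_dim = 1000, 10
theta = rng.normal(size=(n_persons, 2))                 # two latent abilities
theta[:, 1] = 0.4 * theta[:, 0] + np.sqrt(1 - 0.4**2) * theta[:, 1]  # moderate correlation
loadings = np.zeros((2, 2 * items_per_dim))
loadings[0, :items_per_dim] = 0.8                       # items 0-9 load on dimension 1
loadings[1, items_per_dim:] = 0.8                       # items 10-19 load on dimension 2
scores = theta @ loadings + rng.normal(scale=0.6, size=(n_persons, 2 * items_per_dim))

R = np.corrcoef(scores, rowvar=False)                   # inter-item correlation matrix
eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]
print("Largest eigenvalues:", np.round(eigvals[:4], 2)) # scree-type inspection

# MDS on correlation-based dissimilarities: stress should drop sharply at the
# true dimensionality and level off afterwards.
dissim = 1 - np.abs(R)
for k in (1, 2, 3):
    mds = MDS(n_components=k, dissimilarity="precomputed", random_state=0)
    mds.fit(dissim)
    print(f"MDS stress with {k} dimension(s): {mds.stress_:.2f}")
```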

13.
The purpose of this study was to expand the nomological validity of assessment centers (ACs) by investigating predictors of cross-situationally consistent versus specific aspects of AC performance. Consistent with hypotheses, (a) Big Five personality factors predicted AC performance as it related to a cross-situationally consistent general performance factor but not as it related to exercise (i.e., situationally specific) factors, and (b) job knowledge predicted performance as it related to both the general performance factor and exercise-specific factors. Results are interpreted as they relate to the growing literature on AC construct validity.

14.
The Construct Validity and Structural Model of the Assessment Center
Using multitrait-multimethod (MTMM) and confirmatory factor analysis methods, this study examined the construct validity and structural model of an assessment center composed of a leaderless group discussion, an in-basket exercise, and a personality test. Based on the assessment of 136 participants on four dimensions, the results showed that convergent validity was lower than discriminant validity and that the main factor influencing AC ratings was the assessment method rather than the assessment dimension, yielding a structural model of the AC with assessment methods as latent variables. According to this structural model, the AC works because of the combined results of its multiple assessment methods (situations), indicating that assessment situations play a crucial role in constructing an assessment center.

15.
《人类行为》2013,26(4):325-337
In an assessment center (AC), assessors generally rate an applicant's performance on multiple dimensions in just 1 exercise. This rating procedure introduces common rater variance within exercises but not between exercises. This article hypothesizes that this phenomenon is partly responsible for the consistently reported result that the AC lacks construct validity. Therefore, in this article, the rater effect on discriminant and convergent validity is controlled via a multitrait-multimethod design in which each matrix cell is based on ratings of different assessors. Two independent studies (N = 200, N = 52) showed that, within exercises, correlations decrease when common rater variance is excluded both across exercises (by having assessors rate only 1 exercise) and within exercises (by having assessors rate only 1 dimension per exercise). Implications are discussed in the context of the recent discussion around the appropriateness of the within-exercise versus the within-dimension evaluation method.

16.
This study examined the construct-related validity of an assessment centre (AC) developed by a national distribution company for the selection and development of lower-grade managers. In five locations throughout Britain, 487 individuals were observed on nine dimensions, each of which was measured through six distinct exercises. Multitrait-multimethod analyses conducted to investigate the convergent and discriminant validity of the AC revealed strong exercise ("method") effects. This finding was corroborated by an exploratory factor analysis showing that AC ratings clustered into factors according to exercises, rather than according to performance dimensions. A series of MANOVAs and chi-squared tests demonstrated that neither the exercise ratings nor the selection decision was biased by sex, ethnicity, or training location, and a logistic regression determined which exercises had most impact on the final decision.
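The final analysis mentioned, a logistic regression relating exercise ratings to the dichotomous selection decision, can be sketched as follows. The data, variable names, and exercise labels below are hypothetical placeholders, not the study's data; the example only shows how the relative impact of exercises on the final decision would be estimated.

```python
# Illustrative sketch (simulated data, hypothetical exercise names): logistic
# regression of the final selection decision on exercise ratings, to gauge
# which exercises carry the most weight. Not the study's data or results.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 487
df = pd.DataFrame({
    "group_discussion": rng.normal(size=n),
    "in_basket": rng.normal(size=n),
    "role_play": rng.normal(size=n),
})
# Simulate a decision driven mainly by two of the exercises.
logit_true = 1.2 * df["group_discussion"] + 0.6 * df["in_basket"] - 0.5
df["selected"] = rng.binomial(1, 1 / (1 + np.exp(-logit_true)))

model = smf.logit("selected ~ group_discussion + in_basket + role_play",
                  data=df).fit(disp=False)
print(model.summary())
print("Odds ratios:", np.round(np.exp(model.params), 2))
```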

17.
Research indicates that assessment center (AC) ratings typically demonstrate poor construct validity; that is, they do not measure the intended dimensions of managerial performance (e.g., Sackett & Harris, 1988). The purpose of this study was to investigate the construct validity of dimension ratings from a developmental assessment center (N=102), using multitrait-multimethod analysis and factor analysis. The relationships between AC ratings, job performance ratings, and personality measures also were investigated. Results indicate that the AC ratings failed to demonstrate construct validity. The ratings did not show the expected relationships with the job performance and personality measures. Additionally, the factors underlying these ratings were found to be the AC exercises, rather than the managerial dimensions as expected. Potentially, this lack of construct validity of the dimension ratings is a serious problem for a developmental assessment center. There is little evidence that the managerial weaknesses identified by the AC are the dimensions that actually need to be improved on the job. Methods are discussed for improving the construct validity of AC ratings, for example, by decreasing the cognitive demands on the assessors.

18.
In general, correlations between assessment centre (AC) ratings and personality inventories are low. In this paper, we examine three method factors that may be responsible for these low correlations: differences in (i) rating source (other versus self), (ii) rating domain (general versus specific), and (iii) rating format (multi- versus single item). This study tests whether these three factors diminish correlations between AC exercise ratings and external indicators of similar dimensions. Ratings of personality and performance were combined in an analytical framework following a 2 × 2 × 2 (source, domain, format) completely crossed, within subjects design. Results showed partial support for the influence of each of the three method factors. Implications for future research are discussed.

19.
The present research examined the influence of constructs representing social effectiveness on assessment center (AC) ratings in two samples. We expected different effects of self-monitoring (SM) on different dimension ratings, a positive effect of the ability to identify criteria (ATIC) on the overall AC rating, and a moderating effect of the ATIC on the relationship between SM and the dimension rating. Forty-six (Study 1) and 115 (Study 2) applicants participated in ACs in field settings. Across both studies, SM had a negative effect on the integrity rating. No relationship was identified between SM and social sensitivity or problem solving ratings. In Study 1, the ATIC had a positive effect on the overall AC rating. No support was identified for a moderating effect of the ATIC on the relationship between SM and the social sensitivity rating.

20.
Task-based assessment centers (TBACs) have been suggested to hold promise for practitioners and users of real-world ACs. However, a theoretical understanding of this approach is lacking in the literature, which leads to misunderstandings. The present study tested aspects of a systems model empirically, to help elucidate TBACs and explore their inner workings. When applied to data from an AC completed by 214 managers, canonical correlation analysis revealed that extraversion, abstract reasoning, and verbal reasoning, conceptualized as inputs into a system, explained around 21% of variance in manifest assessment center behavior. Behavior, in this regard, was found to consist of both general and situationally specific elements. Results are discussed in terms of their support for a systems model and as they pertain to the literature on TBACs.
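The canonical-correlation step, relating an "input" set (e.g., extraversion, abstract reasoning, verbal reasoning) to a set of manifest AC behavior ratings, can be sketched with scikit-learn's CCA. The sketch below uses simulated data and hypothetical variable sets; it is not the study's data or analysis.

```python
# Rough sketch (simulated data, hypothetical variables): canonical correlation
# between an "input" set (e.g., extraversion, abstract reasoning, verbal
# reasoning) and a set of manifest AC behavior ratings. Not the study's data.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(4)
n = 214
inputs = rng.normal(size=(n, 3))                    # extraversion, abstract, verbal
# Behaviors partly driven by the inputs plus situation-specific noise.
weights = rng.normal(scale=0.4, size=(3, 6))
behaviors = inputs @ weights + rng.normal(scale=1.0, size=(n, 6))

cca = CCA(n_components=3)
X_c, Y_c = cca.fit_transform(inputs, behaviors)
canonical_rs = [np.corrcoef(X_c[:, i], Y_c[:, i])[0, 1] for i in range(3)]
print("Canonical correlations:", np.round(canonical_rs, 2))
# Squared canonical correlations indicate shared variance between each pair
# of canonical variates.
print("Squared canonical correlations:", np.round(np.square(canonical_rs), 2))
```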
