Similar Articles
20 similar articles found.
1.
This article introduces a new method for assessing personality traits that uses graphic rather than written items to facilitate convergent validation. A computer-administered visual analog procedure is presented that samples 40 equally spaced positions along a trait continuum five times each. Examinees initiate each trial by pressing the space bar and end it by pressing either a “True” or “False” key to indicate whether that location describes them. Response time is measured in milliseconds. The majority answer determines the aggregate response, and the median response time determines the latency of the aggregate response at each location on the trait continuum. This procedure enables an empirical measure of trait variability. Results from 94 college students indicated theoretically meaningful responses on each of two personality dimensions: extraversion and trait anxiety. The predicted inverted-U function was obtained for both dimensions, such that the fastest response times were associated with 0 and 5 “True” responses, somewhat longer response times with 1 and 4 “True” responses, and the longest times with 2 or 3 “True” responses. Statistically significant and substantial validity coefficients were obtained with the Eysenck Personality Inventory Extraversion scale and the State-Trait Anxiety Inventory, Form Y-2.
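As a concrete illustration of the aggregation rule just described, the sketch below computes the majority response and median latency for the five trials at one continuum location. It is a minimal sketch under assumptions: the function name is hypothetical, and taking the median over all five trials (rather than only the majority-consistent ones) is our reading, not necessarily the article's exact procedure.

```python
import numpy as np

def aggregate_location(responses, latencies):
    """Aggregate five True/False trials at one trait-continuum location.

    responses: five booleans (True = "describes me")
    latencies: five response times in milliseconds
    Returns the majority response and the median latency; an odd
    number of trials guarantees a majority.
    """
    responses = np.asarray(responses, dtype=bool)
    majority = responses.sum() >= 3          # at least 3 of 5 trials
    return majority, float(np.median(latencies))

# Example: 4 "True" responses out of 5 trials at one location
resp = [True, True, False, True, True]
rts = [640, 712, 955, 688, 701]
print(aggregate_location(resp, rts))         # (True, 701.0)
```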

2.
Recent applications of item response tree models demonstrate that this model class is well suited to detect midpoint and extremity response style effects in both attitudinal and personality measurements. This paper proposes an extension of this approach that goes beyond measuring response styles and allows us to examine item-feature effects. In a reanalysis of three published data sets, it is shown that the proposed extension captures item-feature effects across affirmative and reverse-worded items in a psychological test. These effects are found to affect directional responses but not midpoint and extremity preferences. Moreover, accounting for item-feature effects substantially improves model fit and interpretation of the construct measurement. The proposed extension can be implemented readily with current software programs that facilitate maximum likelihood estimation of item response models with missing data.

3.
Current approaches to model responses and response times to psychometric tests solely focus on between-subject differences in speed and ability. Within subjects, speed and ability are assumed to be constants. Violations of this assumption are generally absorbed in the residual of the model. As a result, within-subject departures from the between-subject speed and ability level remain undetected. These departures may be of interest to the researcher as they reflect differences in the response processes adopted on the items of a test. In this article, we propose a dynamic approach for responses and response times based on hidden Markov modeling to account for within-subject differences in responses and response times. A simulation study is conducted to demonstrate acceptable parameter recovery and acceptable performance of various fit indices in distinguishing between different models. In addition, both a confirmatory and an exploratory application are presented to demonstrate the practical value of the modeling approach.
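One way to formalize such a within-subject dynamic is a two-state hidden Markov model in which the hidden state switches the person's ability and speed between items. The parameterization below is an illustrative sketch, not necessarily the authors' exact model.

```latex
% Illustrative two-state HMM: s_{pi} \in \{1,2\} is person p's latent
% state at item i; x_{pi} is the response and t_{pi} the response time.
\begin{align*}
P(s_{p1} = k) &= \pi_k, \qquad P(s_{pi} = l \mid s_{p,i-1} = k) = a_{kl},\\
P(x_{pi} = 1 \mid s_{pi} = k) &= \frac{\exp\{\alpha_i(\theta_{pk} - \beta_i)\}}{1 + \exp\{\alpha_i(\theta_{pk} - \beta_i)\}},\\
\log t_{pi} \mid s_{pi} = k &\sim \mathcal{N}\!\left(\lambda_i - \tau_{pk},\; \sigma_i^2\right),
\end{align*}
```

where $\theta_{pk}$ and $\tau_{pk}$ are the state-specific ability and speed levels of person $p$.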

4.
For testlet response data, traditional item response theory (IRT) models are often not appropriate because of the local dependence present among items within a common testlet. Several testlet-based IRT models have been developed to model examinees' responses. In this paper, a new two-parameter normal ogive testlet response theory (2PNOTRT) model for dichotomous items is proposed by introducing testlet discrimination parameters. A Bayesian model parameter estimation approach via a data augmentation scheme is developed. Simulations are conducted to evaluate the performance of the proposed 2PNOTRT model. The results indicate that item parameter estimation is satisfactory overall in terms of convergence. Finally, the proposed 2PNOTRT model is applied to a set of real testlet data.
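For reference, the first line below is the standard two-parameter normal-ogive testlet model (Bradlow, Wainer, & Wang, 1999); the second is a hypothetical variant with a testlet discrimination parameter, in the spirit of the extension described above. The exact parameterization of the 2PNOTRT model may differ.

```latex
% Standard normal-ogive testlet model:
P(y_{ij} = 1 \mid \theta_i) = \Phi\bigl(a_j\theta_i - b_j + \gamma_{i\,d(j)}\bigr)
% Hypothetical variant with a testlet discrimination a^{*}_{d(j)}:
P(y_{ij} = 1 \mid \theta_i) = \Phi\bigl(a_j\theta_i - b_j + a^{*}_{d(j)}\,\gamma_{i\,d(j)}\bigr)
```

where $d(j)$ indexes the testlet containing item $j$ and $\gamma_{id(j)}$ is the person-by-testlet effect that induces the local dependence.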

5.
Missing not at random (MNAR) modeling for non-ignorable missing responses usually assumes that the latent variable distribution is bivariate normal. This assumption is rarely verified and is often employed as a standard in practice. Recent studies of “complete” item responses (i.e., no missing data) have shown that ignoring a nonnormal distribution of a unidimensional latent variable, especially a skewed or bimodal one, can yield biased estimates and misleading conclusions. However, handling a bivariate nonnormal latent variable distribution in the presence of MNAR data has not been investigated. This article proposes extending unidimensional empirical histogram and Davidian curve methods to deal simultaneously with a nonnormal latent variable distribution and MNAR data. A simulation study is carried out to demonstrate the consequences of ignoring a bivariate nonnormal distribution for parameter estimates, followed by an empirical analysis of “don't know” item responses. The results show that examining the assumption of a bivariate normal latent variable distribution should be routine for MNAR data in order to minimize the impact of nonnormality on parameter estimates.

6.
This article describes a model for response times that is proposed as a supplement to the usual factor-analytic model for responses to graded or more continuous typical-response items. The use of the proposed model together with the factor model provides additional information about the respondent and can potentially increase the accuracy of the individual trait estimates. First, the rationale of the model is discussed in relation to previous developments in binary responses. Second, procedures for fitting the model and for assessing model-data fit at both the overall and the individual (person-fit) levels are proposed. Third, the usefulness of the model and its potential applications in the typical-response domain are discussed. All the proposed developments are used in 2 empirical applications in the personality domain. The first application analyzes 2 scales from a Big Five questionnaire. The second example analyzes a sociability scale developed from Eysenck's questionnaires.

7.
Using SAS PROC NLMIXED to fit item response theory models
Researchers routinely construct tests or questionnaires containing a set of items that measure personality traits, cognitive abilities, political attitudes, and so forth. Typically, responses to these items are scored in discrete categories, such as points on a Likert scale or a choice out of several mutually exclusive alternatives. Item response theory (IRT) explains observed responses to items on a test (questionnaire) by a person's unobserved trait, ability, or attitude. Although applications of IRT modeling have increased considerably because of its utility in developing and assessing measuring instruments, IRT modeling has not been fully integrated into the curriculum of colleges and universities, mainly because existing general purpose statistical packages do not provide built-in routines with which to perform IRT modeling. Recent advances in statistical theory and the incorporation of those advances into general purpose statistical software such as the Statistical Analysis System (SAS) allow researchers to analyze measurement data by using a class of models known as generalized linear mixed effects models (McCulloch & Searle, 2001), which include IRT models as special cases. The purpose of this article is to demonstrate the generality and flexibility of using SAS to estimate IRT model parameters. With real data examples, we illustrate the implementation of a variety of IRT models for dichotomous, polytomous, and nominal responses. Since SAS is widely available in educational institutions, it is hoped that this article will contribute to the spread of IRT modeling in quantitative courses.
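The article's examples are written in SAS; as a language-neutral sketch of the same marginal maximum likelihood idea, the Python code below fits a Rasch model with a standard-normal ability integrated out by Gauss-Hermite quadrature (the kind of integration PROC NLMIXED performs adaptively). All names here are hypothetical illustrations, not code from the article.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss  # probabilists' Gauss-Hermite
from scipy.optimize import minimize

def neg_loglik(b, X, nodes, weights):
    """Negative marginal log-likelihood of a Rasch model with item
    difficulties b and ability theta ~ N(0, 1) integrated out over
    the quadrature grid (nodes, weights)."""
    eta = nodes[:, None] - b[None, :]                 # quadrature nodes x items
    p = 1.0 / (1.0 + np.exp(-eta))                    # P(correct | theta = node)
    # log-likelihood of each person's response pattern at each node
    logf = X @ np.log(p).T + (1.0 - X) @ np.log(1.0 - p).T
    return -np.sum(np.log(np.exp(logf) @ weights))    # integrate, sum over persons

# Simulate 500 examinees answering 5 items
rng = np.random.default_rng(1)
true_b = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
theta = rng.normal(size=500)
prob = 1.0 / (1.0 + np.exp(-(theta[:, None] - true_b)))
X = (rng.random((500, 5)) < prob).astype(float)

nodes, weights = hermegauss(21)
weights = weights / weights.sum()                     # weights of an N(0, 1) grid
fit = minimize(neg_loglik, np.zeros(5), args=(X, nodes, weights), method="BFGS")
print(np.round(fit.x, 2))                             # should land near true_b
```

With 500 simulated examinees the recovered difficulties should land close to (-1, -0.5, 0, 0.5, 1); the same marginal-likelihood structure extends to 2PL, polytomous, and nominal models by changing the category probabilities.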

8.
We conducted two experimental studies with between-subjects and within-subjects designs to investigate the item response process for personality measures administered in high- versus low-stakes situations. Apart from assessing the measurement validity of the item response process, we examined predictive validity; that is, whether or not different response models entail differential selection outcomes. We found that ideal point response models fit slightly better than dominance response models across high- versus low-stakes situations in both studies. Additionally, fitting ideal point models to the data led to fewer items displaying differential item functioning compared with fitting dominance models. We also identified several items that functioned as intermediate items in both the faking and honest conditions when ideal point models were fitted, suggesting that the ideal point model is theoretically more suitable across these contexts for personality inventories. However, the use of different response models (dominance vs. ideal point) did not have any substantial impact on the validity of personality measures in high-stakes situations or on the effectiveness of selection decisions such as mean performance or the percentage of fakers selected. These findings are significant: although prior research supports the importance and use of ideal point models for measuring personality, in the case of personality faking the ideal point models offer only slightly better measurement validity, and dominance models may be adequate with no loss of predictive validity.

9.
Factor analysis models have played a central role in formulating conceptual models in personality and personality assessment, as well as in empirical examinations of personality measurement instruments. Yet the use of item-level data presents special problems for factor analysis applications. In this article, we review recent developments in factor analysis that are appropriate for the type of item-level data often collected in personality research. Included in this review are discussions of how these developments have been addressed in the context of two different (but formally related) statistical models for item-level data: item response theory (IRT; Hambleton, Swaminathan, & Rogers, 1991) and structural equation modeling (Bollen, 1989). We also discuss the relevance of item scaling in the context of these models. Using the restandardization data for the Minnesota Multiphasic Personality Inventory-2 (cf. Butcher, Dahlstrom, Graham, Tellegen, & Kaemmer, 1989), we show brief examples of the utility of these approaches for addressing basic questions about responses to personality scale items regarding (a) scale dimensionality and general item properties, (b) the "appropriateness" of the observed responses, and (c) differential item functioning across subsamples. Implications for analyses of personality item-level data in the IRT and factor analytic traditions are discussed.

10.
A loglinear IRT model is proposed that relates polytomously scored item responses to a multidimensional latent space. The analyst may specify a response function for each response, indicating which latent abilities are necessary to arrive at that response. Each item may have a different number of response categories, so that free response items are more easily analyzed. Conditional maximum likelihood estimates are derived and the models may be tested generally or against alternative loglinear IRT models.

11.
In item response theory, modelling the item response times in addition to the item responses may improve the detection of possible between- and within-subject differences in the process that produced the responses. For instance, if respondents rely on rapid guessing on some items but not on all, the joint distribution of the responses and response times will be a multivariate within-subject mixture distribution. Suitable parametric methods to detect these within-subject differences have been proposed. In these approaches, a distribution needs to be assumed for the within-class response times. In this paper, it is demonstrated that these parametric within-subject approaches may produce false positives and biased parameter estimates if the assumption concerning the response time distribution is violated. A semi-parametric approach based on categorized response times is proposed instead. This approach is shown to produce hardly any false positives or parameter bias, while achieving approximately the same power as the parametric approach.

12.
The process of responding to attitude items was broken down into a series of cognitive stages, and a model was offered. To test this model, subjects responded to attitude items varying in extremity under two- or five-response-alternative formats. By measuring response times and applying Sternberg's (1969) additive factor method, the model was supported. The results are discussed in terms of previous process work involving personality items and sentence verification tasks.

13.
Faking on personality assessments remains an unsolved issue, raising major concerns about their validity and fairness. Although there is a large body of quantitative research investigating the response process of faking on personality assessments, for both rating scales (RS) and multidimensional forced choice (MFC), only a few studies have qualitatively investigated faking cognitions when responding to MFC in a high-stakes context (e.g., Sass et al., 2020). Yet it could be argued that only when we have a process model that adequately describes response decisions in high-stakes settings can we begin to extract valid and useful information from assessments. Thus, this qualitative study investigated faking cognitions when responding to an MFC personality assessment in a high-stakes context. Through cognitive interviews with N = 32 participants, we explored and identified factors influencing test-takers' decisions regarding specific items and blocks, as well as factors influencing the willingness to engage in faking in general. Based on these findings, we propose a new response process model of faking forced-choice items, the Activate-Rank-Edit-Submit (A-R-E-S) model. We also make four recommendations for the practice of high-stakes assessment using MFC.

14.
Using Lumsden's Thurstonian fluctuation model as a starting point, this paper attempts to develop a unidimensional item response theory model intended for binary personality items. Under some additional assumptions, a new model is obtained in which the item characteristic curves are defined by a cumulative Pearson-Type-VII distribution, and the person response curves are two-parameter normal ogives. Procedures for fitting the new model are proposed. Furthermore, the relations between individual fluctuation and scalability are discussed, and a scalability index based on the new model is proposed. All the developments in this paper are illustrated using two empirical examples.

15.
Adaptive learning and assessment systems support learners in acquiring knowledge and skills in a particular domain. Learners' progress is monitored as they solve items that match their level and aim at specific learning goals. Scaffolding and providing learners with hints are powerful tools for supporting the learning process. One way of introducing hints is to make hint use the learner's choice: when learners are certain of their response, they answer without hints, but if they are not certain or do not know how to approach the item, they can request a hint. We develop measurement models for applications where such on-demand hints are available. Such models take into account that hint use may be informative of ability, but at the same time may be influenced by other individual characteristics. Two modeling strategies are considered: (1) the measurement model is based on a scoring rule for ability that includes both response accuracy and hint use; (2) the choice to use hints and response accuracy conditional on this choice are modeled jointly using Item Response Tree models. The properties of the different models and their implications are discussed. An application to data from Duolingo, an adaptive language learning system, is presented. Here, the best model is the scoring-rule-based model with full credit for correct responses without hints, partial credit for correct responses with hints, and no credit for all incorrect responses; a sketch of this rule is given below. The second dimension in the model accounts for individual differences in the tendency to use hints.
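A minimal sketch of the best-fitting scoring rule reported above (full, partial, or no credit); the type and function names are hypothetical, not from the paper.

```python
from enum import IntEnum

class Score(IntEnum):
    NO_CREDIT = 0   # incorrect response, with or without a hint
    PARTIAL = 1     # correct response after requesting a hint
    FULL = 2        # correct response without a hint

def score_response(correct: bool, used_hint: bool) -> Score:
    """Scoring rule of the best-fitting model described above:
    hint use lowers the credit for a correct response."""
    if not correct:
        return Score.NO_CREDIT
    return Score.PARTIAL if used_hint else Score.FULL

print(score_response(correct=True, used_hint=True))   # Score.PARTIAL
```

The resulting polytomous scores can then be analyzed with a polytomous IRT model, with the tendency to use hints entering as the model's second dimension, as the abstract describes.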

16.
Cognitive diagnostic assessment aims to uncover the structure of an individual's knowledge mastery and to provide detailed diagnostic information about students' strengths and weaknesses, so as to promote their overall development. Researchers have developed a large number of cognitive diagnosis models for dichotomously (0-1) scored items, but research on polytomous cognitive diagnosis models remains limited. This paper reviews the existing polytomous cognitive diagnosis models, describing their assumptions, psychometric characteristics, and scope of application, in order to provide practitioners and researchers with a reference for comparing and choosing among them. Finally, directions for future research on polytomous diagnosis models are outlined.

17.
Latent trait models for responses and response times in tests often lack a substantive interpretation in terms of a cognitive process model. This is a drawback because process models help clarify the meaning of the latent traits. In the present paper, a new model for responses and response times in tests is presented. The model is based on the proportional hazards model for competing risks. Two processes are assumed: one reflecting the increase in knowledge and the other the tendency to discontinue. The processes can be characterized by two proportional hazards models whose baseline hazard functions correspond to the temporary increase in knowledge and to discouragement. The model can be calibrated with marginal maximum likelihood estimation and an application of the ECM algorithm. Two tests of model fit are proposed. The amenability of the proposed approaches to model calibration and model evaluation is demonstrated in a simulation study. Finally, the model is used for the analysis of two empirical data sets.

18.
This article proposes a factor-analytic model, intended for graded-response or continuous-response personality and attitude items, which includes an additional multiplicative person parameter that models the individual's response mapping process. The model, which is a modification of Spearman's (1904) factor analysis (FA) model, is parameterized as both an FA model and an item response theory (IRT) model and is fully developed to the extent that it can be used in applications. Procedures for (a) calibrating the items and assessing data fit, (b) obtaining individual estimates of both person parameters, (c) determining measurement precision, and (d) assessing differential predictability are proposed and discussed. The potential advantages of the proposal, its practical relevance, and its relations with other approaches are also discussed. Its functioning is assessed with a simulation study and 3 empirical examples in the personality domain.

19.
Current modeling of response times on test items has been strongly influenced by the paradigm of experimental reaction-time research in psychology. For instance, some of the models have a parameter structure that was chosen to represent a speed-accuracy tradeoff, while others equate speed directly with response time. Also, several response-time models seem to be unclear as to the level of parametrization they represent. A hierarchical framework for modeling speed and accuracy on test items is presented as an alternative to these models. The framework allows a “plug-and-play approach” with alternative choices of models for the response and response-time distributions as well as the distributions of their parameters. Bayesian treatment of the framework with Markov chain Monte Carlo (MCMC) computation facilitates the approach. Use of the framework is illustrated for the choice of a normal-ogive response model, a lognormal model for the response times, and multivariate normal models for their parameters, with Gibbs sampling from the joint posterior distribution.
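The level-1 choices mentioned in the illustration (a normal-ogive model for responses and a lognormal model for response times) can be written as follows; this is the standard form of these components as described in the abstract.

```latex
\begin{align*}
P(u_{ij} = 1 \mid \theta_i) &= \Phi\!\left(a_j\theta_i - b_j\right),\\
f(t_{ij} \mid \tau_i) &= \frac{\alpha_j}{t_{ij}\sqrt{2\pi}}
  \exp\!\Bigl\{-\tfrac{1}{2}\bigl[\alpha_j\bigl(\ln t_{ij} - (\beta_j - \tau_i)\bigr)\bigr]^2\Bigr\},
\end{align*}
```

where $\tau_i$ is examinee $i$'s speed, $\beta_j$ the time intensity of item $j$, and $\alpha_j$ its time discrimination; the person parameters $(\theta_i, \tau_i)$ and item parameters $(a_j, b_j, \alpha_j, \beta_j)$ each receive a multivariate normal distribution at the second level.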

20.
In low-stakes assessments, test performance has few or no consequences for the examinees themselves, so examinees may not be fully engaged when answering the items. Instead of engaging in solution behaviour, disengaged examinees might guess randomly or generate no response at all. When ignored, examinee disengagement poses a severe threat to the validity of results obtained from low-stakes assessments. Statistical modelling approaches in educational measurement have been proposed that account for non-response or for guessing, but not for both types of disengaged behaviour simultaneously. We bring together research on modelling examinee engagement and research on missing values, and present a hierarchical latent response model for identifying and modelling the processes associated with examinee disengagement jointly with the processes associated with engaged responses. To that end, we employ a mixture model that identifies disengagement at the item-by-examinee level by assuming different data-generating processes underlying item responses and omissions, respectively, as well as different response times associated with engaged and disengaged behaviour. By modelling examinee engagement within a latent response framework, the model makes it possible to assess how examinee engagement relates to ability and speed and to identify items that are likely to evoke disengaged test-taking behaviour. An illustration of the model through an application to real data is presented.
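An illustrative form of the item-by-examinee mixture described above; the notation is ours and not necessarily the authors' exact parameterization.

```latex
% x_{ij} is the (possibly omitted) response of examinee i on item j,
% t_{ij} the response time, and pi_{ij} the disengagement probability.
f(x_{ij}, t_{ij}) = \pi_{ij}\, f_{\mathrm{dis}}(x_{ij})\, g_{\mathrm{dis}}(t_{ij})
  + (1 - \pi_{ij})\, f_{\mathrm{eng}}(x_{ij} \mid \theta_i)\, g_{\mathrm{eng}}(t_{ij} \mid \tau_i)
```

where the engaged component depends on the examinee's ability $\theta_i$ and speed $\tau_i$, and $\pi_{ij}$ is itself modeled within the latent response framework as a function of examinee and item effects.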
