Similar documents (20 results)
1.
This article analyzes latent variable models from a cognitive psychology perspective. We start by discussing work by Tuerlinckx and De Boeck (2005), who proved that a diffusion model for 2-choice response processes entails a 2-parameter logistic item response theory (IRT) model for individual differences in the response data. Following this line of reasoning, we discuss the appropriateness of IRT for measuring abilities and bipolar traits, such as pro versus contra attitudes. Surprisingly, if a diffusion model underlies the response processes, IRT models are appropriate for bipolar traits but not for ability tests. A reconsideration of the concept of ability that is appropriate for such situations leads to a new item response model for accuracy and speed based on the idea that ability has a natural zero point. The model implies fundamentally new ways to think about guessing, response speed, and person fit in IRT. We discuss the relation between this model and existing models as well as implications for psychology and psychometrics.
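The 2-parameter logistic model that the diffusion argument leads to can be sketched as follows; the function and parameter names are generic, and the values below are illustrative only.

```python
import math

def p_correct(theta, a, b):
    """2-parameter logistic (2PL) item response function: probability of
    a keyed response for ability theta, given item discrimination a and
    item difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))
```

At theta = b the probability is exactly .5 regardless of the discrimination, which is why b is read as the item's location on the latent trait scale.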

2.
Residual analysis (e.g. Hambleton & Swaminathan, Item response theory: principles and applications, Kluwer Academic, Boston, 1985; Hambleton, Swaminathan, & Rogers, Fundamentals of item response theory, Sage, Newbury Park, 1991) is a popular method to assess fit of item response theory (IRT) models. We suggest a form of residual analysis that may be applied to assess item fit for unidimensional IRT models. The residual analysis consists of a comparison of the maximum-likelihood estimate of the item characteristic curve with an alternative ratio estimate of the item characteristic curve. The large sample distribution of the residual is proved to be standardized normal when the IRT model fits the data. We compare the performance of our suggested residual to the standardized residual of Hambleton et al. (Fundamentals of item response theory, Sage, Newbury Park, 1991) in a detailed simulation study. We then calculate our suggested residuals using data from an operational test. The residuals appear to be useful in assessing the item fit for unidimensional IRT models.
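The basic standardized-residual idea can be illustrated with a minimal stand-in: standardize the gap between an observed proportion correct in an ability group and the model-implied probability by the binomial standard error. This is only a sketch of the general approach; the article's ratio estimator of the item characteristic curve is more involved.

```python
import math

def standardized_residual(n_correct, n, p_model):
    """Gap between the observed proportion correct and the model-implied
    probability, standardized by the binomial standard error. Under a
    fitting model this is approximately standard normal for large n."""
    p_obs = n_correct / n
    se = math.sqrt(p_model * (1.0 - p_model) / n)
    return (p_obs - p_model) / se
```

Values well beyond ±2 in several ability groups would flag an item whose estimated curve departs from the data.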

3.
Item response theory (IRT) models predict examinee performance from item and person characteristics and are widely used psychometric models. The effective use of IRT, however, depends on how well the chosen IRT model matches the actual data (i.e., model-data goodness of fit). Only when the chosen model fits the data well can the advantages and functions of IRT truly be realized (Orlando & Thissen, 2000). When the chosen IRT model does not fit the data, or the wrong model is selected, parameter estimation, test equating, and differential item functioning analyses can suffer substantial error (Kang, Cohen & Sung, 2009), with adverse consequences for practice. Therefore, before applying IRT, one should carefully examine and test whether the chosen model matches the data (McKinley & Mills, 1985). The model-data fit statistics commonly used in IRT can be described and compared from two perspectives, item fit and test fit. This is an important topic in psychological and educational measurement, and an easily overlooked step in test analysis; no published review of this kind is currently available. Future research could pursue empirical comparisons of these statistics and extensions to the cognitive diagnosis field.

4.
In item response theory (IRT), the invariance property states that item parameter estimates are independent of the examinee sample, and examinee ability estimates are independent of the test items. While this property has long been established and understood by the measurement community for IRT models, the same cannot be said for diagnostic classification models (DCMs). DCMs are a newer class of psychometric models that are designed to classify examinees according to levels of categorical latent traits. We examined the invariance property for general DCMs using the log-linear cognitive diagnosis model (LCDM) framework. We conducted a simulation study to examine the degree to which theoretical invariance of LCDM classifications and item parameter estimates can be observed under various sample and test characteristics. Results illustrated that LCDM classifications and item parameter estimates show clear invariance when adequate model data fit is present. To demonstrate the implications of this important property, we conducted additional analyses to show that using pre-calibrated tests to classify examinees provided consistent classifications across calibration samples with varying mastery profile distributions and across tests with varying difficulties.

5.
An instrument's sensitivity to detect individual-level change is an important consideration for both psychometric and clinical researchers. In this article, we develop a cognitive problems measure and evaluate its sensitivity to detect change from an item response theory (IRT) perspective. After illustrating assumption checking and model fit assessment, we detail 4 features of IRT modeling: (a) the scale information curve and its relation to the bandwidth of measurement precision, (b) the scale response curve and how it is used to link the latent trait metric with the raw score metric, (c) content-based versus norm-based score referencing, and (d) the level of measurement of the latent trait scale. We conclude that IRT offers an informative, alternative framework for understanding an instrument's psychometric properties and recommend that IRT analyses be considered prior to investigations of change, growth, or the effectiveness of clinical interventions.

6.
In Item Response Theory (IRT), item characteristic curves (ICCs) are illustrated through logistic models or normal ogive models, and the probability that examinees give the correct answer is usually a monotonically increasing function of their ability parameters. However, since only limited patterns of shapes can be obtained from logistic models or normal ogive models, there is a possibility that the model applied does not fit the data. As a result, the existing method can be rejected because it cannot deal with various item response patterns. To overcome these problems, we propose a new semiparametric IRT model using a Dirichlet process mixture logistic distribution. Our method does not rely on assumptions but only requires that the ICCs be a monotonically nondecreasing function; that is, our method can deal with more types of item response patterns than the existing methods, such as the one-parameter normal ogive models or the two- or three-parameter logistic models.

7.
Generating items during testing: Psychometric issues and models
On-line item generation is becoming increasingly feasible for many cognitive tests. Item generation seemingly conflicts with the well established principle of measuring persons from items with known psychometric properties. This paper examines psychometric principles and models required for measurement from on-line item generation. Three psychometric issues are elaborated for item generation. First, design principles to generate items are considered. A cognitive design system approach is elaborated and then illustrated with an application to a test of abstract reasoning. Second, psychometric models for calibrating generating principles, rather than specific items, are required. Existing item response theory (IRT) models are reviewed and a new IRT model that includes the impact on item discrimination, as well as difficulty, is developed. Third, the impact of item parameter uncertainty on person estimates is considered. Results from both fixed content and adaptive testing are presented. This article is based on the Presidential Address Susan E. Embretson gave on June 26, 1999 at the 1999 Annual Meeting of the Psychometric Society held at the University of Kansas in Lawrence, Kansas. —Editor

8.
Person-fit statistics have been proposed to investigate the fit of an item score pattern to an item response theory (IRT) model. The author investigated how these statistics can be used to detect different types of misfit. Intelligence test data were analyzed using person-fit statistics in the context of the G. Rasch (1960) model and R. J. Mokken's (1971, 1997) IRT models. The effect of the choice of an IRT model to detect misfitting item score patterns and the usefulness of person-fit statistics for diagnosis of misfit are discussed. Results showed that different types of person-fit statistics can be used to detect different kinds of person misfit. Parametric person-fit statistics had more power than nonparametric person-fit statistics.
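A common parametric person-fit statistic of the kind studied in this literature is the standardized log-likelihood, usually written l_z. A compact sketch, assuming 0/1 item scores and known model-implied probabilities (the abstract does not specify which statistics were used, so this is a representative example rather than the author's exact procedure):

```python
import math

def lz(scores, probs):
    """Standardized log-likelihood person-fit statistic (l_z).
    scores: 0/1 item responses; probs: model-implied P(correct) per item.
    Large negative values flag misfitting item score patterns."""
    l0 = sum(x * math.log(p) + (1 - x) * math.log(1 - p)
             for x, p in zip(scores, probs))
    mean = sum(p * math.log(p) + (1 - p) * math.log(1 - p) for p in probs)
    var = sum(p * (1 - p) * math.log(p / (1 - p)) ** 2 for p in probs)
    return (l0 - mean) / math.sqrt(var)
```

A pattern that agrees with the item ordering (correct on easy items, wrong on hard ones) scores higher than the reversed, aberrant pattern.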

9.
杨向东 《心理科学进展》2010,18(8):1349-1358
This article analyzes, from the perspective of the cognitive processes involved in solving test items, the basic assumptions of measurement models under different test-theory frameworks. It argues that a measurement model is a concrete representation of the test developer's theoretical hypotheses about the response mechanism of test items, and a statistical framework for systematically testing measurement assumptions and processes. However, whether in classical test theory, generalizability theory, or early item response theory models, the relevant assumptions are overly simplified and lack support from substantive theory. By contrast, cognitive measurement models emphasize correspondence with the cognitive processes, cognitive strategies, and knowledge structures that individuals engage while responding to test items, making it possible to define measurement constructs, design test items, and conduct modeling, analysis, and interpretation on the basis of substantive theory, and thereby laying a foundation for integrating an increasingly marginalized psychometrics with mainstream psychological research.

10.
Item responses that do not fit an item response theory (IRT) model may cause the latent trait value to be inaccurately estimated. In the past two decades several statistics have been proposed that can be used to identify nonfitting item score patterns. These statistics all yield scalar values. Here, the use of the person response function (PRF) for identifying nonfitting item score patterns was investigated. The PRF is a function and can be used for diagnostic purposes. First, the PRF is defined in a class of IRT models that imply an invariant item ordering. Second, a person-fit method proposed by Trabin & Weiss (1983) is reformulated in a nonparametric IRT context assuming invariant item ordering, and statistical theory proposed by Rosenbaum (1987a) is adapted to test locally whether a PRF is nonincreasing. Third, a simulation study was conducted to compare the use of the PRF with the person-fit statistic ZU3. It is concluded that the PRF can be used as a diagnostic tool in person-fit research. The authors are grateful to Coen A. Bernaards for preparing the figures used in this article, and to Wilco H.M. Emons for checking the calculations.
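The person response function idea can be illustrated nonparametrically: order a person's items by difficulty, split them into groups, and compute the proportion correct per group; under an invariant item ordering the resulting curve should be nonincreasing. A toy sketch (the equal-size grouping scheme here is my own simplification, not the article's method):

```python
def person_response_function(scores, difficulties, n_groups=3):
    """Nonparametric PRF sketch: sort one person's 0/1 item scores by item
    difficulty, split into n_groups groups, and return the proportion
    correct per group. An increase from an easier to a harder group
    suggests person misfit."""
    items = sorted(zip(difficulties, scores))  # easiest items first
    size = len(items) // n_groups
    props = []
    for g in range(n_groups):
        chunk = items[g * size:(g + 1) * size] if g < n_groups - 1 else items[g * size:]
        props.append(sum(s for _, s in chunk) / len(chunk))
    return props
```

Because the PRF is a function rather than a single number, it shows where along the difficulty scale the misfit occurs, which is the diagnostic advantage the abstract emphasizes.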

11.
12.
Log-Multiplicative Association Models as Item Response Models
Log-multiplicative association (LMA) models, which are special cases of log-linear models, have interpretations in terms of latent continuous variables. Two theoretical derivations of LMA models based on item response theory (IRT) arguments are presented. First, we show that Anderson and colleagues (Anderson & Vermunt, 2000; Anderson & Böckenholt, 2000; Anderson, 2002), who derived LMA models from statistical graphical models, made the equivalent assumptions as Holland (1990) when deriving models for the manifest probabilities of response patterns based on an IRT approach. We also present a second derivation of LMA models where item response functions are specified as functions of rest-scores. These various connections provide insights into the behavior of LMA models as item response models and point out philosophical issues with the use of LMA models as item response models. We show that even for short tests, LMA and standard IRT models yield very similar, often nearly identical, results when data arise from standard IRT models. Log-multiplicative association models can be used as item response models and do not require numerical integration for estimation.

13.
Differential item functioning (DIF) is a pernicious statistical issue that can mask true group differences on a target latent construct. A considerable amount of research has focused on evaluating methods for testing DIF, such as using likelihood ratio tests in item response theory (IRT). Most of this research has focused on the asymptotic properties of DIF testing, in part because many latent variable methods require large samples to obtain stable parameter estimates. Much less research has evaluated these methods in small sample sizes despite the fact that many social and behavioral scientists frequently encounter small samples in practice. In this article, we examine the extent to which model complexity—the number of model parameters estimated simultaneously—affects the recovery of DIF in small samples. We compare three models that vary in complexity: logistic regression with sum scores, the 1-parameter logistic IRT model, and the 2-parameter logistic IRT model. We expected that logistic regression with sum scores and the 1-parameter logistic IRT model would more accurately estimate DIF because these models yielded more stable estimates despite being misspecified. Indeed, a simulation study and empirical example of adolescent substance use show that, even when data are generated from, and assumed to follow, a 2-parameter logistic IRT model, using parsimonious models in small samples leads to more powerful tests of DIF while adequately controlling for Type I error. We also provide evidence for minimum sample sizes needed to detect DIF, and we evaluate whether applying corrections for multiple testing is advisable. Finally, we provide recommendations for applied researchers who conduct DIF analyses in small samples.

14.
Explanatory item response theory models (EIRTM) are item response theory (IRT) models built on generalized linear mixed models and nonlinear mixed models. EIRTM can add predictor variables directly to an IRT model and thereby address a range of measurement problems. We first introduce the concepts and parameter-estimation methods of EIRTM, then show how EIRTM can be used to handle item position effects, test mode effects, differential item functioning, local person dependence, and local item dependence. We then illustrate the use of EIRTM with an empirical example, and conclude by discussing its limitations and application prospects.

15.
We offer a new theoretical angle for cognitive arithmetic, which is that evidence accumulation may play a role in problem plausibility decisions. We build upon previous studies that have considered such a hypothesis, and here formally evaluate the paradigm. We develop the finding that performance differences, due to variations in strategy use and aging effects, can indeed be reasonably explained through these accumulation-to-bound cognitive models. Results suggest that these models may be effectively used to learn more about the underlying cognitive processes. In this study, we modelled young (18–24) and older (68–82) adults’ solution times in performing arithmetic verification (e.g. whether 8 × 5 = 41 is true/false). The domain-relevant factors in strategy use (problem-verification heuristics) and aging differences (older/younger adult groups) were analyzed by a response process model of the latency data, that is fit by participant and item. Lower thresholds accounted for the faster response times (RTs) for problems solved with heuristics (arithmetic rule-violation checking strategies), as opposed to problems solved by calculation approaches. A more rapid accumulation accounted for faster RTs on problems in which two arithmetic rules were violated (strategy combination) rather than one. Third, higher thresholds (i.e. preferring to have greater certainty before responding) accounted for older adults’ slower speed. These findings are in support of accumulation models being relevant for more complex cognitive tasks, as well as to account for the age-related differences therein.
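The accumulation-to-bound mechanism can be sketched as a noisy random walk: evidence drifts toward a decision bound, and raising the bound trades speed for certainty. This is a generic illustration of the paradigm, not the authors' fitted model; all parameter names and values are assumptions.

```python
import random

def accumulate(drift, threshold, noise=0.5, dt=0.01, max_t=30.0, rng=None):
    """One accumulation-to-bound trial: evidence x drifts at rate `drift`
    with Gaussian noise until it crosses +/-threshold (or max_t elapses).
    Returns (decision_time, 'true' or 'false')."""
    rng = rng or random.Random()
    x, t = 0.0, 0.0
    while abs(x) < threshold and t < max_t:
        x += drift * dt + rng.gauss(0.0, noise) * dt ** 0.5
        t += dt
    return t, ("true" if x > 0 else "false")
```

With the same drift, a higher threshold yields slower decisions, which is the mechanism the authors use to account for older adults' longer RTs; a steeper drift yields faster decisions, matching the two-rule-violation finding.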

16.
Using Louis’ formula, it is possible to obtain the observed information matrix and the corresponding large-sample standard error estimates after the expectation–maximization (EM) algorithm has converged. However, Louis’ formula is commonly de-emphasized due to its relatively complex integration representation, particularly when studying latent variable models. This paper provides a holistic overview that demonstrates how Louis’ formula can be applied efficiently to item response theory (IRT) models and other popular latent variable models, such as cognitive diagnostic models (CDMs). After presenting the algebraic components required for Louis’ formula, two real data analyses, with accompanying numerical illustrations, are presented. Next, a Monte Carlo simulation is presented to compare the computational efficiency of Louis’ formula with previously existing methods. Results from these presentations suggest that Louis’ formula should be adopted as a standard method when computing the observed information matrix for IRT models and CDMs fitted with the EM algorithm due to its computational efficiency and flexibility.
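For reference, Louis' identity in its usual textbook form (a standard statement of the result, not the paper's exact notation): the observed information equals the conditional expectation of the complete-data information minus the conditional variance of the complete-data score,

```latex
I_{\mathrm{obs}}(\theta \mid y)
  \;=\; \mathbb{E}\!\left[\, I_{c}(\theta \mid y, z) \,\middle|\, y \,\right]
  \;-\; \operatorname{Var}\!\left[\, S_{c}(\theta \mid y, z) \,\middle|\, y \,\right],
```

where y is the observed data, z is the latent data the EM algorithm integrates over, S_c is the complete-data score, and I_c the complete-data information. Both conditional moments involve only complete-data quantities, which is what makes the formula convenient once EM has converged.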

17.
The authors discuss the applicability of nonparametric item response theory (IRT) models to the construction and psychometric analysis of personality and psychopathology scales, and they contrast these models with parametric IRT models. They describe the fit of nonparametric IRT to the Depression content scale of the Minnesota Multiphasic Personality Inventory-2 (J. N. Butcher, W. G. Dahlstrom, J. R. Graham, A. Tellegen, & B. Kaemmer, 1989). They also show how nonparametric IRT models can easily be applied and how misleading results from parametric IRT models can be avoided. They recommend the use of nonparametric IRT modeling prior to using parametric logistic models when investigating personality data.

18.
詹沛达  陈平  边玉芳 《心理学报》2016,48(10):1347-1356
As demand for fine-grained test feedback grows, measurement methods with cognitive diagnostic functions have attracted increasing attention. While cognitive diagnosis models (CDMs) are in the spotlight, another class of models that can provide fine-grained feedback on a continuous scale, multidimensional IRT models (MIRTMs), seems somewhat neglected. To explore the potential diagnostic function of MIRTMs, this article takes compensatory models as its viewpoint and focuses on the multidimensional two-parameter logistic model (M2PLM), a MIRTM, and the linear logistic model (LLM), a CDM. To make the two comparable, a confirmatory matrix (Q-matrix) is introduced into the compensatory M2PLM to define the relations between items and dimensions, yielding the confirmatory compensatory M2PLM (CC-M2PLM); latent traits are then dichotomized into attributes at cut points so that the CC-M2PLM can exhibit the diagnostic function it should possess. A pilot study indicates that 0 on the logistic scale can serve as a relatively reasonable cut point. A simulation study comparing the diagnostic functions of the CC-M2PLM and the LLM shows that the CC-M2PLM can be used to analyze diagnostic test data, with diagnostic performance comparable to using the LLM directly. Finally, two empirical data sets illustrate the feasibility of the CC-M2PLM in practical diagnostic test analysis.

19.
In this paper, we apply Vuong’s general approach of model selection to the comparison of nested and non-nested unidimensional and multidimensional item response theory (IRT) models. Vuong’s approach of model selection is useful because it allows for formal statistical tests of both nested and non-nested models. However, only the test of non-nested models has been applied in the context of IRT models to date. After summarizing the statistical theory underlying the tests, we investigate the performance of all three distinct Vuong tests in the context of IRT models using simulation studies and real data. In the non-nested case we observed that the tests can reliably distinguish between the graded response model and the generalized partial credit model. In the nested case, we observed that the tests typically perform as well as or sometimes better than the traditional likelihood ratio test. Based on these results, we argue that Vuong’s approach provides a useful set of tools for researchers and practitioners to effectively compare competing nested and non-nested IRT models.
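The core of Vuong's non-nested test is simple: compute per-observation log-likelihood differences between the two models and standardize their mean. A simplified sketch that omits the preliminary variance test and any AIC/BIC-style correction for differing numbers of parameters (both of which a full application would include):

```python
import math
import statistics

def vuong_z(ll_a, ll_b):
    """Vuong non-nested z statistic from per-observation log-likelihoods.
    z = mean(d) / (sd(d) / sqrt(n)) with d_i = ll_a[i] - ll_b[i].
    |z| > 1.96 favors the model with the higher total log-likelihood;
    z near 0 means the models are statistically indistinguishable."""
    d = [a - b for a, b in zip(ll_a, ll_b)]
    return statistics.mean(d) / (statistics.stdev(d) / math.sqrt(len(d)))
```

The sign convention makes the test directional: swapping the two models flips the sign of z.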

20.
The use of multidimensional forced-choice (MFC) items to assess non-cognitive traits such as personality, interests and values in psychological tests has a long history, because MFC items show strengths in preventing response bias. Recently, there has been a surge of interest in developing item response theory (IRT) models for MFC items. However, nearly all of the existing IRT models have been developed for MFC items with binary scores. Real tests use MFC items with more than two categories; such items are more informative than their binary counterparts. This study developed a new IRT model for polytomous MFC items based on the cognitive model of choice, which describes the cognitive processes underlying humans' preferential choice behaviours. The new model is unique in its ability to account for the ipsative nature of polytomous MFC items, to assess individual psychological differentiation in interests, values and emotions, and to compare the differentiation levels of latent traits between individuals. Simulation studies were conducted to examine the parameter recovery of the new model with existing computer programs. The results showed that both statement parameters and person parameters were well recovered when the sample size was sufficient. The more complete the linking of the statements was, the more accurate the parameter estimation was. This paper provides an empirical example of a career interest test using four-category MFC items. Although some aspects of the model (e.g., the nature of the person parameters) require additional validation, our approach appears promising.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号