Similar Articles
20 similar articles found (search time: 31 ms)
1.
Measurement invariance is a fundamental assumption in item response theory models, where the relationship between a latent construct (ability) and observed item responses is of interest. Violation of this assumption can lead to misinterpretation of the scale or systematic bias against certain groups of persons. While a number of methods have been proposed to detect violations of measurement invariance, they typically require the problematic item parameters and the respondent grouping to be defined in advance; in practice, however, this information is typically unknown. As an alternative, this paper focuses on a family of recently proposed tests based on stochastic processes of casewise derivatives of the likelihood function (i.e., scores). These score-based tests only require estimation of the null model (in which measurement invariance is assumed to hold), and they have previously been applied in factor-analytic, continuous-data contexts as well as in models of the Rasch family. In this paper, we extend these tests to two-parameter item response models, with strong emphasis on pairwise maximum likelihood. The tests' theoretical background and implementation are detailed, the tests' ability to identify problematic item parameters is studied via simulation, and an empirical example illustrates the tests' use in practice.
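As a rough illustration of the machinery behind such tests, the sketch below simulates Rasch-type responses (all parameter values are made up, and abilities are treated as known purely for brevity), computes the casewise scores for the item difficulty at its estimate, and forms the cumulative score process along an ordering covariate together with a double-maximum statistic. This is a didactic sketch only, not the paper's pairwise-maximum-likelihood implementation.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

random.seed(1)
n = 200
persons = [(random.gauss(0, 1), random.uniform(20, 60)) for _ in range(n)]
persons.sort(key=lambda p: p[1])          # order persons by the covariate (age)
theta = [t for t, _ in persons]
b_true = 0.3
y = [1 if random.random() < sigmoid(t - b_true) else 0 for t in theta]

# Estimate the item difficulty b by Newton-Raphson; the sum of casewise
# scores is zero at the maximum-likelihood estimate.
b = 0.0
for _ in range(25):
    p = [sigmoid(t - b) for t in theta]
    f = sum(pi - yi for pi, yi in zip(p, y))      # sum of casewise scores
    b -= f / (-sum(pi * (1 - pi) for pi in p))    # f / f'

# Casewise scores s_i = d(log L_i)/db = p_i - y_i at the estimate.
p = [sigmoid(t - b) for t in theta]
scores = [pi - yi for pi, yi in zip(p, y)]

# Cumulative score process along the covariate ordering and the
# double-maximum statistic max_t |B(t)| used by score-based tests.
sd = math.sqrt(sum(s * s for s in scores) / n)
B, cum = [], 0.0
for s in scores:
    cum += s
    B.append(cum / (sd * math.sqrt(n)))
stat = max(abs(v) for v in B)
print(round(b, 2), round(stat, 3))
```

Because the scores sum to zero at the estimate, the process is tied down at both ends; systematic drift away from zero along the covariate would indicate parameter instability (a violation of invariance).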

2.
3.
Retrieving information from memory improves recall accuracy more than continued studying, but this testing effect often only becomes visible over time. In contrast, the present study documents testing effects on recall speed both immediately after practice and after a delay. A total of 40 participants learned the translation of 100 Swahili words and then further restudied the words with translations or retrieved the translations from memory during testing. As in previous experiments, recall accuracy was higher for restudied words than for tested words immediately after practice, but higher for tested words after 7 days. Response times for correct answers, however, showed a different result: Learners were faster to recall tested words than restudied words both immediately after practice and after 7 days. These results are interpreted in light of recent suggestions that testing selectively strengthens cue–response associations. An additional outcome was that testing effects on recall accuracy were related to perceived retrieval success during practice. When several practice retrievals were successful, testing effects on recall accuracy were already significant immediately after practice. Together with the reaction time data, this supports recent models that attribute changes in testing effects over time to limited item retrievability during practice.

4.
Utilizing technology for automated item generation is not a new idea. However, test items used in commercial testing programs or in research are still predominantly written by humans, in most cases by content experts or professional item writers. Human experts are a limited resource, and testing agencies incur high costs in the process of continuously renewing item banks to sustain testing programs. Using algorithms instead holds the promise of providing unlimited resources for this crucial part of assessment development. The approach presented here deviates in several ways from previous attempts to solve this problem. In the past, automatic item generation relied either on generating clones of narrowly defined item types, such as those found in language-free intelligence tests (e.g., Raven's Progressive Matrices), or on an extensive analysis of task components and the derivation of schemata to produce items with pre-specified variability, in the hope of achieving predictable levels of difficulty. Researchers who used these earlier approaches may view the proposed approach with skepticism; however, recent applications of machine learning have succeeded at tasks that seemed impossible for machines not long ago. The proposed approach uses deep learning to implement probabilistic language models, not unlike those used by Google Brain and Amazon Alexa for language processing and generation.

5.
Marginal maximum-likelihood procedures for parameter estimation and testing the fit of a hierarchical model for speed and accuracy on test items are presented. The model is a composition of two first-level models for dichotomous responses and response times along with multivariate normal models for their item and person parameters. It is shown how the item parameters can easily be estimated using Fisher's identity. To test the fit of the model, Lagrange multiplier tests of the assumptions of subpopulation invariance of the item parameters (i.e., no differential item functioning), the shape of the response functions, and three different types of conditional independence were derived. Simulation studies were used to show the feasibility of the estimation and testing procedures and to estimate the power and Type I error rate of the latter. In addition, the procedures were applied to an empirical data set from a computerized adaptive test of language comprehension.

6.
Computer-based testing makes it possible to collect response-time information, and the effective use of this information has had a major impact on both the theory and the practice of psychological and educational measurement. This article first summarizes five major advantages of using response-time information in testing. It then introduces representative response-time models under four different modeling approaches, describes their characteristics, and evaluates each in turn. Next, it systematically reviews practical applications of response-time models so that readers can appreciate the role response-time information plays in testing. Finally, it discusses several directions for future research on applying response times in psychological and educational measurement.

7.
Three classes of polytomous IRT models are distinguished. These classes are the adjacent category models, the cumulative probability models, and the continuation ratio models. So far, the latter class has received relatively little attention. The class of continuation ratio models includes logistic models, such as the sequential model (Tutz, 1990), and nonlogistic models, such as the acceleration model (Samejima, 1995) and the nonparametric sequential model (Hemker, 1996). Four measurement properties are discussed. These are monotone likelihood ratio of the total score, stochastic ordering of the latent trait by the total score, stochastic ordering of the total score by the latent trait, and invariant item ordering. These properties have been investigated previously for the adjacent category models and the cumulative probability models; here they are investigated for the continuation ratio models. It is shown that stochastic ordering of the total score by the latent trait is implied by all continuation ratio models, while monotone likelihood ratio of the total score and stochastic ordering of the latent trait by the total score are not implied by any of the continuation ratio models. Only the sequential rating scale model implies the property of invariant item ordering. Also, we present a Venn diagram showing the relationships among all known polytomous IRT models from all three classes.
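As a sketch of the continuation-ratio idea, the snippet below computes category probabilities under a logistic sequential model in the spirit of Tutz (1990): each "step" k is passed with probability sigmoid(theta − b_k), and the observed category is the index of the first failed step. The parameter values and the helper name `sequential_probs` are invented for illustration.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sequential_probs(theta, b):
    """Category probabilities P(X = k), k = 0..m, under a sequential
    (continuation-ratio) model: step j is passed with probability
    pi_j = sigmoid(theta - b[j]); X = k means steps 1..k were passed
    and step k+1 was failed (X = m means all m steps were passed)."""
    m = len(b)
    pi = [sigmoid(theta - bj) for bj in b]
    probs = []
    for k in range(m + 1):
        p = 1.0
        for j in range(k):
            p *= pi[j]           # passed steps 1..k
        if k < m:
            p *= 1.0 - pi[k]     # failed step k+1
        probs.append(p)
    return probs

probs = sequential_probs(theta=0.5, b=[-1.0, 0.0, 1.2])
print([round(p, 3) for p in probs])
```

By construction the category probabilities always sum to one, whatever the step difficulties, which is what makes the continuation-ratio factorization convenient to work with.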

8.
Two experiments examined item recognition memory for sequentially presented odours. Following a sequence of six odours participants were immediately presented with a series of two-alternative forced-choice (2AFC) test odours. The test pairs were presented in either the same order as learning or the reverse order of learning. Method of testing was either blocked (Experiment 1) or mixed (Experiment 2). Both experiments demonstrated extended recency, with an absence of primacy, for the reverse testing procedure. In contrast, the forward testing procedure revealed a null effect of serial position. The finding of extended recency is inconsistent with the single-item recency predicted by the two-component duplex theory (Phillips & Christie, 1977). We offer an alternative account of the data in which recognition accuracy is better accommodated by the cumulative number of items presented between item learning and item test.

9.
A real-data simulation of computerized adaptive testing (CAT) is an important step in real-life CAT applications. Such a simulation allows CAT developers to evaluate important features of the CAT system, such as item selection and stopping rules, before live testing. SIMPOLYCAT, a SAS macro program, was created by the authors to conduct real-data CAT simulations based on polytomous item response theory (IRT) models. In SIMPOLYCAT, item responses can be input from an external file or generated internally on the basis of item parameters provided by users. The program allows users to choose among methods of setting the initial θ, approaches to item selection, trait estimators, CAT stopping criteria, polytomous IRT models, and other CAT parameters. In addition, CAT simulation results can be saved easily and used for further study. The purpose of this article is to introduce SIMPOLYCAT, briefly describe the program algorithm and parameters, and provide examples of CAT simulations using generated and real data. Visual comparisons of the results obtained from the CAT simulations are presented.
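SIMPOLYCAT itself is a SAS macro; as a language-neutral illustration of the core loop such a simulation runs, here is a minimal Python sketch using a dichotomous 2PL bank (hypothetical parameters), maximum-information item selection, EAP trait estimation on a quadrature grid, and a fixed-length stopping rule. SIMPOLYCAT additionally supports polytomous models and many options not shown here.

```python
import math
import random

def p2pl(theta, a, b):
    """2PL probability of a correct response."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def info(theta, a, b):
    """Fisher information of a 2PL item at theta."""
    p = p2pl(theta, a, b)
    return a * a * p * (1 - p)

random.seed(7)
bank = [(random.uniform(0.8, 2.0), random.gauss(0, 1)) for _ in range(50)]
true_theta = 1.0
grid = [x / 10.0 for x in range(-40, 41)]           # quadrature grid on [-4, 4]
prior = [math.exp(-g * g / 2) for g in grid]        # N(0, 1) prior (unnormalized)

admin, responses = [], []
theta_hat = 0.0
for _ in range(20):                                 # fixed-length stopping rule
    # Select the unused item with maximum information at the current estimate.
    item = max((i for i in range(len(bank)) if i not in admin),
               key=lambda i: info(theta_hat, *bank[i]))
    a, b = bank[item]
    u = 1 if random.random() < p2pl(true_theta, a, b) else 0
    admin.append(item)
    responses.append(u)
    # EAP update of theta on the grid.
    post = prior[:]
    for i, u_i in zip(admin, responses):
        ai, bi = bank[i]
        for k, g in enumerate(grid):
            p = p2pl(g, ai, bi)
            post[k] *= p if u_i else (1 - p)
    z = sum(post)
    theta_hat = sum(g * w for g, w in zip(grid, post)) / z
print(len(admin), round(theta_hat, 2))
```

Swapping in a different selection rule, estimator, or stopping criterion only changes one block of this loop, which is exactly the kind of comparison a simulation program like SIMPOLYCAT is built to automate.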

10.
Nonparametric tests for testing the validity of polytomous ISOP-models (unidimensional ordinal probabilistic polytomous IRT-models) are presented. Since the ISOP-model is a very general nonparametric unidimensional rating scale model, the test statistics apply to a great multitude of latent trait models. A test for the comonotonicity of item sets of two or more items is suggested. Procedures for testing the comonotonicity of two item sets and for item selection are developed. The tests are based on, and generalize, Goodman-Kruskal's gamma index of ordinal association. An essential advantage of polytomous ISOP-models among probabilistic IRT-models is that the validity of the model can be tested before, and without, fitting the model to the data. The new test statistics have the further advantage that no prior order of items or subjects needs to be known.
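Since the tests build on Goodman-Kruskal's gamma, a minimal illustration of that index (concordant minus discordant pairs over their sum, tied pairs ignored) on two made-up polytomous item-score vectors:

```python
from itertools import combinations

def gk_gamma(x, y):
    """Goodman-Kruskal's gamma: (C - D) / (C + D), where C and D count
    concordant and discordant person pairs; pairs tied on either
    variable are ignored."""
    c = d = 0
    for (x1, y1), (x2, y2) in combinations(zip(x, y), 2):
        s = (x1 - x2) * (y1 - y2)
        if s > 0:
            c += 1
        elif s < 0:
            d += 1
    return (c - d) / (c + d)

# Two polytomous item-score vectors for the same eight persons.
item1 = [0, 1, 1, 2, 2, 3, 3, 3]
item2 = [0, 0, 1, 1, 2, 2, 3, 3]
print(round(gk_gamma(item1, item2), 3))
```

Here the two score vectors never disagree in direction, so gamma reaches its maximum of 1; a perfectly reversed ordering would give −1.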

11.
Cluster Analysis for Cognitive Diagnosis: Theory and Applications
Latent class models for cognitive diagnosis often begin with specification of a matrix that indicates which attributes or skills are needed for each item. Then by imposing restrictions that take this into account, along with a theory governing how subjects interact with items, parametric formulations of item response functions are derived and fitted. Cluster analysis provides an alternative approach that does not require specifying an item response model, but does require an item-by-attribute matrix. After summarizing the data with a particular vector of sum-scores, K-means cluster analysis or hierarchical agglomerative cluster analysis can be applied with the purpose of clustering subjects who possess the same skills. Asymptotic classification accuracy results are given, along with simulations comparing effects of test length and method of clustering. An application to a language examination is provided to illustrate how the methods can be implemented in practice.
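A toy illustration of the clustering step, with a hypothetical six-item, two-attribute Q-matrix: responses are summarized by attribute-wise sum-scores and then partitioned with a plain K-means. The mastery response probabilities (.9 for masters, .15 for non-masters) are invented for the simulation, and the deterministic centroid initialization is a simplification.

```python
import random

random.seed(3)
# Hypothetical Q-matrix: which of two attributes each of six items requires.
Q = [[1, 0], [1, 0], [0, 1], [0, 1], [1, 1], [1, 1]]

def attribute_sum_scores(resp):
    """Per-attribute sum-scores: total score on the items requiring each attribute."""
    return tuple(sum(r * q[k] for r, q in zip(resp, Q)) for k in range(2))

def simulate(master, n):
    """Noisy responses: masters answer correctly w.p. .9, non-masters w.p. .15."""
    pts = []
    for _ in range(n):
        resp = [1 if random.random() < (0.9 if master else 0.15) else 0 for _ in Q]
        pts.append(attribute_sum_scores(resp))
    return pts

points = simulate(True, 30) + simulate(False, 30)

# Plain K-means (k = 2) on the sum-score vectors, initialized at the
# lowest- and highest-scoring profiles.
cents = [list(min(points, key=sum)), list(max(points, key=sum))]
for _ in range(20):
    clusters = [[], []]
    for pt in points:
        d = [sum((a - c) ** 2 for a, c in zip(pt, cent)) for cent in cents]
        clusters[d.index(min(d))].append(pt)
    cents = [[sum(dim) / len(cl) for dim in zip(*cl)] if cl else cents[j]
             for j, cl in enumerate(clusters)]
print([len(c) for c in clusters], [[round(v, 1) for v in c] for c in cents])
```

With well-separated mastery profiles, the two recovered centroids sit near the expected masters' and non-masters' sum-score profiles, which is the intuition behind using clustering in place of a parametric diagnostic model.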

12.
Nested logit item response models for multiple-choice data are presented. Relative to previous models, the new models are suggested to provide a better approximation to multiple-choice items where the application of a solution strategy precedes consideration of response options. In practice, the models also accommodate collapsibility across all distractor categories, making it easier to allow decisions about including distractor information to occur on an item-by-item or application-by-application basis without altering the statistical form of the correct response curves. Marginal maximum likelihood estimation algorithms for the models are presented along with simulation and real data analyses.
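A sketch of the two-stage structure such models imply: the correct response follows a 2PL, and, conditional on an incorrect response, a distractor is chosen with multinomial-logit probabilities. For brevity the distractor logits here are held constant, whereas the actual models can let them depend on the latent trait; all parameter values and the helper name are made up.

```python
import math

def nested_logit_probs(theta, a, b, zetas):
    """Response-category probabilities for a multiple-choice item under a
    nested logit structure: P(correct) follows a 2PL; conditional on an
    incorrect response, distractor k is chosen with multinomial-logit
    probability based on its parameter zetas[k]."""
    p_correct = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    z = [math.exp(zk) for zk in zetas]
    total = sum(z)
    return [p_correct] + [(1 - p_correct) * zk / total for zk in z]

# Probabilities for the key (first entry) and three distractors.
probs = nested_logit_probs(theta=0.0, a=1.2, b=-0.3, zetas=[0.5, 0.0, -0.5])
print([round(p, 3) for p in probs])
```

Collapsing over the distractor categories simply returns the 2PL correct-response curve, which is the "collapsibility" property the abstract highlights: distractor information can be kept or dropped without changing the form of the correct-response model.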

13.
Generating items during testing: Psychometric issues and models
On-line item generation is becoming increasingly feasible for many cognitive tests. Item generation seemingly conflicts with the well-established principle of measuring persons with items of known psychometric properties. This paper examines psychometric principles and models required for measurement from on-line item generation. Three psychometric issues are elaborated for item generation. First, design principles to generate items are considered. A cognitive design system approach is elaborated and then illustrated with an application to a test of abstract reasoning. Second, psychometric models for calibrating generating principles, rather than specific items, are required. Existing item response theory (IRT) models are reviewed, and a new IRT model that includes the impact on item discrimination, as well as difficulty, is developed. Third, the impact of item parameter uncertainty on person estimates is considered. Results from both fixed-content and adaptive testing are presented.

This article is based on the Presidential Address Susan E. Embretson gave on June 26, 1999 at the 1999 Annual Meeting of the Psychometric Society held at the University of Kansas in Lawrence, Kansas. —Editor

14.
Although multistage testing (MST) retains the advantages of adaptive testing while allowing test developers to assemble each module and panel under specified constraints, local item dependence (LID), which can arise when potentially relevant factors are overlooked during test construction, can harm MST results. To investigate the harm LID does to MST, this study first introduces MST, LID, and related concepts. A simulation study then shows that LID reduces the precision of examinees' ability estimates, although the estimation bias remains small, and that this harm is not limited to any particular routing rule. To mitigate the harm, the testlet response model was then used as the analysis model during MST administration; the results show that this approach removes only part of the harm. These findings indicate, on the one hand, that the damage LID does to the precision of ability estimation in MST deserves attention, and, on the other hand, that methods for eliminating LID-induced harm in MST merit further research.

15.
Differential item functioning (DIF) is a pernicious statistical issue that can mask true group differences on a target latent construct. A considerable amount of research has focused on evaluating methods for testing DIF, such as likelihood ratio tests in item response theory (IRT). Most of this research has focused on the asymptotic properties of DIF testing, in part because many latent variable methods require large samples to obtain stable parameter estimates. Much less research has evaluated these methods in small samples, despite the fact that many social and behavioral scientists frequently encounter small samples in practice. In this article, we examine the extent to which model complexity (the number of model parameters estimated simultaneously) affects the recovery of DIF in small samples. We compare three models that vary in complexity: logistic regression with sum scores, the 1-parameter logistic IRT model, and the 2-parameter logistic IRT model. We expected that logistic regression with sum scores and the 1-parameter logistic IRT model would more accurately estimate DIF because these models yield more stable estimates, despite being misspecified. Indeed, a simulation study and an empirical example of adolescent substance use show that, even when data are generated from, and assumed to follow, a 2-parameter logistic IRT model, using parsimonious models in small samples leads to more powerful tests of DIF while adequately controlling the Type I error rate. We also provide evidence for minimum sample sizes needed to detect DIF, and we evaluate whether applying corrections for multiple testing is advisable. Finally, we provide recommendations for applied researchers who conduct DIF analyses in small samples.
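A minimal sketch of the most parsimonious of the compared approaches, logistic-regression DIF with sum scores: the studied item is regressed on a matching score and a group indicator, and a likelihood-ratio statistic compares the models with and without the group term. The data, effect sizes, and the plain gradient-ascent fitter are all illustrative inventions, not the article's procedure.

```python
import math
import random

def fit_logistic(X, y, w0=None, lr=0.5, iters=2000):
    """Logistic regression fit by plain gradient ascent on the average
    log-likelihood (adequate for this small illustration)."""
    w = list(w0) if w0 is not None else [0.0] * len(X[0])
    n = len(y)
    for _ in range(iters):
        grad = [0.0] * len(w)
        for xi, yi in zip(X, y):
            p = 1.0 / (1.0 + math.exp(-sum(wj * xj for wj, xj in zip(w, xi))))
            for j, xj in enumerate(xi):
                grad[j] += (yi - p) * xj
        w = [wj + lr * gj / n for wj, gj in zip(w, grad)]
    return w

def loglik(X, y, w):
    ll = 0.0
    for xi, yi in zip(X, y):
        p = 1.0 / (1.0 + math.exp(-sum(wj * xj for wj, xj in zip(w, xi))))
        ll += yi * math.log(p) + (1 - yi) * math.log(1 - p)
    return ll

random.seed(11)
n = 300
group = [i % 2 for i in range(n)]                   # 0 = reference, 1 = focal
theta = [random.gauss(0, 1) for _ in range(n)]
# Matching variable: a noisy rest score on the other items, rescaled to [0, 1].
rest = [max(0, min(10, round(5 + 2 * t + random.gauss(0, 1)))) / 10 for t in theta]
# Studied item shows uniform DIF: 0.8 logits harder for the focal group.
y = [1 if random.random() < 1 / (1 + math.exp(-(t - 0.3 - 0.8 * g))) else 0
     for t, g in zip(theta, group)]

X_red = [[1.0, r] for r in rest]                    # intercept + sum score
X_full = [[1.0, r, float(g)] for r, g in zip(rest, group)]
w_red = fit_logistic(X_red, y)
# Start the full model at the reduced solution so its likelihood cannot decrease.
w_full = fit_logistic(X_full, y, w0=w_red + [0.0])
lr_stat = 2 * (loglik(X_full, y, w_full) - loglik(X_red, y, w_red))
print(round(w_full[2], 2), round(lr_stat, 2))
```

The likelihood-ratio statistic is referred to a chi-square distribution with one degree of freedom; only three parameters are estimated in the full model, which is what makes this approach attractive when the sample is small.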

16.
We designed this study to evaluate several data collection and equating designs in the context of item response theory (IRT) equating. The random-groups design and the common-item design have been widely used for collecting data for IRT equating. In this study, we investigated four equating methods based upon these two data collection designs, using empirical data from a number of different testing programs. When the randomly-equivalent-groups assumption was reasonably met, the four equating methods tended to produce highly comparable results; when it was not, methods based upon the two equating designs produced dissimilar results. Sample size can have differential effects on the equating results produced by the different equating methods. In practice, a common-item equivalent-groups design often produces unacceptably large differences in the group means due to various anomalies, such as context effects, poor quality of common items, or a very small number of common items. In such cases, a random-groups design would produce more stable equating results.

17.
In the mirror effect, there are fewer false negatives (misses) and false positives (false alarms) for rare (low-frequency) words than for common (high-frequency) words. In the spacing effect, recognition accuracy is positively related to the interval (spacing or lag) between two presentations of an item. These effects are related in that they are both manifestations of a leapfrog effect (a weaker item jumps over a stronger item). They seem to be puzzles for traditional strength theory and at least some current global-matching models. A computational strength-based model (EICL) is proposed that incorporates excitation, inhibition, and a closed-loop learning algorithm. The model consists of three nonlinear coupled stochastic difference equations, one each for excitation (x), inhibition (y), and context (z). Strength is the algebraic sum (i.e., s = x − y + z). These equations are used to form a toy lexicon that serves as a basis for the experimental manipulations. The model can simulate the mirror-effect forced-choice inequalities and the spacing effect for single-item recognition; all parameters are random variables, and the same parameter values are used for both the mirror and the spacing effects. No parameter values varied with the independent variables (word frequency for the mirror effect, lag for the spacing effect), so the model, not the parameters, is doing the work.

18.
It is common practice in IRT to consider items as fixed and persons as random. Both continuous and categorical person parameters are most often random variables, whereas for items only continuous parameters are used, and they are commonly of the fixed type, although exceptions occur. It is shown in the present article that random item parameters make sense theoretically, and that in practice the random item approach is promising for handling several issues, such as the measurement of persons, the explanation of item difficulties, and troubleshooting with respect to DIF. Corresponding to these issues, three parts are included. All three rely on the Rasch model as the simplest model to study, and the same data set is used for all applications. First, it is shown that the Rasch model with fixed persons and random items is an interesting measurement model, both in theory and in terms of goodness of fit. Second, the linear logistic test model with an error term is introduced, so that the explanation of the item difficulties based on the item properties does not need to be perfect. Finally, two more models are presented: the random item profile model (RIP) and the random item mixture model (RIM). In the RIP, DIF is not considered a discrete phenomenon, and when a robust regression approach based on the RIP difficulties is applied, quite good DIF identification results are obtained. In the RIM, no prior anchor sets are defined, but instead a latent DIF class of items is used, so that posterior anchoring is realized (anchoring based on the item mixture). It is shown that both approaches are promising for the identification of DIF.

19.
The use of multilevel modeling is presented as an alternative to separate item and subject ANOVAs (F1 × F2) in psycholinguistic research. Multilevel modeling is commonly utilized to model variability arising from the nesting of lower-level observations within higher-level units (e.g., students within schools, repeated measures within individuals). However, multilevel models can also be used when two random factors are crossed at the same level, rather than nested. The current work illustrates the use of the multilevel model for crossed random effects within the context of a psycholinguistic experimental study, in which both subjects and items are modeled as random effects within the same analysis, thus avoiding some of the problems plaguing current approaches.

20.
We examined the aftermath of accessing and retrieving a subset of information stored in visual working memory (VWM)—namely, whether detection of a mismatch between memory and perception can impair the original memory of an item while triggering recognition-induced forgetting for the remaining, untested items. For this purpose, we devised a consecutive-change detection task wherein two successive testing probes were displayed after a single set of memory items. Across two experiments utilizing different memory-testing methods (whole vs. single probe), we observed a reliable pattern of poor performance in change detection for the second test when the first test had exhibited a color change. The impairment after a color change was evident even when the same memory item was repeatedly probed; this suggests that an attention-driven, salient visual change made it difficult to reinstate the previously remembered item. The second change detection, for memory items untested during the first change detection, was also found to be inaccurate, indicating that recognition-induced forgetting had occurred for the unprobed items in VWM. In a third experiment, we conducted a task that involved change detection plus continuous recall, wherein a memory recall task was presented after the change detection task. The analyses of the distributions of recall errors with a probabilistic mixture model revealed that the memory impairments from both visual changes and recognition-induced forgetting are explained better by the stochastic loss of memory items than by their degraded resolution. These results indicate that attention-driven visual change and recognition-induced forgetting jointly influence the “recycling” of VWM representations.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号