11.
We illustrate a class of multidimensional item response theory models in which the items are allowed to have different discriminating power and the latent traits are represented through a vector having a discrete distribution. We also show how the hypothesis of unidimensionality may be tested against a specific bidimensional alternative by using a likelihood ratio statistic between two nested models in this class. To this end, we also derive an asymptotically equivalent Wald test statistic which is faster to compute. Moreover, we propose a hierarchical clustering algorithm which can be used, when the dimensionality of the latent structure is completely unknown, for dividing items into groups referring to different latent traits. The approach is illustrated through a simulation study and an application to a dataset collected within the National Assessment of Educational Progress, 1996. The author would like to thank the Editor, an Associate Editor, and three anonymous referees for stimulating comments. I also thank L. Scaccia, F. Pennoni, and M. Lupparelli for carrying out part of the simulations.
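The likelihood ratio comparison of the two nested fits can be sketched as follows. This is a minimal illustration, not the authors' implementation: the log-likelihood values are hypothetical, and for simplicity it assumes the bidimensional alternative adds a single free parameter (df = 1), for which the chi-square tail probability has a closed form via the complementary error function.

```python
import math

def lr_test_unidimensional(ll_uni, ll_bi, df=1):
    """Likelihood-ratio statistic for the unidimensional model (ll_uni)
    nested in the bidimensional one (ll_bi)."""
    stat = 2.0 * (ll_bi - ll_uni)
    # chi-square survival function; closed form used here only for df = 1
    p = math.erfc(math.sqrt(stat / 2.0)) if df == 1 else None
    return stat, p

# Hypothetical maximized log-likelihoods from the two nested fits
stat, p = lr_test_unidimensional(-1050.0, -1046.0)
```

With these illustrative values the statistic is 8.0 and unidimensionality would be rejected at the 5% level.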
12.
Bayes modal estimation in item response models
This article describes a Bayesian framework for estimation in item response models, with two-stage prior distributions on both item and examinee populations. Strategies for point and interval estimation are discussed, and a general procedure based on the EM algorithm is presented. Details are given for implementation under one-, two-, and three-parameter binary logistic IRT models. Novel features include minimally restrictive assumptions about examinee distributions and the exploitation of dependence among item parameters in a population of interest. Improved estimation in a moderately small sample is demonstrated with simulated data. This research was supported by a grant from the Spencer Foundation, Chicago, IL. Comments and suggestions on earlier drafts by Charles Lewis, Frederic Lord, Rosenbaum, James Ramsey, Hiroshi Watanabe, the editor, and two anonymous referees are gratefully acknowledged.
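The "Bayes modal" idea, maximizing a posterior rather than a likelihood, can be sketched in a deliberately tiny case: a single Rasch item difficulty with a standard normal prior, treating the examinee abilities as known. The article's two-stage procedure estimates far more than this; the responses and abilities below are made up.

```python
import math

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

def bayes_modal_difficulty(responses, thetas, prior_var=1.0, iters=50):
    """Newton iteration to the posterior mode of a Rasch difficulty b,
    with a N(0, prior_var) prior and the abilities `thetas` taken as known."""
    b = 0.0
    for _ in range(iters):
        probs = [logistic(t - b) for t in thetas]
        # gradient of the log posterior in b: sum(p - y) - b / prior_var
        grad = sum(p - y for p, y in zip(probs, responses)) - b / prior_var
        # second derivative: -sum(p(1-p)) - 1 / prior_var (always negative)
        hess = -sum(p * (1.0 - p) for p in probs) - 1.0 / prior_var
        b -= grad / hess
    return b

# Hypothetical responses of four examinees with known abilities
b_hat = bayes_modal_difficulty([0, 1, 0, 1], [-1.0, 0.0, 1.0, 2.0])
```

The prior term keeps the mode finite even when the raw likelihood would push the difficulty to infinity (e.g., an item everyone answered correctly).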
13.
The EM algorithm is a popular iterative method for estimating parameters in the latent class model, where at each step the unknown parameters can be estimated simply as weighted sums of some latent proportions. The algorithm may also be used when some parameters are constrained to equal given constants or each other. It is shown that in the general case with equality constraints, the EM algorithm is not simple to apply because a nonlinear equation has to be solved. This problem arises, mainly, when equality constraints are defined over probabilities in different combinations of variables and latent classes. A simple condition is given under which, although probabilities in different variable-latent class combinations are constrained to be equal, the EM algorithm is still simple to apply. The authors are grateful to the Editor and the anonymous reviewers for their helpful comments on an earlier draft of this paper. C. C. Clogg and R. Luijkx are also acknowledged for verifying our results with their computer programs MLLSA and LCAG, respectively.
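The unconstrained case the abstract contrasts with, where every M-step update is a closed-form weighted sum, can be sketched for binary items. This is an illustrative implementation on synthetic data, not the constrained setting the paper analyzes.

```python
import random

def em_latent_class(data, n_classes=2, iters=200, seed=0):
    """EM for an unconstrained latent class model with binary items: the
    M-step is a closed-form weighted sum of posterior class memberships."""
    rng = random.Random(seed)
    n_items = len(data[0])
    pi = [1.0 / n_classes] * n_classes                       # class proportions
    rho = [[rng.uniform(0.3, 0.7) for _ in range(n_items)]   # item probabilities
           for _ in range(n_classes)]
    for _ in range(iters):
        # E-step: posterior probability of each class given each response pattern
        post = []
        for y in data:
            w = []
            for c in range(n_classes):
                lik = pi[c]
                for j, yj in enumerate(y):
                    lik *= rho[c][j] if yj else (1.0 - rho[c][j])
                w.append(lik)
            s = sum(w)
            post.append([wi / s for wi in w])
        # M-step: weighted sums only -- no nonlinear equation in this case
        for c in range(n_classes):
            nc = sum(p[c] for p in post)
            pi[c] = nc / len(data)
            for j in range(n_items):
                rho[c][j] = sum(p[c] * y[j] for p, y in zip(post, data)) / nc
    return pi, rho

# Perfectly separable synthetic data: two response patterns, ten copies each
pi, rho = em_latent_class([(1, 1, 1)] * 10 + [(0, 0, 0)] * 10)
```

With equality constraints across variable-latent class combinations, the rho update above would no longer be available in closed form, which is the complication the paper addresses.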
14.
We address several issues that are raised by Bentler and Tanaka's [1983] discussion of Rubin and Thayer [1982]. Our conclusions are: standard methods do not completely monitor the possible existence of multiple local maxima; summarizing inferential precision by the standard output based on second derivatives of the log likelihood at a maximum can be inappropriate, even if there exists a unique local maximum; and EM and LISREL can be viewed as complementary, albeit not entirely adequate, tools for factor analysis. This work was partially supported by the Program Statistics Research Project at Educational Testing Service.
15.
Consider the class of two-parameter marginal logistic (Rasch) models, for a test of m True-False items, where the latent ability is assumed to be bounded. Using results of Karlin and Studden, we show that this class of nonparametric marginal logistic (NML) models is equivalent to the class of marginal logistic models where the latent ability assumes at most (m + 2)/2 values. This equivalence has two implications. First, estimation for the NML model is accomplished by estimating the parameters of a discrete marginal logistic model. Second, consistency for the maximum likelihood estimates of the NML model can be shown (when m is odd) using the results of Kiefer and Wolfowitz. An example is presented which demonstrates the estimation strategy and contrasts the NML model with a normal marginal logistic model. This research was supported by NIMH training grant 2 T32 MH 15758-06 and by ONR contract N00014-84-K-0588. The author would like to thank Diane Lambert, John Rolph, and Stephen Fienberg for their assistance. Also, the comments of the referees helped to substantially improve the final version of this paper.
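Under a discrete ability distribution, the marginal probability of a response pattern is a finite sum over the support points, which is what makes estimation of the discrete marginal logistic model tractable. A sketch, with made-up item difficulties and a two-point ability distribution:

```python
import math

def marginal_pattern_prob(pattern, difficulties, support, mass):
    """Marginal probability of a True-False response pattern under a Rasch
    model whose latent ability puts probability mass[k] on value support[k]."""
    total = 0.0
    for theta, w in zip(support, mass):
        lik = w
        for y, b in zip(pattern, difficulties):
            p = 1.0 / (1.0 + math.exp(-(theta - b)))
            lik *= p if y else (1.0 - p)
        total += lik
    return total

# Sanity check: the 2^3 pattern probabilities must sum to one
patterns = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
total = sum(marginal_pattern_prob(pt, [-1.0, 0.0, 1.0], [-1.0, 1.0], [0.5, 0.5])
            for pt in patterns)
```

The paper's result says that for the NML class, no more than (m + 2)/2 such support points are ever needed.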
16.
A weighted Euclidean distance model for analyzing three-way proximity data is proposed that incorporates a latent class approach. In this latent class weighted Euclidean model, the contribution to the distance function between two stimuli is, per dimension, weighted identically by all subjects in the same latent class. This model removes the rotational invariance of the classical multidimensional scaling model while retaining psychologically meaningful dimensions, and drastically reduces the number of parameters of the traditional INDSCAL model. The probability density function for the data of a subject is posited to be a finite mixture of spherical multivariate normal densities. The maximum likelihood function is optimized by means of an EM algorithm; a modified Fisher scoring method is used to update the parameters in the M-step. A model selection strategy is proposed and illustrated on both real and artificial data. The second author is supported as Bevoegdverklaard Navorser of the Belgian Nationaal Fonds voor Wetenschappelijk Onderzoek.
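The class-level weighting can be written down directly: one weight per dimension, shared by all subjects in a latent class, in place of INDSCAL's per-subject weights. The coordinates and weights below are invented for illustration.

```python
def class_weighted_distance(xi, xj, w):
    """Distance between stimulus coordinates xi and xj, with each dimension's
    squared difference weighted by the latent class's weight vector w."""
    return sum(wd * (a - b) ** 2 for wd, a, b in zip(w, xi, xj)) ** 0.5

# Equal weights reduce to the plain Euclidean distance
d_equal = class_weighted_distance((0.0, 0.0), (3.0, 4.0), (1.0, 1.0))
# Unequal weights stretch one dimension relative to the other
d_weighted = class_weighted_distance((0.0, 0.0), (3.0, 4.0), (4.0, 1.0))
```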
17.
A multidimensional unfolding model is developed that assumes that the subjects can be clustered into a small number of homogeneous groups or classes. The subjects that belong to the same group are represented by a single ideal point. Since it is not known in advance to which group or class a subject belongs, a mixture distribution model is formulated that can be considered as a latent class model for continuous single-stimulus preference ratings. A GEM algorithm is described for estimating the parameters in the model. The M-step of the algorithm is based on a majorization procedure for updating the estimates of the spatial model parameters. A strategy for selecting the appropriate number of classes and the appropriate number of dimensions is proposed and fully illustrated on some artificial data. The latent class unfolding model is applied to political science data concerning party preferences from members of the Dutch Parliament. Finally, some possible extensions of the model are discussed. The first author is supported as Bevoegdverklaard Navorser of the Belgian Nationaal Fonds voor Wetenschappelijk Onderzoek. Part of this paper was presented at the Distancia meeting held in Rennes, France, June 1992.
18.
In this paper, we explore the use of the stochastic EM algorithm (Celeux & Diebolt (1985) Computational Statistics Quarterly, 2, 73) for large-scale full-information item factor analysis. Innovations have been made on its implementation, including an adaptive-rejection-based Gibbs sampler for the stochastic E step, a proximal gradient descent algorithm for the optimization in the M step, and diagnostic procedures for determining the burn-in size and the stopping of the algorithm. These developments are based on the theoretical results of Nielsen (2000, Bernoulli, 6, 457), as well as advanced sampling and optimization techniques. The proposed algorithm is computationally efficient and virtually tuning-free, making it scalable to large-scale data with many latent traits (e.g. more than five latent traits) and easy to use for practitioners. Standard errors of parameter estimation are also obtained based on the missing-information identity (Louis, 1982, Journal of the Royal Statistical Society, Series B, 44, 226). The performance of the algorithm is evaluated through simulation studies and an application to the analysis of the IPIP-NEO personality inventory. Extensions of the proposed algorithm to other latent variable models are discussed.
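The defining move of stochastic EM is that the E-step *draws* the latent values instead of integrating them out, after which the M-step is an ordinary complete-data update. A sketch on a two-component normal mixture with unit variances, far simpler than the item factor model in the paper; the data are made up.

```python
import math
import random

def stochastic_em_mixture(x, iters=300, seed=1):
    """Stochastic EM for a two-component normal mixture with unit variances:
    the E-step samples a latent label for each observation, and the M-step
    is the complete-data maximum likelihood update given those labels."""
    rng = random.Random(seed)
    mu = [min(x), max(x)]  # crude but order-preserving initialization
    p = 0.5
    for _ in range(iters):
        # Stochastic E-step: draw a component label for every observation
        labels = []
        for xi in x:
            w1 = p * math.exp(-0.5 * (xi - mu[1]) ** 2)
            w0 = (1.0 - p) * math.exp(-0.5 * (xi - mu[0]) ** 2)
            labels.append(1 if rng.random() < w1 / (w0 + w1) else 0)
        # M-step: means and mixing proportion from the imputed labels
        for c in (0, 1):
            grp = [xi for xi, l in zip(x, labels) if l == c]
            if grp:
                mu[c] = sum(grp) / len(grp)
        p = sum(labels) / len(x)
    return mu, p

# Made-up, well-separated data around -3 and +3
data = [-3.0, -2.9, -3.1, 3.0, 2.9, 3.1] * 5
mu, p = stochastic_em_mixture(data)
```

In the paper the sampling step is a Gibbs sampler over latent traits rather than a direct label draw, but the alternation of simulation and maximization is the same.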
19.
Although the Bock–Aitkin likelihood-based estimation method for factor analysis of dichotomous item response data has important advantages over classical analysis of item tetrachoric correlations, a serious limitation of the method is its reliance on fixed-point Gauss-Hermite (G-H) quadrature in the solution of the likelihood equations and likelihood-ratio tests. When the number of latent dimensions is large, computational considerations require that the number of quadrature points per dimension be few. But with large numbers of items, the dispersion of the likelihood, given the response pattern, becomes so small that the likelihood cannot be accurately evaluated with the sparse fixed points in the latent space. In this paper, we demonstrate that substantial improvement in accuracy can be obtained by adapting the quadrature points to the location and dispersion of the likelihood surfaces corresponding to each distinct pattern in the data. In particular, we show that adaptive G-H quadrature, combined with mean and covariance adjustments at each iteration of an EM algorithm, produces an accurate fast-converging solution with as few as two points per dimension. Evaluations of this method with simulated data are shown to yield accurate recovery of the generating factor loadings for models of up to eight dimensions. Unlike an earlier application of adaptive Gibbs sampling to this problem by Meng and Schilling, the simulations also confirm the validity of the present method in calculating likelihood-ratio chi-square statistics for determining the number of factors required in the model. Finally, we apply the method to a sample of real data from a test of teacher qualifications.
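The adaptation step, shifting and rescaling the quadrature nodes to the location and spread of each pattern's likelihood, can be shown in one dimension. In this sketch (not the authors' implementation) a narrow Gaussian stands in for the likelihood, so the product with a standard normal prior has a closed-form integral against which both the fixed and the adapted two-point rules can be checked.

```python
import math

def gh2(h, m=0.0, s=1.0):
    """Two-point Gauss-Hermite approximation of the integral of h over the
    real line, with nodes shifted by m and scaled by s (m=0, s=1 is the
    fixed, non-adaptive rule)."""
    nodes = [-1.0 / math.sqrt(2.0), 1.0 / math.sqrt(2.0)]
    weight = math.sqrt(math.pi) / 2.0
    total = 0.0
    for x in nodes:
        t = m + math.sqrt(2.0) * s * x
        # change of variables: dt = sqrt(2) s dx, and exp(x^2) undoes the G-H kernel
        total += weight * math.exp(x * x) * h(t) * math.sqrt(2.0) * s
    return total

# Standard normal prior times a narrow Gaussian "likelihood" centred at 2
mu, sig = 2.0, 0.3
def h(t):
    prior = math.exp(-0.5 * t * t) / math.sqrt(2.0 * math.pi)
    lik = math.exp(-0.5 * (t - mu) ** 2 / sig ** 2)
    return prior * lik

# Closed-form value of the integral of h
exact = sig / math.sqrt(1.0 + sig ** 2) * math.exp(-0.5 * mu ** 2 / (1.0 + sig ** 2))

# Adapt the nodes to the posterior mean and standard deviation
post_m = mu / (1.0 + sig ** 2)
post_s = math.sqrt(sig ** 2 / (1.0 + sig ** 2))
adaptive = gh2(h, post_m, post_s)
fixed = gh2(h)  # non-adaptive nodes sit near 0, far from the likelihood's mass
```

With the nodes adapted, two points recover the integral essentially exactly, while the fixed rule misses nearly all of the mass, which is the failure mode the abstract describes.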
20.
The full information item factor (FIIF) model is very useful for analyzing relations among dichotomous variables. In this article, we present a feasible procedure to assess the local influence of minor perturbations for identifying influential aspects of the FIIF model. The development is based on a Q-displacement function which is closely related to the Monte Carlo EM algorithm in the ML estimation. In the E-step of this algorithm, the conditional expectations are approximated by sample means of observations simulated by the Gibbs sampler from the appropriate conditional distributions. It turns out that these observations can be utilized for computing the building blocks of the proposed diagnostic measures. The diagnoses are based on the conformal normal curvature, which can be computed easily. A number of interesting perturbation schemes are considered. The methodology is illustrated with two real examples. The research is fully supported by a grant (CUHK 4356/00H) from the Research Grants Council of the Hong Kong Special Administrative Region. The authors are thankful to the Editor, Associate Editor, anonymous reviewers, and W.Y. Poon for valuable comments for improving the paper, and to ICPSR and the relevant funding agency for allowing us to use their data. The assistance of Michael Leung and Esther Tam is gratefully acknowledged.