51.
In a broad class of item response theory (IRT) models for dichotomous items, the unweighted total score has monotone likelihood ratio (MLR) in the latent trait. In this study, it is shown that for polytomous items MLR holds for the partial credit model and a trivial generalization of this model. MLR does not necessarily hold if the slopes of the item step response functions vary over items, item steps, or both. MLR holds neither for Samejima's graded response model nor for nonparametric versions of these three polytomous models. These results are surprising in the context of Grayson's and Huynh's results on MLR for nonparametric dichotomous IRT models, and suggest that establishing stochastic ordering properties for nonparametric polytomous IRT models will be much harder. Hemker's research was supported by the Netherlands Research Council, Grant 575-67-034. Junker's research was supported in part by the National Institutes of Health, Grant CA54852, and by the National Science Foundation, Grant DMS-94.04438.
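As a rough numerical illustration of the MLR property for the partial credit model (a sketch, not the authors' derivation; the three items and their step difficulties are hypothetical), one can compute the total-score distribution at two trait values and check that their ratio is nondecreasing in the score:

```python
import numpy as np

def pcm_item_probs(theta, deltas):
    """Category probabilities for one partial credit item;
    deltas are the step difficulties (m steps -> categories 0..m)."""
    steps = np.concatenate(([0.0], np.cumsum(theta - np.asarray(deltas))))
    p = np.exp(steps - steps.max())
    return p / p.sum()

def total_score_dist(theta, items):
    """Distribution of the unweighted total score, obtained by
    convolving the independent item-score distributions."""
    dist = np.array([1.0])
    for deltas in items:
        dist = np.convolve(dist, pcm_item_probs(theta, deltas))
    return dist

items = [[-0.5, 0.4], [0.0, 1.0], [-1.2, 0.3]]   # hypothetical step difficulties
ratio = total_score_dist(1.0, items) / total_score_dist(-0.5, items)
print(np.all(np.diff(ratio) >= -1e-12))  # True: ratio monotone in the score
```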
52.
Gert Storms, Psychometrika, 1995, 60(2): 247-258
A Monte Carlo study was conducted to investigate the robustness of the assumed error distribution in maximum likelihood estimation models for multidimensional scaling. Data sets generated according to the lognormal, the normal, and the rectangular distribution were analysed with the lognormal error model in Ramsay's MULTISCALE program package. The results show that violations of the assumed error distribution have virtually no effect on the estimated distance parameters. In a comparison among several dimensionality tests, the corrected version of the χ² test, as proposed by Ramsay, yielded the best results and turned out to be quite robust against violations of the error model.
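The data-generation side of such a Monte Carlo design can be sketched as follows (a minimal illustration, not the study's actual code; the configuration, error level, and the clipping of additive errors to keep dissimilarities positive are assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

def noisy_dissimilarities(X, error="lognormal", sigma=0.2):
    """True inter-point distances of configuration X (n x d),
    perturbed by one of three error models."""
    diff = X[:, None, :] - X[None, :, :]
    d = np.sqrt((diff ** 2).sum(-1))
    d = d[np.triu_indices_from(d, k=1)]           # upper-triangle pairs
    if error == "lognormal":                      # multiplicative error
        return d * rng.lognormal(0.0, sigma, d.shape)
    if error == "normal":                         # additive Gaussian error
        return np.clip(d + rng.normal(0.0, sigma, d.shape), 1e-8, None)
    if error == "rectangular":                    # additive uniform error
        return np.clip(d + rng.uniform(-sigma, sigma, d.shape), 1e-8, None)
    raise ValueError(error)

X = rng.normal(size=(10, 2))                      # hypothetical 10 stimuli in 2-D
delta = noisy_dissimilarities(X, "rectangular")   # then fit with the lognormal model
```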
53.
Yutaka Kano, Psychometrika, 1990, 55(2): 277-291
Based on the usual factor analysis model, this paper investigates the relationship between improper solutions and the number of factors, and discusses the properties of the noniterative estimation method of Ihara and Kano in exploratory factor analysis. The consistency of the Ihara and Kano estimator is shown to hold even for an overestimated number of factors, which provides a theoretical basis for the rare occurrence of improper solutions and for a new method of choosing the number of factors. A comparative study of their estimator and the maximum likelihood estimator is carried out by a Monte Carlo experiment. The author would like to express his thanks to Masashi Okamoto and Masamori Ihara for helpful comments, and to the editor and referees for critically reading the earlier versions and making many valuable suggestions. He also thanks Shigeo Aki for his comments on physical random numbers.
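For the one-factor case, the noniterative idea reduces to a closed-form estimate of each unique variance from sample covariances; the following sketch (with a hypothetical loading pattern, not the paper's Monte Carlo design) shows the recovery:

```python
import numpy as np

def ihara_kano_unique_variance(S, i, j, k):
    """Noniterative unique-variance estimate for variable i under a
    one-factor model (the simplest Ihara-Kano case):
    psi_i = s_ii - s_ij * s_ik / s_jk, with j, k distinct from i."""
    return S[i, i] - S[i, j] * S[i, k] / S[j, k]

rng = np.random.default_rng(0)
lam = np.array([0.8, 0.7, 0.6, 0.5])              # hypothetical loadings
psi = np.array([0.36, 0.51, 0.64, 0.75])          # unique variances
f = rng.normal(size=5000)
X = np.outer(f, lam) + rng.normal(size=(5000, 4)) * np.sqrt(psi)
S = np.cov(X, rowvar=False)
print(ihara_kano_unique_variance(S, 0, 1, 2))     # close to 0.36
```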
54.
Given known item parameters, unbiased estimators are derived i) for an examinee's ability parameter and for his proportion-correct true score, ii) for the variances of these estimates across examinees in the group tested, and iii) for the parallel-forms reliability of the maximum likelihood ability estimator. This work was supported in part by contract N00014-80-C-0402, project designation NR 150-453, between the Office of Naval Research and Educational Testing Service. Reproduction in whole or in part is permitted for any purpose of the United States Government.
55.
We address several issues that are raised by Bentler and Tanaka's [1983] discussion of Rubin and Thayer [1982]. Our conclusions are: standard methods do not completely monitor the possible existence of multiple local maxima; summarizing inferential precision by the standard output based on second derivatives of the log likelihood at a maximum can be inappropriate, even if there exists a unique local maximum; EM and LISREL can be viewed as complementary, albeit not entirely adequate, tools for factor analysis. This work was partially supported by the Program Statistics Research Project at Educational Testing Service.
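The point about monitoring multiple local maxima can be illustrated with a generic multi-start check: fit the same one-factor ML objective from several random starting points and compare the converged function values (a schematic sketch of the monitoring idea, not the EM or LISREL runs discussed in the paper):

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik(params, S, p):
    """ML discrepancy for a one-factor model: log|Sigma| + tr(S Sigma^-1),
    with Sigma = lam lam' + diag(psi); psi is parameterized on the log scale."""
    lam, psi = params[:p], np.exp(params[p:])
    Sigma = np.outer(lam, lam) + np.diag(psi)
    sign, logdet = np.linalg.slogdet(Sigma)
    return logdet + np.trace(S @ np.linalg.inv(Sigma))

rng = np.random.default_rng(2)
p = 4
S = np.cov(rng.normal(size=(200, p)), rowvar=False)   # hypothetical sample
fits = [minimize(neg_loglik, rng.normal(size=2 * p), args=(S, p))
        for _ in range(10)]                            # ten random starts
print(sorted(round(f.fun, 6) for f in fits))  # distinct values flag local maxima
```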
56.
Probabilistic multidimensional scaling: Complete and incomplete data
Simple procedures are described for obtaining maximum likelihood estimates of the location and uncertainty parameters of the Hefner model. This model is a probabilistic, multidimensional scaling model, which assigns a multivariate normal distribution to each stimulus point. It is shown that for such a model, standard nonmetric and metric algorithms are not appropriate. A procedure is also described for constructing incomplete data sets, by taking into consideration the degree of familiarity the subject has for each stimulus. Maximum likelihood estimates are developed both for complete and incomplete data sets. This research was supported by National Science Foundation Grant No. SOC76-20517. The first author would especially like to express his gratitude to the Netherlands Institute for Advanced Study for its very substantial help with this research.
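A sketch of why standard algorithms are inappropriate here: under the Hefner model each judged dissimilarity is the distance between random draws from the two stimulus distributions, so observed dissimilarities are biased upward relative to the true inter-point distance (the locations and uncertainty values below are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(3)

def hefner_dissimilarity(mu_i, mu_j, s_i, s_j, r=2, n_rep=1):
    """Simulated judgments under the Hefner model: each stimulus is a
    spherical normal in r dimensions, and each judged dissimilarity is
    the distance between independent draws from the two stimuli."""
    xi = mu_i + s_i * rng.normal(size=(n_rep, r))
    xj = mu_j + s_j * rng.normal(size=(n_rep, r))
    return np.sqrt(((xi - xj) ** 2).sum(axis=1))

mu = np.array([[0.0, 0.0], [1.5, 0.5]])                 # hypothetical locations
d = hefner_dissimilarity(mu[0], mu[1], 0.3, 0.5, n_rep=10000)
print(d.mean(), np.linalg.norm(mu[0] - mu[1]))          # mean exceeds true distance
```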
57.
Using the theory of pseudo maximum likelihood estimation, the asymptotic covariance matrix of maximum likelihood estimates for mean and covariance structure models is given for the case where the variables are not multivariate normal. This asymptotic covariance matrix is consistently estimated without the computation of the empirical fourth-order moment matrix. Using quasi-maximum likelihood theory, a Hausman misspecification test is developed. This test is sensitive to misspecification caused by errors that are correlated with the independent variables. This misspecification cannot be detected by the test statistics currently used in covariance structure analysis. For helpful comments on a previous draft of the paper we are indebted to Kenneth A. Bollen, Ulrich L. Küsters, Michael E. Sobel and the anonymous reviewers of Psychometrika. For partial research support, the first author wishes to thank the Department of Sociology at the University of Arizona, where he was a visiting professor during the fall semester 1987.
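The general form of a Hausman-type misspecification statistic can be sketched as follows (a generic template under standard regularity assumptions, not the authors' specific estimators; `b_eff` denotes the estimate that is efficient under the model and `b_cons` the one that stays consistent under the suspected misspecification):

```python
import numpy as np
from scipy.stats import chi2

def hausman_test(b_eff, V_eff, b_cons, V_cons):
    """Hausman statistic H = d' (V_cons - V_eff)^-1 d with d = b_cons - b_eff;
    asymptotically chi-square with len(d) df under the null of no
    misspecification. (A generalized inverse is often substituted when
    the covariance difference is near-singular.)"""
    d = np.asarray(b_cons, dtype=float) - np.asarray(b_eff, dtype=float)
    H = d @ np.linalg.solve(np.asarray(V_cons) - np.asarray(V_eff), d)
    return H, chi2.sf(H, df=d.size)
```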
58.
Power of the likelihood ratio test in covariance structure analysis
A procedure for computing the power of the likelihood ratio test used in the context of covariance structure analysis is derived. The procedure uses statistics associated with the standard output of the computer programs commonly used and assumes that a specific alternative value of the parameter vector is specified. Using the noncentral chi-square distribution, the power of the test is approximated by the asymptotic one for a sequence of local alternatives. The procedure is illustrated by an example. A Monte Carlo experiment also shows how good the approximation is for a specific case. This research was made possible by a grant from the Dutch Organization for Advancement of Pure Research (ZWO). The authors would also like to acknowledge the helpful comments and suggestions from the editor and anonymous reviewers.
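The core computation reduces to a tail probability of a noncentral chi-square at the central critical value; a minimal sketch (the degrees of freedom and noncentrality below are hypothetical):

```python
from scipy.stats import chi2, ncx2

def lr_test_power(df, ncp, alpha=0.05):
    """Approximate power of the likelihood ratio test: P(X > c) where
    X ~ noncentral chi-square(df, ncp) and c is the alpha-level critical
    value of the central chi-square with the same df."""
    crit = chi2.ppf(1 - alpha, df)
    return ncx2.sf(crit, df, ncp)

# Hypothetical: 5 constrained parameters; in the usual local-alternative
# approximation the noncentrality is n times the minimum of the ML fit
# function evaluated at the alternative.
print(lr_test_power(df=5, ncp=10.0))
```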
59.
Although the Bock–Aitkin likelihood-based estimation method for factor analysis of dichotomous item response data has important advantages over classical analysis of item tetrachoric correlations, a serious limitation of the method is its reliance on fixed-point Gauss-Hermite (G-H) quadrature in the solution of the likelihood equations and likelihood-ratio tests. When the number of latent dimensions is large, computational considerations require that the number of quadrature points per dimension be few. But with large numbers of items, the dispersion of the likelihood, given the response pattern, becomes so small that the likelihood cannot be accurately evaluated with the sparse fixed points in the latent space. In this paper, we demonstrate that substantial improvement in accuracy can be obtained by adapting the quadrature points to the location and dispersion of the likelihood surfaces corresponding to each distinct pattern in the data. In particular, we show that adaptive G-H quadrature, combined with mean and covariance adjustments at each iteration of an EM algorithm, produces an accurate fast-converging solution with as few as two points per dimension. Evaluations of this method with simulated data are shown to yield accurate recovery of the generating factor loadings for models of up to eight dimensions. Unlike an earlier application of adaptive Gibbs sampling to this problem by Meng and Schilling, the simulations also confirm the validity of the present method in calculating likelihood-ratio chi-square statistics for determining the number of factors required in the model. Finally, we apply the method to a sample of real data from a test of teacher qualifications.
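The one-dimensional version of the adaptation step can be sketched as follows: rescale standard G-H nodes to the location and dispersion of the integrand, after which even two points per dimension can be accurate (a schematic sketch, not the authors' EM implementation):

```python
import numpy as np
from scipy.stats import norm

def adaptive_gh_integral(f, mu, sigma, n_points=2):
    """Adaptive Gauss-Hermite approximation of the integral of f over the
    real line: nodes are shifted and scaled to the mode (mu) and spread
    (sigma) of the integrand, so few points suffice."""
    x, w = np.polynomial.hermite.hermgauss(n_points)
    theta = mu + np.sqrt(2.0) * sigma * x
    return np.sum(w * np.sqrt(2.0) * sigma * np.exp(x ** 2) * f(theta))

# Sanity check on a normal density: the integral is ~1 even with 2 points.
f = lambda t: norm.pdf(t, loc=1.3, scale=0.4)
print(adaptive_gh_integral(f, mu=1.3, sigma=0.4, n_points=2))  # ~1.0
```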
60.
The authors introduce subset conjunction as a classification rule by which an acceptable alternative must satisfy some minimum number of criteria. The rule subsumes conjunctive and disjunctive decision strategies as special cases. Subset conjunction can be represented in a binary-response model, for example, in a logistic regression, using only main effects or only interaction effects. This results in a confounding of the main and interaction effects when there is little or no response error. With greater response error, a logistic regression, even if it gives a good fit to data, can produce parameter estimates that do not reflect the underlying decision process. The authors propose a model in which the binary classification of alternatives into acceptable/unacceptable categories is based on a probabilistic implementation of a subset-conjunctive process. The satisfaction of decision criteria biases the odds toward one outcome or the other. The authors then describe a two-stage choice model in which a (possibly large) set of alternatives is first reduced using a subset-conjunctive rule, after which an alternative is selected from this reduced set of items. They describe methods for estimating the unobserved consideration probabilities from classification and choice data, and illustrate the use of the models for cancer diagnosis and consumer choice. They report the results of simulations investigating estimation accuracy, incidence of local optima, and model fit. The authors thank the Editor, the Associate Editor, and three anonymous reviewers for their constructive suggestions, and also thank Asim Ansari and Raghuram Iyengar for their helpful comments. They also thank Sawtooth Software, McKinsey and Company, and Intelliquest for providing the PC choice data, and the University of Wisconsin for making the breast-cancer data available at the machine learning archives.
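A minimal sketch of the screening rule (hypothetical criteria, threshold, and error rate; not the authors' estimation model): an alternative satisfying at least k criteria is accepted with high probability, and response error biases the odds rather than fixing the outcome.

```python
import numpy as np

def accept_prob(x, k, eps=0.1):
    """Subset-conjunctive rule with response error: an alternative with
    binary criterion indicators x is acceptable if it satisfies at least
    k criteria; the error rate eps makes the rule probabilistic."""
    return (1 - eps) if np.sum(x) >= k else eps

# Hypothetical screening of 3 alternatives on 4 criteria with k = 2;
# conjunctive (k = 4) and disjunctive (k = 1) rules are special cases.
alts = np.array([[1, 1, 0, 0], [1, 0, 0, 0], [1, 1, 1, 1]])
print([accept_prob(a, k=2) for a in alts])  # [0.9, 0.1, 0.9]
```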