Similar articles
20 similar articles found (search time: 31 ms)
1.
This paper considers a multivariate normal model with one of the component variables observable only in polytomous form. The maximum likelihood approach is used for estimation of the parameters in the model. The Newton-Raphson algorithm is implemented to obtain the solution of the problem. Examples based on real and simulated data are reported. The research of the first author was supported in part by a research grant (DA01070) from the US Public Health Service. We are indebted to the referees and the editor for some very valuable comments and suggestions.
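The Newton-Raphson step used above is a generic device for maximizing a likelihood; as an illustration of the iteration itself (a toy one-parameter example, not the authors' multivariate model):

```python
import numpy as np

def newton_raphson(score, hessian, theta0, tol=1e-8, max_iter=100):
    """Newton-Raphson for a scalar MLE: theta <- theta - score/hessian."""
    theta = theta0
    for _ in range(max_iter):
        step = score(theta) / hessian(theta)
        theta = theta - step
        if abs(step) < tol:
            break
    return theta

# Toy illustration: MLE of an exponential rate lambda from data x.
# Score: n/lam - sum(x); Hessian: -n/lam**2; closed-form MLE: n / sum(x).
x = np.array([0.5, 1.2, 0.8, 2.0, 1.5])
n, s = len(x), x.sum()
lam_hat = newton_raphson(lambda l: n / l - s, lambda l: -n / l**2, theta0=1.0)
```

The iteration converges quadratically near the optimum, which is why it is attractive whenever the score and Hessian are available in closed form.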

2.
The likelihood for generalized linear models with covariate measurement error cannot in general be expressed in closed form, which makes maximum likelihood estimation taxing. A popular alternative is regression calibration which is computationally efficient at the cost of inconsistent estimation. We propose an improved regression calibration approach, a general pseudo maximum likelihood estimation method based on a conveniently decomposed form of the likelihood. It is both consistent and computationally efficient, and produces point estimates and estimated standard errors which are practically identical to those obtained by maximum likelihood. Simulations suggest that improved regression calibration, which is easy to implement in standard software, works well in a range of situations.
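A minimal sketch of the basic regression-calibration idea that the abstract contrasts with (simple linear case with a known error variance; the authors' improved estimator is not reproduced here): replace the error-prone measurement w with E[x | w] before regressing.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000
x = rng.normal(size=n)                  # true covariate (unobserved)
w = x + rng.normal(scale=0.7, size=n)   # error-prone measurement
y = 1.0 + 2.0 * x + rng.normal(size=n)  # outcome; true slope is 2

# Naive regression of y on w attenuates the slope toward zero.
naive_slope = np.cov(y, w)[0, 1] / np.var(w)

# Regression calibration replaces w with E[x | w]; with var(x) = 1 and
# known error variance 0.7**2, E[x | w] = lam * w, lam = reliability ratio.
lam = 1.0 / (1.0 + 0.7**2)
x_hat = lam * w
rc_slope = np.cov(y, x_hat)[0, 1] / np.var(x_hat)
```

The naive slope is biased toward zero by the reliability ratio, while the calibrated regression recovers the true slope (consistently in this linear case; in nonlinear GLMs it is only approximate, which motivates the improved method above).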

3.
A two-stage procedure is developed for analyzing structural equation models with continuous and polytomous variables. At the first stage, the maximum likelihood estimates of the thresholds, polychoric covariances and variances, and polyserial covariances are simultaneously obtained with the help of an appropriate transformation that significantly simplifies the computation. An asymptotic covariance matrix of the estimates is also computed. At the second stage, the parameters in the structural covariance model are obtained via the generalized least squares approach. Basic statistical properties of the estimates are derived and some illustrative examples and a small simulation study are reported. This research was supported in part by a research grant DA01070 from the U.S. Public Health Service. We are indebted to several referees and the editor for very valuable comments and suggestions for improvement of this paper. The computing assistance of King-Hong Leung and Man-Lai Tang is also gratefully acknowledged.

4.
Several algorithms for covariance structure analysis are considered in addition to the Fletcher-Powell algorithm. These include the Gauss-Newton, Newton-Raphson, Fisher Scoring, and Fletcher-Reeves algorithms. Two methods of estimation are considered, maximum likelihood and weighted least squares. It is shown that the Gauss-Newton algorithm which in standard form produces weighted least squares estimates can, in iteratively reweighted form, produce maximum likelihood estimates as well. Previously unavailable standard error estimates to be used in conjunction with the Fletcher-Reeves algorithm are derived. Finally all the algorithms are applied to a number of maximum likelihood and weighted least squares factor analysis problems to compare the estimates and the standard errors produced. The algorithms appear to give satisfactory estimates but there are serious discrepancies in the standard errors. Because it is robust to poor starting values, converges rapidly and conveniently produces consistent standard errors for both maximum likelihood and weighted least squares problems, the Gauss-Newton algorithm represents an attractive alternative for at least some covariance structure analyses. Work by the first author has been supported in part by Grant No. DA01070 from the U.S. Public Health Service. Work by the second author has been supported in part by Grant No. MCS 77-02121 from the National Science Foundation.

5.
This paper is concerned with the analysis of structural equation models with polytomous variables. A computationally efficient three-stage estimator of the thresholds and the covariance structure parameters, based on partition maximum likelihood and generalized least squares estimation, is proposed. An example is presented to illustrate the method. This research was supported in part by a research grant DA01070 from the U.S. Public Health Service. The production assistance of Julie Speckart is gratefully acknowledged.

6.
The polytomous unidimensional Rasch model with equidistant scoring, also known as the rating scale model, is extended in such a way that the item parameters are linearly decomposed into certain basic parameters. The extended model is denoted as the linear rating scale model (LRSM). A conditional maximum likelihood estimation procedure and a likelihood-ratio test of hypotheses within the framework of the LRSM are presented. Since the LRSM is a generalization of both the dichotomous Rasch model and the rating scale model, the present algorithm is suited for conditional maximum likelihood estimation in these submodels as well. The practicality of the conditional method is demonstrated by means of a dichotomous Rasch example with 100 items, of a rating scale example with 30 items and 5 categories, and in the light of an empirical application to the measurement of treatment effects in a clinical study. Work supported in part by the Fonds zur Förderung der Wissenschaftlichen Forschung under Grant No. P6414.
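Conditional ML in the dichotomous Rasch submodel rests on elementary symmetric functions of the item parameters; a minimal sketch of their standard recursive computation (an illustration of this one building block, not the paper's full LRSM algorithm):

```python
import numpy as np

def elementary_symmetric(eps):
    """Elementary symmetric functions gamma_r of the item "easiness" values
    eps_i = exp(-beta_i): gamma_r is the sum, over all item subsets of size
    r, of the product of their eps values.  They normalize the conditional
    likelihood P(response pattern | raw score r) in the Rasch model."""
    g = np.zeros(len(eps) + 1)
    g[0] = 1.0
    for e in eps:
        # RHS is evaluated from the old g, so this is the usual
        # one-item-at-a-time summation recursion.
        g[1:] = g[1:] + e * g[:-1]
    return g

gammas = elementary_symmetric(np.array([1.0, 2.0, 3.0]))
```

For easiness values (1, 2, 3) this yields (1, 6, 11, 6), i.e. the coefficients of the generating polynomial (1 + x)(1 + 2x)(1 + 3x).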

7.
Lord and Wingersky have developed a method for computing the asymptotic variance-covariance matrix of maximum likelihood estimates for item and person parameters under some restrictions on the estimates which are needed in order to fix the latent scale. The method is tedious, but can be simplified for the Rasch model when one is only interested in the item parameters. This is demonstrated here under a suitable restriction on the item parameter estimates.

8.
Group-level variance estimates of zero often arise when fitting multilevel or hierarchical linear models, especially when the number of groups is small. For situations where zero variances are implausible a priori, we propose a maximum penalized likelihood approach to avoid such boundary estimates. This approach is equivalent to estimating variance parameters by their posterior mode, given a weakly informative prior distribution. By choosing the penalty from the log-gamma family with shape parameter greater than 1, we ensure that the estimated variance will be positive. We suggest a default log-gamma(2,λ) penalty with λ→0, which ensures that the maximum penalized likelihood estimate is approximately one standard error from zero when the maximum likelihood estimate is zero, thus remaining consistent with the data while being nondegenerate. We also show that the maximum penalized likelihood estimator with this default penalty is a good approximation to the posterior median obtained under a noninformative prior. Our default method provides better estimates of model parameters and standard errors than the maximum likelihood or the restricted maximum likelihood estimators. The log-gamma family can also be used to convey substantive prior information. In either case—pure penalization or prior information—our recommended procedure gives nondegenerate estimates and in the limit coincides with maximum likelihood as the number of groups increases.
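The boundary problem and its penalized fix can be seen in a toy random-effects setting (a sketch under simplifying assumptions — balanced groups with known within-group variance — not the authors' full multilevel implementation). The gamma(2, λ→0) prior on the group-level sd amounts to adding log(σ) to the profile log-likelihood:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Group means y_j with known within-group variance s2; marginal model
# y_j ~ N(mu, sigma**2 + s2).  The between-group spread here is smaller
# than s2, so the unpenalized MLE of sigma sits on the boundary at zero.
y = np.array([0.1, -0.2, 0.15, -0.05, 0.0])
s2 = 0.5

def profile_loglik(sigma):
    v = sigma**2 + s2
    mu = y.mean()  # balanced case: the profile MLE of mu is the plain mean
    return -0.5 * np.sum(np.log(v) + (y - mu)**2 / v)

mle = minimize_scalar(lambda s: -profile_loglik(s),
                      bounds=(1e-8, 10.0), method="bounded")
# Penalized version: gamma(2, lambda -> 0) prior on sigma adds log(sigma).
pmle = minimize_scalar(lambda s: -(profile_loglik(s) + np.log(s)),
                       bounds=(1e-8, 10.0), method="bounded")
```

The unpenalized maximizer collapses to the lower bound, while the log(σ) penalty pulls the estimate to a nondegenerate positive value, as the abstract describes.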

9.
The standard tobit or censored regression model is typically utilized for regression analysis when the dependent variable is censored. This model is generalized by developing a conditional mixture, maximum likelihood method for latent class censored regression. The proposed method simultaneously estimates separate regression functions and subject membership in K latent classes or groups given a censored dependent variable for a cross-section of subjects. Maximum likelihood estimates are obtained using an EM algorithm. The proposed method is illustrated via a consumer psychology application.
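A sketch of the standard tobit log-likelihood that such a mixture method builds on (left-censoring at zero; this is the single-class building block, not the authors' latent class estimator):

```python
import numpy as np
from scipy.stats import norm

def tobit_loglik(beta, sigma, X, y):
    """Log-likelihood of the standard tobit model y = max(X @ beta + e, 0),
    e ~ N(0, sigma**2).  Censored observations contribute the normal mass
    below zero; uncensored ones contribute a normal density term."""
    xb = X @ beta
    cens = y <= 0
    ll = norm.logcdf(-xb[cens] / sigma).sum()
    ll += (norm.logpdf((y[~cens] - xb[~cens]) / sigma) - np.log(sigma)).sum()
    return ll

# Simulated sanity check: at the data-generating parameters the
# log-likelihood should beat a clearly distorted alternative.
rng = np.random.default_rng(2)
X = np.column_stack([np.ones(1000), rng.normal(size=1000)])
y = np.maximum(X @ np.array([0.5, 1.0]) + rng.normal(size=1000), 0.0)
ll_true = tobit_loglik(np.array([0.5, 1.0]), 1.0, X, y)
ll_bad = tobit_loglik(np.array([2.0, -1.0]), 1.0, X, y)
```

In the latent class extension, the E-step of the EM algorithm weights each subject's contribution to K such likelihoods by posterior class membership probabilities.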

10.
Rasch models are characterised by sufficient statistics for all parameters. In the Rasch unidimensional model for two ordered categories, the parameterisation of the person and item is symmetrical and it is readily established that the total scores of a person and item are sufficient statistics for their respective parameters. In contrast, in the unidimensional polytomous Rasch model for more than two ordered categories, the parameterisation is not symmetrical. Specifically, each item has a vector of item parameters, one for each category, and each person only one person parameter. In addition, different items can have different numbers of categories and, therefore, different numbers of parameters. The sufficient statistic for the parameters of an item is itself a vector. In estimating the person parameters in presently available software, these sufficient statistics are not used to condition out the item parameters. This paper derives a conditional, pairwise, pseudo-likelihood and constructs estimates of the parameters of any number of persons which are independent of all item parameters and of the maximum scores of all items. It also shows that these estimates are consistent. Although Rasch’s original work began with equating tests using test scores, and not with items of a test, the polytomous Rasch model has not been applied in this way. Operationally, this is because the current approaches, in which item parameters are estimated first, cannot handle test data where there may be many scores with zero frequencies. A small simulation study shows that, when using the estimation equations derived in this paper, such a property of the data is no impediment to the application of the model at the level of tests. This opens up the possibility of using the polytomous Rasch model directly in equating test scores.

11.
Martin-Löf, P. (1977). Synthese, 36(2), 195–206.
This paper proposes a uniform method for constructing tests, confidence regions and point estimates which is called exact since it reduces to Fisher's so-called exact test in the case of the hypothesis of independence in a 2 × 2 contingency table. All the well-known standard tests based on exact sampling distributions are instances of the exact test in its general form. The likelihood ratio and χ² tests as well as the maximum likelihood estimate appear as asymptotic approximations to the corresponding exact procedures.
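For the 2 × 2 independence case, the exact test is Fisher's test; a quick illustration of the exact versus asymptotic χ² p-values (the table counts are made up for the example):

```python
from scipy.stats import fisher_exact, chi2_contingency

table = [[8, 2],
         [1, 5]]

# Exact test: p-value from the hypergeometric distribution of the table
# given its margins (two-sided by summing tables at least as extreme).
odds_ratio, p_exact = fisher_exact(table)

# Asymptotic chi-square approximation to the same hypothesis.
chi2, p_asymptotic, dof, expected = chi2_contingency(table)
```

With such small counts the exact and asymptotic p-values can differ noticeably, which is precisely the regime where exact procedures matter.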

12.
A direct method in handling incomplete data in general covariance structural models is investigated. Asymptotic statistical properties of the generalized least squares method are developed. It is shown that this approach has very close relationships with the maximum likelihood approach. Iterative procedures for obtaining the generalized least squares estimates, the maximum likelihood estimates, as well as their standard error estimates are derived. Computer programs for the confirmatory factor analysis model are implemented. A longitudinal type data set is used as an example to illustrate the results. This research was supported in part by Research Grant DA01070 from the U.S. Public Health Service. The author is indebted to anonymous reviewers for some very valuable suggestions. Computer funding is provided by the Computer Services Centre, The Chinese University of Hong Kong.

13.
A central assumption that is implicit in estimating item parameters in item response theory (IRT) models is the normality of the latent trait distribution, whereas a similar assumption made in categorical confirmatory factor analysis (CCFA) models is the multivariate normality of the latent response variables. Violation of the normality assumption can lead to biased parameter estimates. Although previous studies have focused primarily on unidimensional IRT models, this study extended the literature by considering a multidimensional IRT model for polytomous responses, namely the multidimensional graded response model. Moreover, this study is one of few studies that specifically compared the performance of full-information maximum likelihood (FIML) estimation versus robust weighted least squares (WLS) estimation when the normality assumption is violated. The research also manipulated the number of nonnormal latent trait dimensions. Results showed that FIML consistently outperformed WLS when there were one or multiple skewed latent trait distributions. More interestingly, the bias of the discrimination parameters was non-ignorable only when the corresponding factor was skewed. Having other skewed factors did not further exacerbate the bias, whereas biases of boundary parameters increased as more nonnormal factors were added. The item parameter standard errors recovered well with both estimation algorithms regardless of the number of nonnormal dimensions.

14.
This paper defines the concept of DIF for polytomously scored cognitive diagnostic tests and, through simulation experiments and an empirical study, explores the theoretical and practical applicability of four common polytomous DIF detection methods. The results show that all four methods can effectively detect DIF in polytomous cognitive diagnosis, and the performance of each method is largely unaffected by the model; using the knowledge state (KS) as the matching variable is more favorable for DIF detection than using the total score; and the LDFA method and the Mantel test, each with KS as the matching variable, have the highest power for detecting DIF items.

15.
Polytomous-attribute cognitive diagnosis models (CDMs) provide more detailed diagnostic feedback than traditional dichotomous-attribute CDMs, but most existing polytomous-attribute CDMs cannot directly analyze polytomously (or mixed-format) scored data. Based on the graded response model, this paper extends the reparameterized polytomous-attribute DINA model to polytomous scoring, developing a graded-response polytomous-attribute DINA model that can handle polytomously scored data. An empirical data analysis first demonstrates the practical applicability of the new model, and a simulation study then examines its parameter recovery. The results show that the new model meets the practical need to handle polytomous attributes and polytomously scored data simultaneously and has good psychometric properties, although it places certain demands on test quality (e.g., high item quality and completeness of the test's Qp matrix).

16.
The large sample distribution of total indirect effects in covariance structure models is well known. Using Monte Carlo methods, this study examines the applicability of the large sample theory to maximum likelihood estimates of total indirect effects in sample sizes of 50, 100, 200, 400, and 800. Two models are studied. Model 1 is a recursive model with observable variables and Model 2 is a nonrecursive model with latent variables. For the large sample theory to apply, the results suggest that sample sizes of 200 or more and 400 or more are required for models such as Model 1 and Model 2, respectively. For helpful comments on a previous draft of this paper, we are grateful to Gerhard Arminger, Clifford C. Clogg, and several anonymous reviewers.

17.
Cognitive diagnosis computerized adaptive testing (CD-CAT) combines cognitive diagnostic assessment with computerized adaptive testing and has the characteristics of both. To date, research on CD-CAT has focused almost exclusively on dichotomous (0-1) data, yet polytomously scored data are common in practical educational and psychological assessment. This study therefore investigates techniques for implementing polytomous CD-CAT (PCD-CAT) and proposes two new item selection methods. Simulation experiments comparing the new methods with traditional item selection methods in PCD-CAT show that under fixed-length PCD-CAT the two new methods achieve the highest pattern classification accuracy, and under variable-length PCD-CAT they also achieve the highest test efficiency.

18.
For mixed models generally, it is well known that modeling data with few clusters will result in biased estimates, particularly of the variance components and fixed effect standard errors. In linear mixed models, small sample bias is typically addressed through restricted maximum likelihood estimation (REML) and a Kenward-Roger correction. Yet with binary outcomes, there is no direct analog of either procedure. With a larger number of clusters, estimation methods for binary outcomes that approximate the likelihood to circumvent the lack of a closed form solution such as adaptive Gaussian quadrature and the Laplace approximation have been shown to yield less-biased estimates than linearization estimation methods that instead linearly approximate the model. However, adaptive Gaussian quadrature and the Laplace approximation are approximating the full likelihood rather than the restricted likelihood; the full likelihood is known to yield biased estimates with few clusters. On the other hand, linearization methods linearly approximate the model, which allows for restricted maximum likelihood and the Kenward-Roger correction to be applied. Thus, the following question arises: Which is preferable, a better approximation of a biased function or a worse approximation of an unbiased function? We address this question with a simulation and an illustrative empirical analysis.

19.
Ogilvie and Creelman have recently attempted to develop maximum likelihood estimates of the parameters of signal-detection theory from the data of yes-no ROC curves. Their method involved the assumption of a logistic distribution rather than the normal distribution in order to make the mathematics more tractable. The present paper presents a method of obtaining maximum likelihood estimates of these parameters using the assumption of underlying normal distributions. This research was supported in part by grants from the National Institutes of Health, MH-10449-02, and from the National Science Foundation, NSF GS-1466.
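In the equal-variance normal case with a single yes-no operating point, the ML estimates of the signal-detection parameters reduce to z-transforms of the hit and false-alarm rates (a sketch of this simplest case; the paper's method addresses full ROC data):

```python
from scipy.stats import norm

# Hypothetical counts from a yes-no detection task.
n_signal, n_hits = 100, 80
n_noise, n_fa = 100, 20
h, f = n_hits / n_signal, n_fa / n_noise

# Equal-variance normal model: sensitivity d' and response criterion c.
d_prime = norm.ppf(h) - norm.ppf(f)
criterion = -0.5 * (norm.ppf(h) + norm.ppf(f))
```

With h = 0.8 and f = 0.2 this gives d' ≈ 1.68 and an unbiased criterion of 0; the z-transform fails at observed rates of 0 or 1, one reason likelihood-based fitting of the full ROC is preferred.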

20.
Three methods for fitting the diffusion model (Ratcliff, 1978) to experimental data are examined. Sets of simulated data were generated with known parameter values, and from fits of the model, we found that the maximum likelihood method was better than the chi-square and weighted least squares methods by criteria of bias in the parameters relative to the parameter values used to generate the data and standard deviations in the parameter estimates. The standard deviations in the parameter values can be used as measures of the variability in parameter estimates from fits to experimental data. We introduced contaminant reaction times and variability into the other components of processing besides the decision process and found that the maximum likelihood and chi-square methods failed, sometimes dramatically. But the weighted least squares method was robust to these two factors. We then present results from modifications of the maximum likelihood and chi-square methods, in which these factors are explicitly modeled, and show that the parameter values of the diffusion model are recovered well. We argue that explicit modeling is an important method for addressing contaminants and variability in nondecision processes and that it can be applied in any theoretical approach to modeling reaction time.
