Similar Articles
20 similar articles found.
1.
Observational data typically contain measurement errors. Covariance-based structural equation modelling (CB-SEM) is capable of modelling measurement errors and yields consistent parameter estimates. In contrast, methods of regression analysis using weighted composites as well as a partial least squares approach to SEM facilitate the prediction and diagnosis of individuals/participants. But regression analysis with weighted composites has been known to yield attenuated regression coefficients when predictors contain errors. Contrary to the common belief that CB-SEM is the preferred method for the analysis of observational data, this article shows that regression analysis via weighted composites yields parameter estimates with much smaller standard errors, and thus corresponds to greater values of the signal-to-noise ratio (SNR). In particular, the SNR for the regression coefficient via the least squares (LS) method with equally weighted composites is mathematically greater than that by CB-SEM if the items for each factor are parallel, even when the SEM model is correctly specified and estimated by an efficient method. Analytical, numerical and empirical results also show that LS regression using weighted composites performs as well as or better than the normal maximum likelihood method for CB-SEM under many conditions even when the population distribution is multivariate normal. Results also show that LS regression coefficients are more efficient when the sampling errors in the composite weights are taken into account than when inference is conditional on the weights.
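As an editorial aside, a minimal simulation sketch of the SNR comparison described in this abstract, assuming the lavaan package; the population values, item names, and replication scheme are illustrative and not taken from the article. The empirical SNR here is the mean estimate divided by the standard deviation of the estimates across replications.

```r
# Hedged sketch: LS regression on equally weighted composites vs. CB-SEM (NML).
# All numbers are illustrative; items are generated as parallel (equal loadings
# and error variances), matching the condition discussed in the abstract.
library(lavaan)

set.seed(1)
n <- 200; reps <- 100
ls_est <- sem_est <- numeric(reps)
model <- '
  xi  =~ l1*x1 + l1*x2 + l1*x3   # equality labels impose parallel loadings
  eta =~ l2*y1 + l2*y2 + l2*y3
  eta ~ gamma*xi
'
for (r in seq_len(reps)) {
  xi  <- rnorm(n)
  eta <- 0.5 * xi + rnorm(n, sd = sqrt(0.75))
  dat <- data.frame(sapply(1:3, function(j) 0.8 * xi  + rnorm(n, sd = 0.6)),
                    sapply(1:3, function(j) 0.8 * eta + rnorm(n, sd = 0.6)))
  names(dat) <- c("x1", "x2", "x3", "y1", "y2", "y3")
  # LS regression with equally weighted composites (row means)
  ls_est[r]  <- coef(lm(rowMeans(dat[4:6]) ~ rowMeans(dat[1:3])))[2]
  # CB-SEM estimated by normal-theory ML
  sem_est[r] <- coef(sem(model, data = dat))["gamma"]
}
c(SNR_LS  = mean(ls_est)  / sd(ls_est),
  SNR_SEM = mean(sem_est) / sd(sem_est))
```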

2.
Several algorithms for covariance structure analysis are considered in addition to the Fletcher-Powell algorithm. These include the Gauss-Newton, Newton-Raphson, Fisher Scoring, and Fletcher-Reeves algorithms. Two methods of estimation are considered, maximum likelihood and weighted least squares. It is shown that the Gauss-Newton algorithm, which in standard form produces weighted least squares estimates, can, in iteratively reweighted form, produce maximum likelihood estimates as well. Previously unavailable standard error estimates to be used in conjunction with the Fletcher-Reeves algorithm are derived. Finally, all the algorithms are applied to a number of maximum likelihood and weighted least squares factor analysis problems to compare the estimates and the standard errors produced. The algorithms appear to give satisfactory estimates, but there are serious discrepancies in the standard errors. Because it is robust to poor starting values, converges rapidly, and conveniently produces consistent standard errors for both maximum likelihood and weighted least squares problems, the Gauss-Newton algorithm represents an attractive alternative for at least some covariance structure analyses. Work by the first author has been supported in part by Grant No. Da01070 from the U.S. Public Health Service. Work by the second author has been supported in part by Grant No. MCS 77-02121 from the National Science Foundation.
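For illustration, a compact sketch of the Gauss-Newton step for weighted least squares covariance structure analysis as the abstract describes it; the function is a generic skeleton (the names and the numDeriv dependency are my assumptions), not the authors' code.

```r
# Gauss-Newton for minimizing (s - sigma(theta))' W (s - sigma(theta)),
# where s is the vector of sample moments and sigma() the model-implied ones.
# In iteratively reweighted form (recomputing W from sigma(theta) each pass),
# the same update yields maximum likelihood estimates, as the abstract notes.
gauss_newton <- function(s, sigma, theta, W, tol = 1e-8, maxit = 100) {
  for (it in seq_len(maxit)) {
    r <- s - sigma(theta)                     # moment residuals
    J <- numDeriv::jacobian(sigma, theta)     # d sigma / d theta'
    step <- solve(t(J) %*% W %*% J, t(J) %*% W %*% r)
    theta <- theta + as.vector(step)
    if (max(abs(step)) < tol) break
  }
  # (J' W J)^{-1} supplies the standard-error estimates (up to a known factor)
  list(theta = theta, iterations = it, acov = solve(t(J) %*% W %*% J))
}
```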

3.
This paper examines the implications of violating assumptions concerning the continuity and distributional properties of data in establishing measurement models in social science research. The General Health Questionnaire-12 uses an ordinal response scale. Responses to the GHQ-12 from 201 Hong Kong immigrants on arrival in Australia showed that the data were not normally distributed. A series of confirmatory factor analyses using either a Pearson product-moment or a polychoric correlation input matrix and employing either maximum likelihood, weighted least squares or diagonally weighted least squares estimation methods were conducted on the data. The parameter estimates and goodness-of-fit statistics provided support for using polychoric correlations and diagonally weighted least squares estimation when analyzing ordinal, nonnormal data.
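A sketch of the modeling choice this abstract supports, assuming the lavaan package; ghq_data and the item names g1–g12 are hypothetical stand-ins for the GHQ-12 responses.

```r
library(lavaan)
model <- 'ghq =~ g1 + g2 + g3 + g4 + g5 + g6 + g7 + g8 + g9 + g10 + g11 + g12'
# Declaring items as ordered makes lavaan use polychoric correlations;
# estimator "WLSMV" is diagonally weighted least squares with mean- and
# variance-adjusted test statistics.
fit_dwls <- cfa(model, data = ghq_data, ordered = paste0("g", 1:12),
                estimator = "WLSMV")
# The naive alternative: Pearson input with normal-theory maximum likelihood
fit_ml <- cfa(model, data = ghq_data, estimator = "ML")
fitMeasures(fit_dwls, c("chisq.scaled", "cfi.scaled", "rmsea.scaled"))
```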

4.
The diffusion model (Ratcliff, 1978) for fast two-choice decisions has been successful in a number of domains. Wagenmakers, van der Maas, and Grasman (2007) proposed a new method for fitting the model to data (“EZ”) that is simpler than the standard chi-square method (Ratcliff & Tuerlinckx, 2002). For an experimental condition, EZ can estimate parameter values for the main components of processing using only correct response times (RTs), their variance, and accuracy, not error RTs or the shapes of RT distributions. Wagenmakers et al. suggested that EZ produces accurate parameter estimates in cases in which the chi-square method would fail: specifically, experimental conditions with small numbers of observations or with accuracy near ceiling. In this article, I counter these claims and discuss EZ’s limitations. Unlike the chi-square method, EZ is extremely sensitive to outlier RTs and is usually less efficient in recovering parameter values, and it can lead to errors in interpretation when the data do not meet its assumptions, when the number of observations in an experimental condition is small, or when accuracy in an experimental condition is high. The conclusion is that EZ can be useful in the exploration of parameter spaces, but it should not be used for meaningful estimates of parameter values or for assessing whether or not a model fits data.
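For reference, the EZ point estimates discussed above can be written in a few lines; this follows the closed-form equations of Wagenmakers et al. (2007), with s the conventional scaling parameter.

```r
# EZ-diffusion point estimates from accuracy (Pc), the variance of correct
# RTs (VRT), and the mean correct RT (MRT). Pc = 0.5 or 1 requires an edge
# correction, which is omitted here.
ez_diffusion <- function(Pc, VRT, MRT, s = 0.1) {
  stopifnot(Pc > 0, Pc < 1, Pc != 0.5)
  L <- qlogis(Pc)                                  # logit of accuracy
  v <- sign(Pc - 0.5) * s *
       ((L * (L * Pc^2 - L * Pc + Pc - 0.5)) / VRT)^(1/4)   # drift rate
  a <- s^2 * L / v                                 # boundary separation
  mdt <- (a / (2 * v)) * (1 - exp(-v * a / s^2)) / (1 + exp(-v * a / s^2))
  c(v = v, a = a, Ter = MRT - mdt)                 # Ter: nondecision time
}
ez_diffusion(Pc = 0.85, VRT = 0.08, MRT = 0.55)
```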

5.
In a recent article published in this journal, Yuan and Fang (British Journal of Mathematical and Statistical Psychology, 2023) suggest comparing structural equation modeling (SEM), also known as covariance-based SEM (CB-SEM), estimated by normal-distribution-based maximum likelihood (NML), to regression analysis with (weighted) composites estimated by least squares (LS) in terms of their signal-to-noise ratio (SNR). They summarize their findings in the statement that “[c]ontrary to the common belief that CB-SEM is the preferred method for the analysis of observational data, this article shows that regression analysis via weighted composites yields parameter estimates with much smaller standard errors, and thus corresponds to greater values of the [SNR].” In our commentary, we show that Yuan and Fang have made several incorrect assumptions and claims. Consequently, we recommend that empirical researchers not base their methodological choice regarding CB-SEM and regression analysis with composites on the findings of Yuan and Fang as these findings are premature and require further research.

6.
Among the most valuable tools in behavioral science is statistically fitting mathematical models of cognition to data—response time distributions, in particular. However, techniques for fitting distributions vary widely, and little is known about the efficacy of different techniques. In this article, we assess several fitting techniques by simulating six widely cited models of response time and using the fitting procedures to recover model parameters. The techniques include the maximization of likelihood and least squares fits of the theoretical distributions to different empirical estimates of the simulated distributions. A running example is used to illustrate the different estimation and fitting procedures. The simulation studies reveal that empirical density estimates are biased even for very large sample sizes. Some fitting techniques yield more accurate and less variable parameter estimates than do others. Methods that involve least squares fits to density estimates generally yield very poor parameter estimates.
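To make the contrast concrete, a small sketch comparing maximum likelihood with a least squares fit to a kernel density estimate, using the ex-Gaussian as the RT model; it assumes the gamlss.dist package, and all settings are illustrative.

```r
library(gamlss.dist)   # provides dexGAUS / rexGAUS

set.seed(2)
rt <- rexGAUS(500, mu = 0.4, sigma = 0.05, nu = 0.2)   # simulated RTs

# Maximum likelihood (sigma and nu on the log scale to keep them positive)
nll <- function(p) -sum(log(dexGAUS(rt, mu = p[1],
                                    sigma = exp(p[2]), nu = exp(p[3]))))
ml <- optim(c(0.3, log(0.1), log(0.1)), nll)

# Least squares fit of the theoretical density to a kernel density estimate
kde <- density(rt)
sse <- function(p) sum((kde$y - dexGAUS(kde$x, p[1], exp(p[2]), exp(p[3])))^2)
ls <- optim(c(0.3, log(0.1), log(0.1)), sse)

rbind(ML = c(ml$par[1], exp(ml$par[2:3])),        # typically close to truth
      LS_density = c(ls$par[1], exp(ls$par[2:3])))  # typically worse
```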

7.
Traditional structural equation modeling (SEM) techniques have trouble dealing with incomplete and/or nonnormal data that are often encountered in practice. Yuan and Zhang (2011a) developed a two-stage procedure for SEM to handle nonnormal missing data and proposed four test statistics for overall model evaluation. Although these statistics have been shown to work well with complete data, their performance for incomplete data has not been investigated in the context of robust statistics.

Focusing on a linear growth curve model, a systematic simulation study is conducted to evaluate the accuracy of the parameter estimates and the performance of five test statistics including the naive statistic derived from normal distribution based maximum likelihood (ML), the Satorra-Bentler scaled chi-square statistic (RML), the mean- and variance-adjusted chi-square statistic (AML), Yuan-Bentler residual-based test statistic (CRADF), and Yuan-Bentler residual-based F statistic (RF). Data are generated and analyzed in R using the package rsem (Yuan & Zhang, 2011b).

Based on the simulation study, we observe the following: (a) The traditional normal-distribution-based method cannot yield accurate parameter estimates for nonnormal data, whereas the robust method obtains much more accurate model parameter estimates for nonnormal data and performs almost as well as the normal-distribution-based method for normally distributed data. (b) As sample size increases, or as the missing data rate or the number of outliers decreases, the parameter estimates become less biased and the empirical distributions of the test statistics move closer to their nominal distributions. (c) The ML test statistic does not work well for nonnormal or missing data. (d) For nonnormal complete data, CRADF and RF work relatively better than RML and AML. (e) For missing completely at random (MCAR) data, RML and AML work better than CRADF and RF in almost all cases. (f) For nonnormal missing at random (MAR) data, CRADF and RF work better than AML. (g) The performance of the robust method does not seem to be influenced by the symmetry of outliers.

8.
We demonstrate some procedures in the statistical computing environment R for obtaining maximum likelihood estimates of the parameters of a psychometric function by fitting a generalized nonlinear regression model to the data. A feature for fitting a linear model to the threshold (or other) parameters of several psychometric functions simultaneously provides a powerful tool for testing hypotheses about the data and, potentially, for reducing the number of parameters necessary to describe them. Finally, we illustrate procedures for treating one parameter as a random effect that would permit a simplified approach to modeling stimulus-independent variability due to factors such as lapses or interobserver differences. These tools will facilitate a more comprehensive and explicit approach to the modeling of psychometric data.
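A minimal base-R illustration in the spirit of the procedures described: a psychometric function fit by maximum likelihood through a generalized linear model with a probit link (the data and variable names are invented for the example).

```r
set.seed(3)
dat <- data.frame(intensity = seq(-2, 2, length.out = 9), n_trials = 40)
dat$n_yes <- rbinom(9, dat$n_trials, pnorm(dat$intensity, mean = 0.2, sd = 0.8))

# ML fit: a binomial GLM with probit link is the psychometric-function fit
fit <- glm(cbind(n_yes, n_trials - n_yes) ~ intensity,
           family = binomial(link = "probit"), data = dat)
b <- unname(coef(fit))
c(threshold = -b[1] / b[2], slope = b[2])   # point of 50% "yes" and its slope
```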

9.
刘红云　骆方　王玥　张玉, 《心理学报》 (Acta Psychologica Sinica), 2012, 44(1), 121-132
The authors briefly review the categorical confirmatory factor analysis (CCFA) model within the SEM framework and the model relating test items to latent abilities within the MIRT framework, and summarize the main parameter estimation methods under each. A simulation study compared the WLSc and WLSMV estimators (SEM framework) with the MLR and MCMC estimators (MIRT framework). The results show: (1) WLSc produced the most biased parameter estimates and suffered from convergence problems; (2) as sample size increased, the precision of all item parameter estimates improved, and WLSMV and MLR differed very little in precision, in most conditions performing no worse than MCMC; (3) except for WLSc, estimation precision increased with the number of items per dimension; (4) the number of test dimensions strongly affected the discrimination and difficulty parameters, but had relatively little effect on item factor loadings and thresholds; (5) the precision of item parameter estimates depended on the number of dimensions an item measures, with items measuring a single dimension estimated most precisely. The article closes with suggestions on issues to watch when applying the two approaches in practice.
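As a pointer for readers, the two frameworks compared in this abstract map onto standard R tooling roughly as follows (assuming the lavaan and mirt packages; resp and the item names are hypothetical):

```r
library(lavaan)
library(mirt)
# SEM framework: categorical CFA, polychorics + diagonally weighted LS (WLSMV)
fit_ccfa <- cfa('f =~ i1 + i2 + i3 + i4 + i5', data = resp,
                ordered = paste0("i", 1:5), estimator = "WLSMV")
# MIRT framework: 2PL fitted by marginal maximum likelihood
fit_mirt <- mirt(resp, model = 1, itemtype = "2PL")
coef(fit_mirt, simplify = TRUE)$items   # discrimination and intercept per item
```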

10.
The three-parameter logistic model is widely used to model the responses to a proficiency test when the examinees can guess the correct response, as is the case for multiple-choice items. However, the weak identifiability of the parameters of the model results in large variability of the estimates and in convergence difficulties in the numerical maximization of the likelihood function. To overcome these issues, in this paper we explore various shrinkage estimation methods, following two main approaches. First, a ridge-type penalty on the guessing parameters is introduced in the likelihood function. The tuning parameter is then selected through various approaches: cross-validation, information criteria or using an empirical Bayes method. The second approach explored is based on the methodology developed to reduce the bias of the maximum likelihood estimator through an adjusted score equation. The performance of the methods is investigated through simulation studies and a real data example.
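To illustrate the first approach (a ridge-type penalty on the guessing parameters), here is a deliberately simplified sketch with abilities treated as known; the article works with the full marginal likelihood and several tuning-parameter selectors, and the shrinkage target of 1/4 (chance level on four-option items) is my illustrative choice.

```r
set.seed(4)
theta <- rnorm(300)                                  # "known" abilities
p3pl <- function(theta, a, b, c) c + (1 - c) * plogis(a * (theta - b))
y <- rbinom(300, 1, p3pl(theta, a = 1.2, b = 0.5, c = 0.2))

# Penalized negative log-likelihood: the ridge penalty pulls the guessing
# parameter c toward 0.25; lambda is the tuning parameter.
pen_nll <- function(par, lambda) {
  a <- exp(par[1]); b <- par[2]; c <- plogis(par[3])
  -sum(dbinom(y, 1, p3pl(theta, a, b, c), log = TRUE)) + lambda * (c - 0.25)^2
}
est <- optim(c(0, 0, qlogis(0.2)), pen_nll, lambda = 20)$par
c(a = exp(est[1]), b = est[2], c = plogis(est[3]))
```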

11.
Maximum likelihood estimation (MLE) is the most commonly used method for estimating the parameters of the three-parameter Weibull distribution. However, it returns biased estimates. In this paper, we show how to calculate weights which cancel the biases contained in the MLE equations. The exact weights can be computed when the population parameters are known, and the expected weights when they are not. Two of the three weights' expected values depend only on the sample size, whereas the third also depends on the population shape parameters. Monte Carlo simulations demonstrate the practicability of the weighted MLE method. When compared with the iterative MLE technique, the bias is reduced by a factor of 7 (irrespective of the sample size), and the variability of the parameter estimates is also reduced by a factor of 7 for very small sample sizes, but this gain disappears for large sample sizes.
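For context, the plain (unweighted) three-parameter Weibull MLE whose bias the proposed weights cancel can be sketched as below; the weighted score equations themselves are the article's contribution and are not reproduced here.

```r
set.seed(5)
x <- 0.3 + rweibull(50, shape = 2, scale = 1)   # shift parameter gamma = 0.3

# Negative log-likelihood of the three-parameter Weibull (shift gamma;
# shape and scale kept positive via a log parameterization)
nll <- function(p) {
  gamma <- p[1]; shape <- exp(p[2]); scale <- exp(p[3])
  if (gamma >= min(x)) return(Inf)              # shift must lie below the data
  -sum(dweibull(x - gamma, shape = shape, scale = scale, log = TRUE))
}
fit <- optim(c(min(x) - 0.1, log(1.5), log(1)), nll)
c(gamma = fit$par[1], shape = exp(fit$par[2]), scale = exp(fit$par[3]))
```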

12.
Present optimization techniques in latent class analysis apply the expectation maximization algorithm or the Newton-Raphson algorithm for optimizing the parameter values of a prespecified model. These techniques can be used to find maximum likelihood estimates of the parameters, given the specified structure of the model, which is defined by the number of classes and, possibly, fixation and equality constraints. The model structure is usually chosen on theoretical grounds. A large variety of structurally different latent class models can be compared using goodness-of-fit indices of the chi-square family, Akaike’s information criterion, the Bayesian information criterion, and various other statistics. However, finding the optimal structure for a given goodness-of-fit index often requires a lengthy search in which all kinds of model structures are tested. Moreover, solutions may depend on the choice of initial values for the parameters. This article presents a new method by which one can simultaneously infer the model structure from the data and optimize the parameter values. The method consists of a genetic algorithm in which any goodness-of-fit index can be used as a fitness criterion. In a number of test cases in which data sets from the literature were used, it is shown that this method provides models that fit equally well as or better than the models suggested in the original articles.

13.
梁莘娅　杨艳云, 《心理科学》 (Journal of Psychological Science), 2016, 39(5), 1256-1267
Structural equation modeling (SEM) is widely used for statistical analysis in psychology, education, and the social sciences. The most common estimators in SEM are based on the normal distribution, such as maximum likelihood. These methods rest on two assumptions: first, the theoretical model must correctly reflect the relations among the variables (the structural assumption); second, the data must follow a multivariate normal distribution (the distributional assumption). When these assumptions are violated, normal-theory estimators can yield incorrect chi-square statistics, incorrect fit indices, and biased parameter estimates and standard errors. In practice, almost no theoretical model explains the relations among variables exactly, and data are often non-normal, so newer estimation methods have been developed that either do not require multivariate normality or correct the results for non-normality. Two currently popular methods are robust maximum likelihood and Bayesian estimation. Robust maximum likelihood applies the Satorra and Bentler (1994) corrections to the chi-square statistic and the standard errors, while the parameter estimates themselves are identical to those from ordinary maximum likelihood. Bayesian estimation rests on Bayes' theorem: the posterior distribution of the parameters is proportional to the product of the prior distribution and the likelihood of the data, and is typically simulated with Markov chain Monte Carlo algorithms. Previous comparisons of the two methods were limited to settings in which the theoretical model was correct; the present study focuses on misspecified models and also considers non-normal data. Confirmatory factor models were used, with all data generated by computer simulation. Data generation varied three factors: eight factor structures, three variable distributions, and three sample sizes, yielding 72 conditions (72 = 8 × 3 × 3). In each condition, 2,000 data sets were generated, and each was fitted with two models, one correct and one misspecified, using both robust maximum likelihood and Bayesian estimation with noninformative priors. Analyses focused on model rejection rates, fit, parameter estimates, and standard errors. The results show that with sufficient sample sizes the two methods produce very similar parameter estimates. When data are non-normal, Bayesian estimation rejects misspecified models better than robust maximum likelihood does. When the sample size is small and the data are normal, however, Bayesian estimation shows little advantage in rejecting misspecified models or in parameter estimation, and under some conditions performs worse than robust maximum likelihood.
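The two estimators compared in this study are both available off the shelf; a sketch assuming the lavaan and blavaan packages, with dat and the item names x1–x6 hypothetical:

```r
library(lavaan)
library(blavaan)
model <- 'f1 =~ x1 + x2 + x3
          f2 =~ x4 + x5 + x6'
# Robust ML: Satorra-Bentler corrected chi-square and standard errors
fit_rml <- cfa(model, data = dat, estimator = "MLM")
# Bayesian estimation via MCMC with blavaan's default (diffuse) priors
fit_bayes <- bcfa(model, data = dat)
fitMeasures(fit_rml, c("chisq.scaled", "df", "pvalue.scaled"))
```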

14.
15.
Hierarchical (or multilevel) statistical models have become increasingly popular in psychology in the last few years. In this article, we consider the application of multilevel modeling to the ex-Gaussian, a popular model of response times. We compare single-level and hierarchical methods for estimation of the parameters of ex-Gaussian distributions. In addition, for each approach, we compare maximum likelihood estimation with Bayesian estimation. A set of simulations and analyses of parameter recovery show that although all methods perform adequately well, hierarchical methods are better able to recover the parameters of the ex-Gaussian, by reducing variability in the recovered parameters. At each level, little overall difference was observed between the maximum likelihood and Bayesian methods.

16.
Regularization, or shrinkage estimation, refers to a class of statistical methods that constrain the variability of parameter estimates when fitting models to data. These constraints move parameters toward a group mean or toward a fixed point (e.g., 0). Regularization has gained popularity across many fields for its ability to increase predictive power over classical techniques. However, articles published in JEAB and other behavioral journals have yet to adopt these methods. This paper reviews some common regularization schemes and speculates as to why articles published in JEAB do not use them. In response, we propose our own shrinkage estimator that avoids some of the possible objections associated with the reviewed regularization methods. Our estimator works by mixing weighted individual and group (WIG) data rather than by constraining parameters. We test this method on a problem of model selection. Specifically, we conduct a simulation study on the selection of matching-law-based punishment models, comparing WIG with ordinary least squares (OLS) regression, and find that, on average, WIG outperforms OLS in this context.
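The abstract does not spell out the WIG estimator's weighting scheme, so the following is only a guess at its flavor: each subject's regression is fit to a mixture of that subject's rows (weight w) and the pooled group rows (weight 1 − w), rather than constraining parameters directly. Everything here (variable names, the weight w, the model y ~ x) is illustrative.

```r
wig_coefs <- function(dat, w = 0.8) {
  lapply(split(dat, dat$subject), function(ind) {
    pooled <- rbind(ind, dat)                         # individual + group rows
    wts <- c(rep(w / nrow(ind), nrow(ind)),           # individual weight mass
             rep((1 - w) / nrow(dat), nrow(dat)))     # group weight mass
    coef(lm(y ~ x, data = pooled, weights = wts))
  })
}

set.seed(6)
dat <- data.frame(subject = rep(1:5, each = 10), x = rnorm(50))
dat$y <- 2 + (0.5 + 0.1 * dat$subject) * dat$x + rnorm(50)
wig_coefs(dat)   # per-subject slopes shrunk toward the group slope
```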

17.
To assess the effect of a manipulation on a response time distribution, psychologists often use Vincentizing or quantile averaging to construct group or “average” distributions. We provide a theorem characterizing the large sample properties of the averaged quantiles when the individual RT distributions all belong to the same location-scale family. We then apply the theorem to estimating parameters for the quantile-averaged distributions. From the theorem, it is shown that parameters of the group distribution can be estimated by generalized least squares. This method provides accurate estimates of standard errors of parameters and can therefore be used in formal inference. The method is benchmarked in a small simulation study against both a maximum likelihood method and an ordinary least-squares method. Generalized least squares essentially is the only method based on the averaged quantiles that is both unbiased and provides accurate estimates of parameter standard errors. It is also proved that for location-scale families, performing generalized least squares on quantile averages is formally equivalent to averaging parameter estimates from generalized least squares performed on individuals. A limitation on the method is that individual RT distributions must be members of the same location-scale family.
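For readers unfamiliar with the construction, quantile averaging (Vincentizing) itself is a few lines of base R; the article's contribution is the GLS machinery built on top of the averaged quantiles, which is not reproduced here.

```r
set.seed(7)
probs <- seq(0.1, 0.9, by = 0.1)
# 20 simulated subjects from the same location-scale family (shifted gamma)
rts <- lapply(1:20, function(s) 0.30 + s/100 + rgamma(100, shape = 2, rate = 8))
q_by_subject <- sapply(rts, quantile, probs = probs)  # one column per subject
group_quantiles <- rowMeans(q_by_subject)             # Vincent-averaged quantiles
```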

18.
Correlated multivariate ordinal data can be analysed with structural equation models. Parameter estimation has been tackled in the literature using limited-information methods, including three-stage least squares, and pseudo-likelihood estimation methods such as pairwise maximum likelihood estimation. In this paper, two likelihood ratio test statistics and their asymptotic distributions are derived for testing overall goodness-of-fit and nested models, respectively, under the estimation framework of pairwise maximum likelihood estimation. Simulation results show a satisfactory performance of type I error and power for the proposed test statistics and also suggest that their performance is similar to that of the test statistics derived under the three-stage diagonally weighted and unweighted least squares. Furthermore, the corresponding model selection criteria under the pairwise framework, AIC and BIC, show satisfactory results in selecting the right model in our simulation examples. The derivation of the likelihood ratio test statistics and model selection criteria under the pairwise framework, together with pairwise estimation, provides a flexible framework for fitting and testing structural equation models for ordinal as well as for other types of data. The derived test statistics and the model selection criteria are applied to data on ‘trust in the police’ from the 2010 European Social Survey. The proposed test statistics and the model selection criteria have been implemented in the R package lavaan.
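A sketch of the pairwise maximum likelihood setup in lavaan, the package named in the abstract; the data frame ess and the item names t1–t4 are hypothetical, and the exact test statistics derived in the paper may differ from what this lavaan call reports.

```r
library(lavaan)
model <- 'trust =~ t1 + t2 + t3 + t4'
fit_pml <- cfa(model, data = ess, ordered = paste0("t", 1:4),
               estimator = "PML")   # pairwise maximum likelihood
summary(fit_pml, fit.measures = TRUE)
```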

19.
A direct method in handling incomplete data in general covariance structural models is investigated. Asymptotic statistical properties of the generalized least squares method are developed. It is shown that this approach has very close relationships with the maximum likelihood approach. Iterative procedures for obtaining the generalized least squares estimates, the maximum likelihood estimates, as well as their standard error estimates are derived. Computer programs for the confirmatory factor analysis model are implemented. A longitudinal type data set is used as an example to illustrate the results. This research was supported in part by Research Grant DAD1070 from the U.S. Public Health Service. The author is indebted to anonymous reviewers for some very valuable suggestions. Computer funding is provided by the Computer Services Centre, The Chinese University of Hong Kong.

20.
Standardized estimates for latent interaction effect models without a mean structure
吴艳　温忠麟　侯杰泰, 《心理学报》 (Acta Psychologica Sinica), 2011, 43(10), 1219-1228
Two important advances have been made in recent years in modeling latent interaction effects. First, standardized estimates for latent interaction effect models, together with formulas for computing them, were proposed. Second, it was shown that models without a mean structure can replace the traditional mean-structure models, simplifying the modeling considerably. The standardized estimates, however, were developed within the traditional mean-structure framework; do they still apply in the simplified models? Working within the framework of models without a mean structure, this article derives the standardized form of the latent interaction effect model, the corresponding computing formulas, and the modeling steps. A simulation study compares two estimation methods, maximum likelihood and generalized least squares, and two types of product indicators, matched-pair product indicators and all possible product indicators. The results show that, when computing standardized estimates of interaction effects, matched-pair product indicators should be used, with maximum likelihood as the preferred estimation method.
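A sketch of the recommended setup (matched-pair product indicators, maximum likelihood, no mean structure) in lavaan; the data frame dat and indicator names are hypothetical, the indicators are assumed mean-centered before forming products, and lavaan's default standardization is not the corrected standardized estimate this article derives.

```r
library(lavaan)
# Matched-pair product indicators: x1*y1 and x2*y2 (indicators pre-centered)
dat$x1y1 <- dat$x1 * dat$y1
dat$x2y2 <- dat$x2 * dat$y2
model <- '
  fx  =~ x1 + x2
  fy  =~ y1 + y2
  fxy =~ x1y1 + x2y2        # latent interaction factor
  z ~ fx + fy + fxy
'
fit <- sem(model, data = dat, estimator = "ML", meanstructure = FALSE)
standardizedSolution(fit)
```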
