Similar articles (20 results)
1.
Maximum likelihood estimation in the one‐factor model is based on the assumption of multivariate normality for the observed data. This general distributional assumption implies three specific assumptions for the parameters in the one‐factor model: the common factor has a normal distribution; the residuals are homoscedastic; and the factor loadings do not vary across the common factor scale. When any of these assumptions is violated, non‐normality arises in the observed data. In this paper, a model is presented based on marginal maximum likelihood to enable explicit tests of these assumptions. In addition, the model is suitable to incorporate the detected violations, to enable statistical modelling of these effects. Two simulation studies are reported in which the viability of the model is investigated. Finally, the model is applied to IQ data to demonstrate its practical utility as a means to investigate ability differentiation.
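To make the three assumptions concrete, here is a minimal Python sketch (not from the paper; all parameter values are illustrative assumptions) showing how violating residual homoscedasticity alone induces non-normality in an observed indicator:

```python
import numpy as np
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(1)
n, loading = 5000, 0.8
eta = rng.normal(size=n)                       # normal common factor

# All three assumptions hold: observed score is normal
x_ok = loading * eta + rng.normal(scale=0.6, size=n)

# Violation: residual SD increases along the factor scale (heteroscedasticity)
x_het = loading * eta + rng.normal(size=n) * 0.3 * np.exp(0.4 * eta)

for label, x in [("assumptions hold", x_ok), ("heteroscedastic", x_het)]:
    print(f"{label}: skew={skew(x):.2f}, excess kurtosis={kurtosis(x):.2f}")
```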

2.
A multi‐group factor model is suitable for data originating from different strata. However, it often requires a relatively large sample size to avoid numerical issues such as non‐convergence and non‐positive definite covariance matrices. An alternative is to pool data from different groups and fit a single‐group factor model to the pooled data using maximum likelihood. In this paper, properties of pseudo‐maximum likelihood (PML) estimators for pooled data are studied. The pooled data are assumed to be normally distributed from a single group. The resulting asymptotic efficiency of the PML estimators of factor loadings is compared with that of the multi‐group maximum likelihood estimators. The effect of pooling is investigated through a two‐group factor model. The variances of factor loadings for the pooled data are underestimated under the normal theory when error variances in the smaller group are larger. Underestimation is due to dependence between the pooled factors and pooled error terms. Small‐sample properties of the PML estimators are also investigated using a Monte Carlo study.

3.
Relationships between the results of factor analysis and component analysis are derived when oblique factors have independent clusters with equal variances of unique factors. The factor loadings are analytically shown to be smaller than the corresponding component loadings while the factor correlations are shown to be greater than the corresponding component correlations. The condition for the inequality of the factor/component contributions is derived in the case with different variances for unique factors. Further, the asymptotic standard errors of parameter estimates are obtained for a simplified model with the assumption of multivariate normality, which shows that the component loading estimate is more stable than the corresponding factor loading estimate.
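The loading inequality can be checked numerically. The sketch below is an illustration under assumed parameter values, using sklearn's estimators rather than the paper's analytic derivations, and a single cluster for brevity:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis, PCA

rng = np.random.default_rng(2)
n = 20000
f = rng.normal(size=n)
lam, unique_sd = 0.7, np.sqrt(1 - 0.7 ** 2)    # equal unique variances
X = np.outer(f, [lam] * 4) + rng.normal(scale=unique_sd, size=(n, 4))

fa_load = FactorAnalysis(n_components=1).fit(X).components_.ravel()
pca = PCA(n_components=1).fit(X)
# Rescale the eigenvector to the usual "component loading" metric.
pc_load = pca.components_.ravel() * np.sqrt(pca.explained_variance_[0])

print("factor loadings:   ", np.abs(fa_load).round(3))   # near 0.70
print("component loadings:", np.abs(pc_load).round(3))   # larger, near 0.79
```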

4.
A central assumption that is implicit in estimating item parameters in item response theory (IRT) models is the normality of the latent trait distribution, whereas a similar assumption made in categorical confirmatory factor analysis (CCFA) models is the multivariate normality of the latent response variables. Violation of the normality assumption can lead to biased parameter estimates. Although previous studies have focused primarily on unidimensional IRT models, this study extended the literature by considering a multidimensional IRT model for polytomous responses, namely the multidimensional graded response model. Moreover, this study is one of the few studies that specifically compared the performance of full-information maximum likelihood (FIML) estimation versus robust weighted least squares (WLS) estimation when the normality assumption is violated. The research also manipulated the number of nonnormal latent trait dimensions. Results showed that FIML consistently outperformed WLS when there were one or multiple skewed latent trait distributions. More interestingly, the bias of the discrimination parameters was non-ignorable only when the corresponding factor was skewed. Having other skewed factors did not further exacerbate the bias, whereas biases of boundary parameters increased as more nonnormal factors were added. The item parameter standard errors recovered well with both estimation algorithms regardless of the number of nonnormal dimensions.

5.
In a meta-analysis, the unknown parameters are often estimated using maximum likelihood, and inferences are based on asymptotic theory. It is assumed that, conditional on study characteristics included in the model, the between-study distribution and the sampling distributions of the effect sizes are normal. In practice, however, samples are finite, and the normality assumption may be violated, possibly resulting in biased estimates and inappropriate standard errors. In this article, we propose two parametric and two nonparametric bootstrap methods that can be used to adjust the results of maximum likelihood estimation in meta-analysis and illustrate them with empirical data. A simulation study, with raw data drawn from normal distributions, reveals that the parametric bootstrap methods and one of the nonparametric methods are generally superior to the ordinary maximum likelihood approach but suffer from a bias/precision tradeoff. We recommend using one of these bootstrap methods, but without applying the bias correction.
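The following sketch shows one way a parametric bootstrap can be wired around ML estimation in a random-effects meta-analysis; it assumes the generic model y_i ~ N(mu, tau^2 + v_i) with known sampling variances v_i, and is not necessarily the exact variant evaluated in the article:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def ml_fit(y, v):
    """Return (mu, tau2) maximizing the marginal normal likelihood."""
    def nll(par):
        mu, log_tau2 = par
        return -norm.logpdf(y, mu, np.sqrt(np.exp(log_tau2) + v)).sum()
    res = minimize(nll, x0=[y.mean(), np.log(y.var() + 1e-6)],
                   method="Nelder-Mead")
    return res.x[0], np.exp(res.x[1])

rng = np.random.default_rng(3)
v = rng.uniform(0.02, 0.2, size=15)            # known sampling variances
y = rng.normal(0.4, np.sqrt(0.05 + v))         # true mu = 0.4, tau2 = 0.05

mu_hat, tau2_hat = ml_fit(y, v)
boot = np.array([ml_fit(rng.normal(mu_hat, np.sqrt(tau2_hat + v)), v)
                 for _ in range(500)])         # parametric resamples
print(f"mu = {mu_hat:.3f}, parametric-bootstrap SE = {boot[:, 0].std():.3f}")
print("95% percentile CI:", np.percentile(boot[:, 0], [2.5, 97.5]).round(3))
```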

6.
This paper studies changes in the standard errors (SEs) of the normal-distribution-based maximum likelihood estimates (MLEs) for confirmatory factor models as model parameters vary. Using logical analysis, simplified formulas and numerical verification, monotonic relationships between SEs and factor loadings as well as unique variances are found. Conditions under which monotonic relationships do not exist are also identified. Such functional relationships allow researchers to better understand the problem when significant factor loading estimates are expected but not obtained, and vice versa. These relationships also make explicit what affects the likelihood of Heywood cases (negative unique variance estimates). Empirical findings in the literature are discussed using the obtained results.

7.
梁莘娅  杨艳云 《心理科学》2016,39(5):1256-1267
Structural equation modeling (SEM) is widely used for statistical analysis in psychology, education, and the social sciences. The most common estimation methods in SEM are normal-theory-based estimators such as maximum likelihood (ML). These methods rest on two assumptions. First, the theoretical model must correctly reflect the relationships among the variables (the structural assumption). Second, the data must follow a multivariate normal distribution (the distributional assumption). When these assumptions are violated, normal-theory estimators can produce incorrect chi-square statistics, incorrect fit indices, and biased parameter estimates and standard errors. In practice, almost no theoretical model explains the relationships among variables exactly, and data are often non-normally distributed. New estimation methods have therefore been developed that either do not, in theory, require multivariate normality, or that correct the results distorted by non-normality. Two currently popular approaches are robust maximum likelihood estimation and Bayesian estimation. Robust ML applies the method of Satorra and Bentler (1994) to adjust the incorrect chi-square statistic and the standard errors of parameter estimates, while the parameter estimates themselves are identical to those obtained by ordinary ML. Bayesian estimation is based on Bayes' theorem: the posterior distribution of the parameters is obtained by multiplying the prior distribution by the data likelihood, and the posterior is typically simulated with Markov chain Monte Carlo (MCMC) algorithms. Previous comparisons of robust ML and Bayesian estimation were limited to settings in which the theoretical model is correct. The present study focuses instead on settings in which the theoretical model is misspecified, and also considers non-normally distributed data. The model used is a confirmatory factor model, with all data generated by computer simulation. Data generation varied three factors: 8 factor structures, 3 variable distributions, and 3 sample sizes, yielding 72 simulation conditions (72 = 8 × 3 × 3). Under each condition, 2,000 data sets were generated, and each data set was fitted with two models, one correct and one misspecified. Each model was estimated with both methods: robust maximum likelihood and Bayesian estimation, the latter with noninformative priors. The analysis focused on model rejection rates, model fit, parameter estimates, and the standard errors of the parameter estimates. The results show that with a sufficient sample size, the two methods yield very similar parameter estimates. When the data are non-normal, Bayesian estimation rejects misspecified models better than robust ML. However, when the sample size is insufficient and the data are normal, Bayesian estimation shows almost no advantage in rejecting misspecified models or in parameter estimation, and under some conditions performs worse than robust ML.
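The Bayesian logic summarized above (posterior proportional to prior times likelihood, simulated by MCMC) can be illustrated with a deliberately minimal Metropolis sampler. This sketch estimates a single normal mean under a flat prior, standing in for the far larger CFA posteriors in the study:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
data = rng.normal(0.5, 1.0, size=50)

def log_post(mu):
    # Noninformative (flat) prior, so the posterior is proportional
    # to the likelihood alone.
    return norm.logpdf(data, mu, 1.0).sum()

draws, mu = [], 0.0
for _ in range(5000):
    prop = mu + rng.normal(scale=0.3)          # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(mu):
        mu = prop                              # Metropolis accept step
    draws.append(mu)

post = np.array(draws[1000:])                  # discard burn-in
print(f"posterior mean = {post.mean():.3f}, "
      f"95% credible interval = {np.percentile(post, [2.5, 97.5]).round(3)}")
```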

8.
黎光明  张敏强 《心理学报》2013,45(1):114-124
The bootstrap is a resampling-with-replacement method that can be used to estimate variance components and their variability in generalizability theory. Monte Carlo techniques were used to simulate data from four distributions: normal, binomial, multinomial, and skewed. Based on a p×i design, we investigated whether the corrected bootstrap, relative to the uncorrected bootstrap, improves generalizability-theory estimation of the variance components and their variability for the four simulated distributions. The results show that across all four distributions, from the overall to the local level, and for both point estimates and variability estimates, the corrected bootstrap outperforms the uncorrected bootstrap; that is, the corrected bootstrap improves the estimation of variance components and their variability in generalizability theory.
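A sketch of the plain (uncorrected) person-bootstrap for a p×i design is given below; variance components are recovered from ANOVA mean squares, the data-generating values are assumptions, and the paper's corrections are not reproduced:

```python
import numpy as np

def variance_components(X):
    """X: persons x items matrix, one observation per cell."""
    n_p, n_i = X.shape
    grand = X.mean()
    ms_p = n_i * ((X.mean(axis=1) - grand) ** 2).sum() / (n_p - 1)
    ms_i = n_p * ((X.mean(axis=0) - grand) ** 2).sum() / (n_i - 1)
    resid = X - X.mean(axis=1, keepdims=True) - X.mean(axis=0) + grand
    ms_res = (resid ** 2).sum() / ((n_p - 1) * (n_i - 1))
    # sigma^2(p), sigma^2(i), sigma^2(pi,e)
    return (ms_p - ms_res) / n_i, (ms_i - ms_res) / n_p, ms_res

rng = np.random.default_rng(5)
n_p, n_i = 100, 20
X = (rng.normal(0, 1.0, (n_p, 1)) + rng.normal(0, 0.5, (1, n_i))
     + rng.normal(0, 0.8, (n_p, n_i)))        # true components 1.0, 0.25, 0.64

boot = np.array([variance_components(X[rng.integers(0, n_p, n_p), :])
                 for _ in range(500)])        # resample persons
print("point estimates:", np.round(variance_components(X), 3))
print("bootstrap SEs:  ", boot.std(axis=0).round(3))
```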

9.
A simulation study compared the performance of robust normal theory maximum likelihood (ML) and robust categorical least squares (cat-LS) methodology for estimating confirmatory factor analysis models with ordinal variables. Data were generated from 2 models with 2-7 categories, 4 sample sizes, 2 latent distributions, and 5 patterns of category thresholds. Results revealed that factor loadings and robust standard errors were generally most accurately estimated using cat-LS, especially with fewer than 5 categories; however, factor correlations and model fit were assessed equally well with ML. Cat-LS was found to be more sensitive to sample size and to violations of the assumption of normality of the underlying continuous variables. Normal theory ML was found to be more sensitive to asymmetric category thresholds and was especially biased when estimating large factor loadings. Accordingly, we recommend cat-LS for data sets containing variables with fewer than 5 categories and ML when there are 5 or more categories, sample size is small, and category thresholds are approximately symmetric. With 6-7 categories, results were similar across methods for many conditions; in these cases, either method is acceptable.

10.
Missing data are very common in longitudinal research. Through a Monte Carlo simulation study, this paper examines how the Diggle-Kenward selection model and the ML method, which rest on different assumptions, differ in the accuracy of growth-parameter estimation, taking into account sample size, missing-data proportion, the distribution shape of the target variable, and different missingness mechanisms. The results show: (1) the missingness mechanism has a substantial impact on the MAR-based ML method; under an MNAR mechanism, MAR-based ML estimation of the intercept and slope means in the latent growth model (LGM) is not robust. (2) The Diggle-Kenward selection model is more susceptible to skewness in the target variable's distribution; sample size and skewness interact, and the influence of skewness weakens as sample size grows. The ML method, by contrast, is only slightly affected by skewness, and only under the MNAR mechanism.
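The core MNAR problem can be shown in a few lines. The sketch below is a toy illustration (not the Diggle-Kenward model or a latent growth model): missingness depends on the unobserved value itself, which is exactly what MAR-based estimation cannot absorb:

```python
import numpy as np
from scipy.special import expit

rng = np.random.default_rng(6)
y = rng.normal(0.0, 1.0, size=10000)

# MNAR: the probability of being missing increases with y itself.
missing = rng.uniform(size=y.size) < expit(1.5 * y - 1.0)

print(f"true mean                 = {y.mean():.3f}")
print(f"observed-case mean (MNAR) = {y[~missing].mean():.3f}  # biased downward")
```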

11.
12.
A method of estimating item response theory (IRT) equating coefficients by the common-examinee design with the assumption of the two-parameter logistic model is provided. The method uses the marginal maximum likelihood estimation, in which individual ability parameters in a common-examinee group are numerically integrated out. The abilities of the common examinees are assumed to follow a normal distribution but with an unknown mean and standard deviation on one of the two tests to be equated. The distribution parameters are jointly estimated with the equating coefficients. Further, the asymptotic standard errors of the estimates of the equating coefficients and the parameters for the ability distribution are given. Numerical examples are provided to show the accuracy of the method.
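The computational core of such a procedure is a marginal likelihood in which ability is integrated out numerically. The sketch below uses illustrative item parameters (the equating coefficients themselves are not estimated here) and evaluates the marginal log-likelihood of a 2PL response pattern with ability distributed N(mu, sigma^2), via Gauss-Hermite quadrature:

```python
import numpy as np

def marginal_loglik(resp, a, b, mu, sigma, n_quad=41):
    """Marginal log-likelihood of one 0/1 response pattern under the 2PL."""
    nodes, weights = np.polynomial.hermite_e.hermegauss(n_quad)
    theta = mu + sigma * nodes                 # ability quadrature grid
    w = weights / weights.sum()                # normalized normal weights
    p = 1.0 / (1.0 + np.exp(-a * (theta[:, None] - b)))   # n_quad x items
    lik = np.prod(np.where(resp == 1, p, 1.0 - p), axis=1)
    return np.log(w @ lik)

a = np.array([1.0, 1.5, 0.8])                  # assumed discriminations
b = np.array([-0.5, 0.0, 1.0])                 # assumed difficulties
print(marginal_loglik(np.array([1, 1, 0]), a, b, mu=0.0, sigma=1.0))
```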

13.
Sampling variability of the estimates of factor loadings is often neglected in modern factor analysis, where investigations are generally normal-theory based and asymptotic in nature. The bootstrap, a computer-based methodology, is described and then applied to demonstrate how the sampling variability of the estimates of factor loadings can be estimated for a given set of data. The issue of the number of factors to be retained in a factor model is also addressed. The bootstrap is shown to be an effective data-analytic tool for computing various statistics of interest which are otherwise intractable.
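A minimal version of this bootstrap is sketched below; sklearn's FactorAnalysis stands in for whichever extraction method is used, and the sign-alignment step handles the loading-sign indeterminacy across resamples:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(7)
n = 300
f = rng.normal(size=n)
X = np.outer(f, [0.8, 0.7, 0.6, 0.5]) + rng.normal(scale=0.6, size=(n, 4))

def loadings(data):
    lam = FactorAnalysis(n_components=1).fit(data).components_.ravel()
    return lam * np.sign(lam[0])               # fix the sign indeterminacy

boot = np.array([loadings(X[rng.integers(0, n, n)])   # resample rows
                 for _ in range(500)])
print("loadings:     ", loadings(X).round(3))
print("bootstrap SEs:", boot.std(axis=0).round(3))
```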

14.
In the present paper, a general class of heteroscedastic one‐factor models is considered. In these models, the residual variances of the observed scores are explicitly modelled as parametric functions of the one‐dimensional factor score. A marginal maximum likelihood procedure for parameter estimation is proposed under both the assumption of multivariate normality of the observed scores conditional on the single common factor score and the assumption of normality of the common factor score. A likelihood ratio test is derived, which can be used to test the usual homoscedastic one‐factor model against one of the proposed heteroscedastic models. Simulation studies are carried out to investigate the robustness and the power of this likelihood ratio test. Results show that the asymptotic properties of the test statistic hold under both small test length conditions and small sample size conditions. Results also show under what conditions the power to detect different heteroscedasticity parameter values is either small, medium, or large. Finally, for illustrative purposes, the marginal maximum likelihood estimation procedure and the likelihood ratio test are applied to real data.

15.
The data obtained from one‐way independent groups designs are typically non‐normal in form and rarely equally variable across treatment populations (i.e. population variances are heterogeneous). Consequently, the classical test statistic that is used to assess statistical significance (i.e. the analysis of variance F test) typically provides invalid results (e.g. too many Type I errors, reduced power). For this reason, there has been considerable interest in finding a test statistic that is appropriate under conditions of non‐normality and variance heterogeneity. Previously recommended procedures for analysing such data include the James test, the Welch test applied either to the usual least squares estimators of central tendency and variability, or the Welch test with robust estimators (i.e. trimmed means and Winsorized variances). A new statistic proposed by Krishnamoorthy, Lu, and Mathew, intended to deal with heterogeneous variances, though not non‐normality, uses a parametric bootstrap procedure. In their investigation of the parametric bootstrap test, the authors examined its operating characteristics under limited conditions and did not compare it to the Welch test based on robust estimators. Thus, we investigated how the parametric bootstrap procedure and a modified parametric bootstrap procedure based on trimmed means perform relative to previously recommended procedures when data are non‐normal and heterogeneous. The results indicated that the tests based on trimmed means offer the best Type I error control and power when variances are unequal and at least some of the distribution shapes are non‐normal.
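For reference, here is a self-contained sketch of the Welch test applied to the usual least squares estimators (the trimmed-mean variant would substitute trimmed means and Winsorized variances); the group data are simulated under assumed heterogeneous variances:

```python
import numpy as np
from scipy.stats import f as f_dist

def welch_anova(groups):
    k = len(groups)
    n = np.array([len(g) for g in groups], dtype=float)
    m = np.array([np.mean(g) for g in groups])
    v = np.array([np.var(g, ddof=1) for g in groups])
    w = n / v                                   # precision weights
    grand = (w * m).sum() / w.sum()
    num = (w * (m - grand) ** 2).sum() / (k - 1)
    h = ((1 - w / w.sum()) ** 2 / (n - 1)).sum()
    F = num / (1 + 2 * (k - 2) * h / (k ** 2 - 1))
    df2 = (k ** 2 - 1) / (3 * h)
    return F, k - 1, df2, f_dist.sf(F, k - 1, df2)

rng = np.random.default_rng(8)
groups = [rng.normal(0.0, 1, 20), rng.normal(0.0, 3, 15), rng.normal(0.8, 5, 25)]
F, df1, df2, p = welch_anova(groups)
print(f"Welch F({df1}, {df2:.1f}) = {F:.2f}, p = {p:.3f}")
```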

16.
The nontruncated marginal of a truncated bivariate normal distribution
Inference is considered for the marginal distribution of X, when (X, Y) has a truncated bivariate normal distribution. The Y variable is truncated, but only the X values are observed. The relationship of this distribution to Azzalini's skew-normal distribution is obtained. Method of moments and maximum likelihood estimation are compared for the three-parameter Azzalini distribution. Samples that are uninformative about the skewness of this distribution may occur, even for large n. Profile likelihood methods are employed to describe the uncertainty involved in parameter estimation. A sample of 87 Otis test scores is shown to be well-described by this model.
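The selection mechanism is easy to reproduce. In this sketch (the correlation and sample size are assumptions), Y is truncated at zero, only X is kept, and scipy's skew-normal is fitted by maximum likelihood to the observed X values:

```python
import numpy as np
from scipy.stats import skewnorm

rng = np.random.default_rng(9)
rho = 0.7
xy = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=20000)
x_obs = xy[xy[:, 1] > 0, 0]          # keep X only where Y survives truncation

a, loc, scale = skewnorm.fit(x_obs)  # ML fit of the three-parameter model
print(f"fitted shape = {a:.2f}, loc = {loc:.2f}, scale = {scale:.2f}")
# Theory: X | Y > 0 is skew-normal with shape rho / sqrt(1 - rho^2), ~0.98 here
```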

17.
Applications of item response theory, which depend upon its parameter invariance property, require that parameter estimates be unbiased. A new method, weighted likelihood estimation (WLE), is derived, and proved to be less biased than maximum likelihood estimation (MLE) with the same asymptotic variance and normal distribution. WLE removes the first order bias term from MLE. Two Monte Carlo studies compare WLE with MLE and Bayesian modal estimation (BME) of ability in conventional tests and tailored tests, assuming the item parameters are known constants. The Monte Carlo studies favor WLE over MLE and BME on several criteria over a wide range of the ability scale.
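For the two-parameter logistic model with known item parameters, the WLE can be computed by solving the MLE score equation with Warm's correction term added; the sketch below uses assumed item parameters and a single response pattern:

```python
import numpy as np
from scipy.optimize import brentq

a = np.array([1.2, 0.8, 1.5, 1.0])             # assumed discriminations
b = np.array([-1.0, 0.0, 0.5, 1.5])            # assumed difficulties
resp = np.array([1, 1, 0, 1])                  # observed 0/1 responses

def wle_equation(theta):
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    score = (a * (resp - p)).sum()                       # d logL / d theta
    info = (a ** 2 * p * (1 - p)).sum()                  # test information I
    d_info = (a ** 3 * p * (1 - p) * (1 - 2 * p)).sum()  # I'
    return score + d_info / (2 * info)                   # Warm's correction

theta_wle = brentq(wle_equation, -6, 6)
print(f"WLE ability estimate: {theta_wle:.3f}")
```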

18.
When estimating multiple regression models with incomplete predictor variables, it is necessary to specify a joint distribution for the predictor variables. A convenient assumption is that this distribution is a multivariate normal distribution, which is also the default in many statistical software packages. This distribution will in general be misspecified if predictors with missing data have nonlinear effects (e.g., x2) or are included in interaction terms (e.g., x·z). In the present article, we introduce a factored regression modeling approach for estimating regression models with missing data that is based on maximum likelihood estimation. In this approach, the model likelihood is factorized into a part that is due to the model of interest and a part that is due to the model for the incomplete predictors. In three simulation studies, we showed that the factored regression modeling approach produced valid estimates of interaction and nonlinear effects in regression models with missing values on categorical or continuous predictor variables under a broad range of conditions. We developed the R package mdmb, which facilitates a user-friendly application of the factored regression modeling approach, and present a real-data example that illustrates the flexibility of the software.
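mdmb itself is an R package; the sketch below re-creates the factored-likelihood idea in Python for a regression with a quadratic effect and an incomplete normal predictor, integrating the missing x out by Gauss-Hermite quadrature (the model, missing-data rate, and all values are assumptions):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(10)
n = 400
x = rng.normal(0.5, 1.0, n)
y = 1.0 + 0.6 * x + 0.3 * x ** 2 + rng.normal(0, 0.8, n)
x_mis = x.copy()
x_mis[rng.uniform(size=n) < 0.3] = np.nan      # 30% of predictors missing

nodes, wts = np.polynomial.hermite_e.hermegauss(31)
wts = wts / wts.sum()

def neg_loglik(par):
    b0, b1, b2, log_se, mu_x, log_sx = par
    se, sx = np.exp(log_se), np.exp(log_sx)
    obs = ~np.isnan(x_mis)
    # Complete cases: log f(y | x) + log f(x)
    mean_y = b0 + b1 * x_mis[obs] + b2 * x_mis[obs] ** 2
    ll = (norm.logpdf(y[obs], mean_y, se)
          + norm.logpdf(x_mis[obs], mu_x, sx)).sum()
    # Incomplete cases: log of the integral of f(y | x) f(x) dx by quadrature
    xg = mu_x + sx * nodes                     # grid for the missing x
    mean_g = b0 + b1 * xg + b2 * xg ** 2
    lik_mis = norm.pdf(y[~obs, None], mean_g[None, :], se) @ wts
    return -(ll + np.log(lik_mis + 1e-300).sum())

start = [y.mean(), 0.0, 0.0, 0.0, np.nanmean(x_mis), 0.0]
fit = minimize(neg_loglik, start, method="Nelder-Mead",
               options={"maxiter": 5000})
print("b0, b1, b2:", fit.x[:3].round(3))       # should be near 1.0, 0.6, 0.3
```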

19.
We investigate under what conditions the matrix of factor loadings from the factor analysis model with equal unique variances will give a good approximation to the matrix of factor loadings from the regular factor analysis model. We show that the two models will give similar matrices of factor loadings if Schneeweiss' condition, that the difference between the largest and the smallest value of unique variances is small relative to the sizes of the column sums of squared factor loadings, holds. Furthermore, we generalize our results and discuss the conditions under which the matrix of factor loadings from the regular factor analysis model will be well approximated by the matrix of factor loadings from Jöreskog's image factor analysis model. In particular, we discuss Guttman's condition (i.e., the number of variables increases without limit) for the two models to agree, in relation to the condition we have shown, and conclude that Schneeweiss' condition is a generalization of Guttman's condition. Some implications for practice are discussed.

20.
Spiess, Martin, Jordan, Pascal, & Wendt, Mike. Psychometrika, 2019, 84(1), 212-235

In this paper we propose a simple estimator for unbalanced repeated measures design models where each unit is observed at least once in each cell of the experimental design. The estimator does not require a model of the error covariance structure. Thus, circularity of the error covariance matrix and estimation of correlation parameters and variances are not necessary. Together with a weak assumption about the reason for the varying number of observations, the proposed estimator and its variance estimator are unbiased. As an alternative to confidence intervals based on the normality assumption, a bias-corrected and accelerated bootstrap technique is considered. We also propose the naive percentile bootstrap for Wald-type tests where the standard Wald test may break down when the number of observations is small relative to the number of parameters to be estimated. In a simulation study we illustrate the properties of the estimator and the bootstrap techniques to calculate confidence intervals and conduct hypothesis tests in small and large samples under normality and non-normality of the errors. The results imply that the simple estimator is only slightly less efficient than an estimator that correctly assumes a block structure of the error correlation matrix, a special case of which is an equi-correlation matrix. Application of the estimator and the bootstrap technique is illustrated using data from a task-switch experiment based on a within-subjects experimental design with 32 cells and 33 participants.
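A sketch of the naive percentile bootstrap for a within-subjects contrast is given below; the data are simulated reaction times under assumed values, and whole participants are resampled so within-person dependence is preserved:

```python
import numpy as np

rng = np.random.default_rng(11)
n = 33
cond_a = rng.normal(600, 80, n)                # e.g. RTs in condition A
cond_b = cond_a + rng.normal(25, 40, n)        # condition B, true effect 25

diff = cond_b - cond_a                         # per-participant switch cost
boot = np.array([diff[rng.integers(0, n, n)].mean() for _ in range(2000)])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"mean effect = {diff.mean():.1f}, 95% percentile CI = [{lo:.1f}, {hi:.1f}]")
```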

