Similar Articles
20 similar articles found (search time: 31 ms)
1.
Exploratory factor analysis (EFA) is often conducted with ordinal data (e.g., items with 5-point responses) in the social and behavioral sciences. These ordinal variables are often treated as if they were continuous in practice. An alternative strategy is to assume that a normally distributed continuous variable underlies each ordinal variable. The EFA model is specified for these underlying continuous variables rather than the observed ordinal variables. Although these underlying continuous variables are not observed directly, their correlations can be estimated from the ordinal variables. These correlations are referred to as polychoric correlations. This article is concerned with ordinary least squares (OLS) estimation of parameters in EFA with polychoric correlations. Standard errors and confidence intervals for rotated factor loadings and factor correlations are presented. OLS estimates and the associated standard error estimates and confidence intervals are illustrated using personality trait ratings from 228 college students. Statistical properties of the proposed procedure are explored using a Monte Carlo study. The empirical illustration and the Monte Carlo study showed that (a) OLS estimation of EFA is feasible with large models, (b) point estimates of rotated factor loadings are unbiased, (c) point estimates of factor correlations are slightly negatively biased with small samples, and (d) standard error estimates and confidence intervals perform satisfactorily at moderately large samples.
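The polychoric correlation at the heart of this abstract can be illustrated with a minimal two-stage sketch: estimate thresholds from the marginal category proportions, then maximize the bivariate-normal likelihood over the correlation. This is an illustrative implementation, not the article's OLS EFA procedure; the function names (`thresholds`, `polychoric`) are ours.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal
from scipy.optimize import minimize_scalar

def thresholds(x):
    """Stage 1: normal-theory thresholds from marginal category proportions."""
    k = int(x.max()) + 1
    cum = np.cumsum(np.bincount(x, minlength=k)) / len(x)
    # Interior cut points; +/-8 stand in for +/-infinity.
    return np.concatenate([[-8.0], norm.ppf(cum[:-1]), [8.0]])

def polychoric(x, y):
    """Stage 2: polychoric correlation for two ordinal variables coded 0..k-1."""
    a, b = thresholds(x), thresholds(y)
    counts = np.zeros((len(a) - 1, len(b) - 1))
    for i, j in zip(x, y):
        counts[i, j] += 1

    def negloglik(rho):
        mvn = multivariate_normal(mean=[0.0, 0.0],
                                  cov=[[1.0, rho], [rho, 1.0]])
        ll = 0.0
        for i in range(counts.shape[0]):
            for j in range(counts.shape[1]):
                # Probability mass of the (i, j) cell of the contingency table.
                p = (mvn.cdf([a[i + 1], b[j + 1]]) - mvn.cdf([a[i], b[j + 1]])
                     - mvn.cdf([a[i + 1], b[j]]) + mvn.cdf([a[i], b[j]]))
                ll += counts[i, j] * np.log(max(p, 1e-12))
        return -ll

    return minimize_scalar(negloglik, bounds=(-0.99, 0.99), method="bounded").x

# Simulate: a latent bivariate normal with rho = 0.5, cut into 3 categories.
rng = np.random.default_rng(0)
z = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], size=1500)
x, y = np.digitize(z[:, 0], [-0.5, 0.5]), np.digitize(z[:, 1], [-0.5, 0.5])
rho_hat = polychoric(x, y)
```

The estimate recovers the latent correlation, whereas a Pearson correlation computed directly on the ordinal codes would be attenuated.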

2.
Data in psychology are often collected using Likert-type scales, and it has been shown that factor analysis of Likert-type data is better performed on the polychoric correlation matrix than on the product-moment covariance matrix, especially when the distributions of the observed variables are skewed. In theory, factor analysis of the polychoric correlation matrix is best conducted using generalized least squares with an asymptotically correct weight matrix (AGLS). However, simulation studies showed that both least squares (LS) and diagonally weighted least squares (DWLS) perform better than AGLS, and thus LS or DWLS is routinely used in practice. In either LS or DWLS, the associations among the polychoric correlation coefficients are completely ignored. To bridge this gap between statistical theory and empirical practice, this paper proposes new methods, called ridge GLS, for factor analysis of ordinal data. Monte Carlo results show that, for a wide range of sample sizes, ridge GLS methods yield uniformly more accurate parameter estimates than existing methods (LS, DWLS, AGLS). A real-data example indicates that estimates by ridge GLS are 9-20% more efficient than those by existing methods. Rescaled and adjusted test statistics as well as sandwich-type standard errors following the ridge GLS methods also perform reasonably well.
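The core mechanism of a ridge correction is adding a small positive constant to the diagonal of a nearly singular weight matrix before it is inverted. A toy numpy illustration of the conditioning effect, not the full ridge GLS estimator of the paper; the tuning constant a = 0.1 and matrix sizes are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
# Stand-in for an ill-conditioned estimated weight matrix: a sample
# covariance of p = 30 variables from only n = 35 draws.
p, n = 30, 35
data = rng.standard_normal((n, p))
gamma = np.cov(data, rowvar=False)

# Ridge correction: add a * mean eigenvalue to the diagonal.
a = 0.1
ridge = gamma + a * np.trace(gamma) / p * np.eye(p)

cond_raw = np.linalg.cond(gamma)
cond_ridge = np.linalg.cond(ridge)
```

Because adding c > 0 to every eigenvalue shrinks the ratio of largest to smallest eigenvalue, the ridge matrix is always better conditioned, which stabilizes the inverse used as a GLS weight.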

3.
A Monte Carlo simulation was conducted to investigate the robustness of 4 latent variable interaction modeling approaches (Constrained Product Indicator [CPI], Generalized Appended Product Indicator [GAPI], Unconstrained Product Indicator [UPI], and Latent Moderated Structural Equations [LMS]) under high degrees of nonnormality of the observed exogenous variables. Results showed that the CPI and LMS approaches yielded biased estimates of the interaction effect when the exogenous variables were highly nonnormal. When the violation of normality was not severe (normal; symmetric with excess kurtosis < 1), the LMS approach yielded the most efficient estimates of the latent interaction effect with the highest statistical power. In highly nonnormal conditions, the GAPI and UPI approaches with maximum likelihood (ML) estimation yielded unbiased latent interaction effect estimates, with acceptable actual Type I error rates for both the Wald and likelihood ratio tests of interaction effect at N ≥ 500. An empirical example illustrated the use of the 4 approaches in testing a latent variable interaction between academic self-efficacy and positive family role models in the prediction of academic performance.

4.
We conducted a Monte Carlo study to investigate the performance of the polychoric instrumental variable estimator (PIV) in comparison to unweighted least squares (ULS) and diagonally weighted least squares (DWLS) in the estimation of a confirmatory factor analysis model with dichotomous indicators. The simulation involved 144 conditions (1,000 replications per condition) that were defined by a combination of (a) two types of latent factor models, (b) four sample sizes (100, 250, 500, 1,000), (c) three factor loadings (low, moderate, strong), (d) three levels of non-normality (normal, moderately, and extremely non-normal), and (e) whether the factor model was correctly specified or misspecified. The results showed that when the model was correctly specified, PIV produced estimates that were as accurate as ULS and DWLS. Furthermore, the simulation showed that PIV was more robust to structural misspecifications than ULS and DWLS.

5.
Use of subject scores as manifest variables to assess the relationship between latent variables produces attenuated estimates. This has been demonstrated for raw scores from classical test theory (CTT) and factor scores derived from factor analysis. Conclusions on scores have not been sufficiently extended to item response theory (IRT) theta estimates, which are still recommended for estimating relationships between latent variables. This is because IRT estimates appear to have preferable properties compared with CTT scores, while structural equation modeling (SEM) is often advised as an alternative to scores for estimating the relationship between latent variables. The present research evaluates the consequences of using subject scores as manifest variables in regression models to test the relationship between latent variables. Raw scores and three methods for obtaining theta estimates were used and compared to latent variable SEM modeling. A Monte Carlo study was designed by manipulating sample size, number of items, type of test, and magnitude of the correlation between latent variables. Results show that, despite the advantage of IRT models in other areas, estimates of the relationship between latent variables are always more accurate when SEM models are used. Recommendations are offered for applied researchers.
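The attenuation this abstract describes is easy to reproduce: correlate sum scores from two simulated unidimensional scales and compare with the generating latent correlation. A minimal sketch using the classical disattenuation formula (r divided by the square root of the product of reliabilities) as a stand-in for the model-based SEM correction; the loading 0.7 and the `alpha` helper are our illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)
n, k, rho = 5000, 10, 0.6   # sample size, items per scale, latent correlation

# Two correlated latent traits.
theta = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n)

# Each item = 0.7 * trait + unit-variance noise (tau-equivalent items).
lam = 0.7
x = lam * theta[:, [0]] + rng.standard_normal((n, k))
y = lam * theta[:, [1]] + rng.standard_normal((n, k))

sx, sy = x.sum(axis=1), y.sum(axis=1)
r_obs = np.corrcoef(sx, sy)[0, 1]          # attenuated by measurement error

def alpha(items):
    """Coefficient alpha as a reliability estimate for the sum score."""
    v = np.cov(items, rowvar=False)
    kk = v.shape[0]
    return kk / (kk - 1) * (1 - np.trace(v) / v.sum())

# Classical disattenuation: r / sqrt(rel_x * rel_y).
r_corrected = r_obs / np.sqrt(alpha(x) * alpha(y))
```

With these parameters the sum-score reliability is about .83, so the observed correlation is pulled down to roughly .50 while the corrected value recovers the generating .60.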

6.
This paper presents a new polychoric instrumental variable (PIV) estimator to use in structural equation models (SEMs) with categorical observed variables. The PIV estimator is a generalization of Bollen’s (Psychometrika 61:109–121, 1996) 2SLS/IV estimator for continuous variables to categorical endogenous variables. We derive the PIV estimator and its asymptotic standard errors for the regression coefficients in the latent variable and measurement models. We also provide an estimator of the variance and covariance parameters of the model, asymptotic standard errors for these, and test statistics of overall model fit. We examine this estimator via an empirical study and also via a small simulation study. Our results illustrate the greater robustness of the PIV estimator to structural misspecifications than the system-wide estimators that are commonly applied in SEMs. Kenneth Bollen gratefully acknowledges support from NSF SES 0617276, NIDA 1-RO1-DA13148-01, and DA013148-05A2. Albert Maydeu-Olivares was supported by the Department of Universities, Research and Information Society (DURSI) of the Catalan Government, and by grant BSO2003-08507 from the Spanish Ministry of Science and Technology. We thank Sharon Christ, John Hipp, and Shawn Bauldry for research assistance. The comments of the members of the Carolina Structural Equation Modeling (CSEM) group are greatly appreciated. An earlier version of this paper under a different title was presented by K. Bollen at the Psychometric Society Meetings, June, 2002, Chapel Hill, North Carolina.
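The 2SLS/IV idea that PIV generalizes can be shown in a generic regression setting: when a regressor is correlated with the disturbance, OLS is inconsistent, while an instrument correlated with the regressor but not the disturbance recovers the structural coefficient. A numpy sketch of the generic estimator, not the PIV estimator itself; all coefficients here are our simulation choices.

```python
import numpy as np

rng = np.random.default_rng(3)
n, beta = 20000, 0.5

z = rng.standard_normal(n)          # instrument: related to x, not to u
u = rng.standard_normal(n)          # structural disturbance
x = 0.8 * z + 0.6 * u + rng.standard_normal(n)   # endogenous regressor
y = beta * x + u

ols = (x @ y) / (x @ x)             # inconsistent: cov(x, u) != 0
iv = (z @ y) / (z @ x)              # simple IV estimator

# Two-stage least squares form (identical to IV with one instrument):
x_hat = z * ((z @ x) / (z @ z))     # stage 1: project x on z
tsls = (x_hat @ y) / (x_hat @ x_hat)  # stage 2: regress y on fitted x
```

Here OLS converges to about 0.8 rather than the true 0.5, while the IV and 2SLS estimates coincide and are consistent.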

7.
Discretized multivariate normal structural models are often estimated using multistage estimation procedures. The asymptotic properties of parameter estimates, standard errors, and tests of structural restrictions on thresholds and polychoric correlations are well known. It was not clear how to assess the overall discrepancy between the contingency table and the model for these estimators. It is shown that the overall discrepancy can be decomposed into a distributional discrepancy and a structural discrepancy. A test of the overall model specification is proposed, as well as a test of the distributional specification (i.e., discretized multivariate normality). Also, the small sample performance of overall, distributional, and structural tests, as well as of parameter estimates and standard errors is investigated under conditions of correct model specification and also under mild structural and/or distributional misspecification. It is found that relatively small samples are needed for parameter estimates, standard errors, and structural tests. Larger samples are needed for the distributional and overall tests. Furthermore, parameter estimates, standard errors, and structural tests are surprisingly robust to distributional misspecification. This research was supported by the Department of Universities, Research and Information Society (DURSI) of the Catalan Government, and by grants BSO2000-0661 and BSO2003-08507 of the Spanish Ministry of Science and Technology.

8.
This research presents the inferential statistics for Cronbach's coefficient alpha on the basis of the standard statistical assumption of multivariate normality. The estimation of alpha's standard error (ASE) and confidence intervals are described, and the authors analytically and empirically investigate the effects of the components of these equations. The authors then demonstrate the superiority of this estimate compared with previous derivations of ASE in a separate Monte Carlo simulation. The authors also present a sampling error and test statistic for a test of independent sample alphas. They conclude with a recommendation that all alpha coefficients be reported in conjunction with standard error or confidence interval estimates and offer SAS and SPSS programming codes for easy implementation.
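A compact Python sketch of coefficient alpha with a normal-theory delta-method standard error. The ASE expression below is the form we recall from Duhachek and Iacobucci's (2004) derivation and should be checked against the article before serious use; the function names and the simulated tau-equivalent items are our own.

```python
import numpy as np

def cronbach_alpha(data):
    """Coefficient alpha from an n x k matrix of item scores."""
    v = np.cov(data, rowvar=False)
    k = v.shape[0]
    return k / (k - 1) * (1 - np.trace(v) / v.sum())

def alpha_se(data):
    """Delta-method ASE of alpha under multivariate normality
    (expression as we recall it from Duhachek & Iacobucci, 2004)."""
    v = np.cov(data, rowvar=False)
    n, k = data.shape
    j = np.ones(k)
    jvj = j @ v @ j                     # 1'V1: total score variance
    q = (2 * k**2 / (k - 1) ** 2) * (
        jvj * (np.trace(v @ v) + np.trace(v) ** 2)
        - 2 * np.trace(v) * (j @ v @ v @ j)
    ) / jvj**3
    return np.sqrt(q / n)

rng = np.random.default_rng(4)
true_score = rng.standard_normal((300, 1))
items = true_score + rng.standard_normal((300, 6))   # 6 tau-equivalent items
a = cronbach_alpha(items)
se = alpha_se(items)
ci = (a - 1.96 * se, a + 1.96 * se)   # 95% confidence interval for alpha
```

Reporting the interval alongside the point estimate is exactly the practice the abstract recommends.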

9.
Many variables that are used in social and behavioural science research are ordinal categorical or polytomous variables. When more than one polytomous variable is involved in an analysis, observations are classified in a contingency table, and a commonly used statistic for describing the association between two variables is the polychoric correlation. This paper investigates the estimation of the polychoric correlation when the data set consists of misclassified observations. Two approaches for estimating the polychoric correlation have been developed. One assumes that the probabilities in relation to misclassification are known, and the other uses a double sampling scheme to obtain information on misclassification. A parameter estimation procedure is developed, and statistical properties for the estimates are discussed. The practicability and applicability of the proposed approaches are illustrated by analysing data sets that are based on real and generated data. Excel programmes with Visual Basic for Applications (VBA) have been developed to compute the estimate of the polychoric correlation and its standard error. The use of the structural equation modelling programme Mx to find parameter estimates in the double sampling scheme is discussed.

10.
A frequent topic of psychological research is the estimation of the correlation between two variables from a sample that underwent a selection process based on a third variable. Due to indirect range restriction, the sample correlation is a biased estimator of the population correlation, and a correction formula is used. In the past, bootstrap standard error and confidence intervals for the corrected correlations were examined with normal data. The present study proposes a large-sample estimate (an analytic method) for the standard error, and a corresponding confidence interval for the corrected correlation. Monte Carlo simulation studies involving both normal and non-normal data were conducted to examine the empirical performance of the bootstrap and analytic methods. Results indicated that with both normal and non-normal data, the bootstrap standard error and confidence interval were generally accurate across simulation conditions (restricted sample size, selection ratio, and population correlations) and outperformed estimates of the analytic method. However, with certain combinations of distribution type and model conditions, the analytic method has an advantage, offering reasonable estimates of the standard error and confidence interval without resorting to the bootstrap procedure's computer-intensive approach. We provide SAS code for the simulation studies.
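The bootstrap-standard-error idea can be sketched with the simpler direct-restriction case: the article concerns indirect restriction through a third variable, but for illustration we apply the classical Thorndike Case 2 correction after explicit selection on x, then bootstrap the corrected correlation. The function name and selection rule are our assumptions.

```python
import numpy as np

def correct_rr(r, sd_unrestricted, sd_restricted):
    """Thorndike Case 2 correction for direct range restriction."""
    u = sd_unrestricted / sd_restricted
    return r * u / np.sqrt(1 + r**2 * (u**2 - 1))

rng = np.random.default_rng(5)
pop = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], size=50000)
sel = pop[pop[:, 0] > 0.5]                 # explicit selection on x
x, y = sel[:, 0], sel[:, 1]

r_restricted = np.corrcoef(x, y)[0, 1]
r_corrected = correct_rr(r_restricted, 1.0, x.std(ddof=1))

# Bootstrap standard error of the corrected correlation.
boot = []
idx = np.arange(len(x))
for _ in range(500):
    b = rng.choice(idx, size=len(idx), replace=True)
    rb = np.corrcoef(x[b], y[b])[0, 1]
    boot.append(correct_rr(rb, 1.0, x[b].std(ddof=1)))
se_boot = np.std(boot, ddof=1)
```

Selection shrinks the observed correlation from .50 to roughly .29 here; the correction restores it, and the bootstrap spread provides the standard error the analytic method approximates in closed form.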

11.
This paper examines the implications of violating assumptions concerning the continuity and distributional properties of data in establishing measurement models in social science research. The General Health Questionnaire-12 uses an ordinal response scale. Responses to the GHQ-12 from 201 Hong Kong immigrants on arrival in Australia showed that the data were not normally distributed. A series of confirmatory factor analyses using either a Pearson product-moment or a polychoric correlation input matrix and employing either maximum likelihood, weighted least squares or diagonally weighted least squares estimation methods were conducted on the data. The parameter estimates and goodness-of-fit statistics provided support for using polychoric correlations and diagonally weighted least squares estimation when analyzing ordinal, nonnormal data.

12.
We describe and test the quantile maximum probability estimator (QMPE), an open-source ANSI Fortran 90 program for response time distribution estimation. QMPE enables users to estimate parameters for the ex-Gaussian and Gumbel (1958) distributions, along with three "shifted" distributions (i.e., distributions with a parameter-dependent lower bound): the Lognormal, Wald, and Weibull distributions. Estimation can be performed using either the standard continuous maximum likelihood (CML) method or quantile maximum probability (QMP; Heathcote & Brown, in press). We review the properties of each distribution and the theoretical evidence showing that CML estimates fail for some cases with shifted distributions, whereas QMP estimates do not. In cases in which CML does not fail, a Monte Carlo investigation showed that QMP estimates were usually as good, and in some cases better, than CML estimates. However, the Monte Carlo study also uncovered problems that can occur with both CML and QMP estimates, particularly when samples are small and skew is low, highlighting the difficulties of estimating distributions with parameter-dependent lower bounds.
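The CML route for the ex-Gaussian is available off the shelf: SciPy's `exponnorm` is the ex-Gaussian with shape K = tau/sigma. A sketch of generating response times with known parameters and recovering them by continuous maximum likelihood; this illustrates CML only, not the QMP method or the QMPE program.

```python
import numpy as np
from scipy.stats import exponnorm

rng = np.random.default_rng(6)
mu, sigma, tau = 0.4, 0.05, 0.10        # ex-Gaussian parameters (seconds)

# scipy parameterizes the ex-Gaussian as exponnorm(K, loc, scale)
# with K = tau / sigma, loc = mu, scale = sigma.
rts = exponnorm.rvs(tau / sigma, loc=mu, scale=sigma,
                    size=5000, random_state=rng)

# Continuous maximum likelihood fit; returns (K, loc, scale).
k_hat, mu_hat, sigma_hat = exponnorm.fit(rts)
tau_hat = k_hat * sigma_hat
```

With a shifted distribution such as the Wald or Weibull, the analogous `scipy.stats` fit can fail when the lower bound approaches the smallest observation, which is the failure mode the abstract's QMP method is designed to avoid.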

13.
While conventional hierarchical linear modeling is applicable to purely hierarchical data, a multiple membership random effects model (MMrem) is appropriate for nonpurely nested data wherein some lower-level units manifest mobility across higher-level units. Although a few recent studies have investigated the influence of cluster-level residual non-normality on hierarchical linear modeling estimation for purely hierarchical data, no research has examined the statistical performance of an MMrem given residual non-normality. The purpose of the present study was to extend prior research on the influence of residual non-normality from purely nested data structures to multiple membership data structures. Employing a Monte Carlo simulation study, this research inquiry examined two-level MMrem parameter estimate biases and inferential errors. Simulation factors included the level-two residual distribution, sample sizes, intracluster correlation coefficient, and mobility rate. Results showed that estimates of fixed effect parameters and the level-one variance component were robust to level-two residual non-normality. The level-two variance component, however, was sensitive to level-two residual non-normality and sample size. Coverage rates of the 95% credible intervals deviated from the assumed nominal value when level-two residuals were non-normal. These findings can be useful in the application of an MMrem to account for the contextual effects of multiple higher-level units.

15.
This paper proposes test statistics based on the likelihood ratio principle for testing equality of proportions in correlated data with additional incomplete samples. Powers of these tests are compared through Monte Carlo simulation with those of tests proposed recently by Ekbohm (based on an unbiased estimator) and Campbell (based on a Pearson chi-squared-type statistic). Even though tests based on the maximum likelihood principle are theoretically expected to be superior to others, at least asymptotically, results from our simulations show that the gain in power could only be slight.

16.
Applications of item response theory, which depend upon its parameter invariance property, require that parameter estimates be unbiased. A new method, weighted likelihood estimation (WLE), is derived, and proved to be less biased than maximum likelihood estimation (MLE) with the same asymptotic variance and normal distribution. WLE removes the first order bias term from MLE. Two Monte Carlo studies compare WLE with MLE and Bayesian modal estimation (BME) of ability in conventional tests and tailored tests, assuming the item parameters are known constants. The Monte Carlo studies favor WLE over MLE and BME on several criteria over a wide range of the ability scale.
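For the Rasch model, Warm's WLE maximizes the likelihood weighted by the square root of the test information, L(theta) * I(theta)^(1/2). A minimal sketch under that formulation, assuming known item difficulties; it also shows the practical payoff that the WLE stays finite for a perfect response pattern, where the MLE diverges. The function name is ours.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def wle_rasch(responses, b):
    """Warm's weighted likelihood ability estimate for the Rasch model:
    maximize log L(theta) + 0.5 * log I(theta)."""
    responses = np.asarray(responses, float)
    b = np.asarray(b, float)

    def neg_weighted_loglik(theta):
        p = 1.0 / (1.0 + np.exp(-(theta - b)))            # item response probs
        loglik = np.sum(responses * np.log(p)
                        + (1 - responses) * np.log(1 - p))
        info = np.sum(p * (1 - p))                        # test information
        return -(loglik + 0.5 * np.log(info))

    return minimize_scalar(neg_weighted_loglik,
                           bounds=(-10, 10), method="bounded").x

b = [-1.0, 0.0, 1.0]                      # known item difficulties
theta_perfect = wle_rasch([1, 1, 1], b)   # MLE would be +infinity here
theta_mixed = wle_rasch([1, 1, 0], b)
```

The square-root-information weight is what removes the first-order bias term of the MLE that the abstract refers to.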

17.
The generalized graded unfolding model (GGUM) is capable of analyzing polytomously scored unfolding data such as agree-disagree responses to attitude statements. In the present study, we proposed a GGUM with a structural equation for subject parameters, which enabled us to evaluate the relation between subject parameters and covariates and/or latent variables simultaneously, in order to avoid the influence of attenuation. Additionally, an algorithm for parameter estimation is newly implemented via the Markov chain Monte Carlo (MCMC) method, based on Bayesian statistics. In the simulation, we compared the accuracy of estimates of regression coefficients between the proposed model and a conventional method using a GGUM (where regression coefficients are estimated using estimates of θ). As a result, the proposed model performed much better than the conventional method in terms of bias and root mean squared errors of estimates of regression coefficients. The study concluded by verifying the efficacy of the proposed model, using an actual data example of attitude measurement.

18.
This paper reports on a simulation study that evaluated the performance of five structural equation model test statistics appropriate for categorical data. Both Type I error rate and power were investigated. Different model sizes, sample sizes, numbers of categories, and threshold distributions were considered. Statistics associated with both the diagonally weighted least squares (cat-DWLS) estimator and with the unweighted least squares (cat-ULS) estimator were studied. Recent research suggests that cat-ULS parameter estimates and robust standard errors slightly outperform cat-DWLS estimates and robust standard errors (Forero, Maydeu-Olivares, & Gallardo-Pujol, 2009). The findings of the present research suggest that the mean- and variance-adjusted test statistic associated with the cat-ULS estimator performs best overall. A new version of this statistic now exists that does not require a degrees-of-freedom adjustment (Asparouhov & Muthén, 2010), and this statistic is recommended. Overall, the cat-ULS estimator is recommended over cat-DWLS, particularly in small to medium sample sizes.

19.
Ayala Cohen, Psychometrika, 1986, 51(3), 379–391
A test is proposed for the equality of the variances of k ≥ 2 correlated variables. Pitman's test for k = 2 reduces the null hypothesis to zero correlation between their sum and their difference. Its extension, eliminating nuisance parameters by a bootstrap procedure, is valid for any correlation structure between the k normally distributed variables. A Monte Carlo study for several combinations of sample sizes and number of variables is presented, comparing the level and power of the new method with previously published tests. Some nonnormal data are included, for which the empirical level tends to be slightly higher than the nominal one. The results show that our method is close in power to the asymptotic tests which are extremely sensitive to nonnormality, yet it is robust and much more powerful than other robust tests. This research was supported by the fund for the promotion of research at the Technion.
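Pitman's device for k = 2 is short enough to sketch directly: the two variances are equal exactly when the sum and the difference of the variables are uncorrelated, so testing equal variances reduces to testing a zero correlation. A minimal illustration with `scipy.stats.pearsonr`; the simulated variables are our own example, and this shows only the k = 2 case, not the paper's bootstrap extension.

```python
import numpy as np
from scipy.stats import pearsonr

def pitman_test(x, y):
    """Pitman's test of equal variances for two correlated variables:
    under H0 (var x = var y), corr(x + y, x - y) = 0."""
    r, p = pearsonr(x + y, x - y)
    return r, p

rng = np.random.default_rng(7)
n = 500
e = rng.standard_normal((n, 2))
x = e[:, 0]
y_equal = 0.6 * x + 0.8 * e[:, 1]    # var(y) = 0.36 + 0.64 = 1 = var(x)
y_unequal = 2.0 * y_equal            # variance inflated to 4

r0, p0 = pitman_test(x, y_equal)     # H0 true: near-zero correlation
r1, p1 = pitman_test(x, y_unequal)   # H0 false: strong negative correlation
```

The identity behind the test is cov(x + y, x − y) = var(x) − var(y), so any variance difference shows up as a nonzero correlation between sum and difference.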

20.
Structural equation modeling is a well-known technique for studying relationships among multivariate data. In practice, high dimensional nonnormal data with small to medium sample sizes are very common, and large sample theory, on which almost all modeling statistics are based, cannot be invoked for model evaluation with test statistics. The most natural method for nonnormal data, the asymptotically distribution free procedure, is not defined when the sample size is less than the number of nonduplicated elements in the sample covariance. Since normal theory maximum likelihood estimation remains defined for intermediate to small sample size, it may be invoked but with the probable consequence of distorted performance in model evaluation. This article studies the small sample behavior of several test statistics that are based on maximum likelihood estimator, but are designed to perform better with nonnormal data. We aim to identify statistics that work reasonably well for a range of small sample sizes and distribution conditions. Monte Carlo results indicate that Yuan and Bentler's recently proposed F-statistic performs satisfactorily.

