Similar Literature
20 similar records found.
1.
A central assumption that is implicit in estimating item parameters in item response theory (IRT) models is the normality of the latent trait distribution, whereas a similar assumption made in categorical confirmatory factor analysis (CCFA) models is the multivariate normality of the latent response variables. Violation of the normality assumption can lead to biased parameter estimates. Although previous studies have focused primarily on unidimensional IRT models, this study extended the literature by considering a multidimensional IRT model for polytomous responses, namely the multidimensional graded response model. Moreover, this study is one of the few studies that specifically compared the performance of full-information maximum likelihood (FIML) estimation versus robust weighted least squares (WLS) estimation when the normality assumption is violated. The research also manipulated the number of nonnormal latent trait dimensions. Results showed that FIML consistently outperformed WLS when one or more latent trait distributions were skewed. More interestingly, the bias of the discrimination parameters was non-ignorable only when the corresponding factor was skewed. Having other skewed factors did not further exacerbate the bias, whereas biases of boundary parameters increased as more nonnormal factors were added. The item parameter standard errors were recovered well by both estimation algorithms regardless of the number of nonnormal dimensions.
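For orientation, a common slope–intercept parameterization of the multidimensional graded response model is sketched below; the study's exact parameterization may differ (e.g., slope–threshold form), so this is illustrative rather than a restatement of the authors' model.

```latex
% Multidimensional graded response model (slope-intercept form; illustrative).
% Item j has ordered categories k = 0, ..., K_j; \theta_i is the latent trait vector.
\begin{aligned}
P\bigl(Y_{ij} \ge k \mid \boldsymbol{\theta}_i\bigr)
  &= \frac{1}{1 + \exp\!\bigl[-(\mathbf{a}_j^{\top}\boldsymbol{\theta}_i + d_{jk})\bigr]},
  \qquad k = 1, \dots, K_j,\\
P\bigl(Y_{ij} = k \mid \boldsymbol{\theta}_i\bigr)
  &= P\bigl(Y_{ij} \ge k \mid \boldsymbol{\theta}_i\bigr)
   - P\bigl(Y_{ij} \ge k+1 \mid \boldsymbol{\theta}_i\bigr),
\end{aligned}
```

with the conventions P(Y_{ij} ≥ 0) = 1 and P(Y_{ij} ≥ K_j + 1) = 0. Here a_j collects the discrimination (slope) parameters and d_{jk} the boundary parameters referred to in the abstract.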

2.
Randomization tests are valid alternatives to parametric tests like the t test and analysis of variance when the normality or random sampling assumptions of these tests are violated. Three SPSS programs are listed and described that will conduct approximate randomization tests for testing the null hypotheses that two or more means or distributions are the same or that two variables are independent (i.e., uncorrelated or “randomly associated”). The programs will work on both desktop and mainframe versions of SPSS. Although the SPSS programs are slower on desktop machines than software designed explicitly for randomization tests, these programs bring randomization tests into the reach of researchers who prefer the SPSS computing environment for data analysis.
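The SPSS programs themselves are not reproduced here; as a language-neutral illustration of the same idea, the following is a minimal approximate randomization test for a two-group mean difference in Python. The function name and the 10,000-permutation default are my own choices, not the authors'.

```python
import numpy as np

def randomization_test(x, y, n_perm=10_000, seed=None):
    """Approximate randomization test for the difference in means of two groups.

    Under the null hypothesis that both groups come from the same distribution,
    group labels are exchangeable, so we repeatedly shuffle the pooled data and
    recompute the test statistic.
    """
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([x, y])
    n_x = len(x)
    observed = np.mean(x) - np.mean(y)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = pooled[:n_x].mean() - pooled[n_x:].mean()
        if abs(diff) >= abs(observed):
            count += 1
    # +1 in numerator and denominator keeps the Monte Carlo p-value away from 0
    return (count + 1) / (n_perm + 1)

# Example: two small samples where normality is doubtful
x = np.array([3.1, 2.8, 4.0, 3.5, 9.9])
y = np.array([2.0, 2.2, 1.8, 2.5, 2.1])
print(randomization_test(x, y, seed=42))
```

Because only a random subset of all possible permutations is sampled, the p-value is approximate; increasing `n_perm` reduces the Monte Carlo error.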

3.
Relationships between the results of factor analysis and component analysis are derived when oblique factors have independent clusters with equal variances of unique factors. The factor loadings are analytically shown to be smaller than the corresponding component loadings while the factor correlations are shown to be greater than the corresponding component correlations. The condition for the inequality of the factor/component contributions is derived in the case with different variances for unique factors. Further, the asymptotic standard errors of parameter estimates are obtained for a simplified model with the assumption of multivariate normality, which shows that the component loading estimate is more stable than the corresponding factor loading estimate.
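A quick numerical sanity check of the loading inequality in the simplest (single-factor, equal-loading) case is sketched below using simulated data in Python; it only illustrates the population result described above and is not the authors' derivation.

```python
import numpy as np
from sklearn.decomposition import PCA, FactorAnalysis

rng = np.random.default_rng(0)
n, p, lam = 50_000, 6, 0.7                      # six items, true factor loading 0.7
f = rng.standard_normal(n)
X = lam * f[:, None] + np.sqrt(1 - lam**2) * rng.standard_normal((n, p))

fa_load = FactorAnalysis(n_components=1).fit(X).components_.ravel()
pca = PCA(n_components=1).fit(X)
pc_load = (pca.components_ * np.sqrt(pca.explained_variance_[:, None])).ravel()

print(np.round(np.abs(fa_load), 3))   # factor loadings: close to 0.70
print(np.round(np.abs(pc_load), 3))   # component loadings: noticeably larger (about 0.76)
```

The component loadings absorb part of the unique variance, which is why they exceed the corresponding factor loadings.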

4.
Current practice in structural modeling of observed continuous random variables is limited to representation systems for first and second moments (e.g., means and covariances), and to distribution theory based on multivariate normality. In psychometrics the multinormality assumption is often incorrect, so that statistical tests on parameters, or model goodness of fit, will frequently be incorrect as well. It is shown that higher order product moments yield important structural information when the distribution of variables is arbitrary. Structural representations are developed for generalizations of the Bentler-Weeks, Jöreskog-Keesling-Wiley, and factor analytic models. Some asymptotically distribution-free efficient estimators for such arbitrary structural models are developed. Limited information estimators are obtained as well. The special case of elliptical distributions that allow nonzero but equal kurtoses for variables is discussed in some detail. The argument is made that multivariate normal theory for covariance structure models should be abandoned in favor of elliptical theory, which is only slightly more difficult to apply in practice but specializes to the traditional case when normality holds. Many open research areas are described.

5.
This research presents the inferential statistics for Cronbach's coefficient alpha on the basis of the standard statistical assumption of multivariate normality. The estimation of alpha's standard error (ASE) and confidence intervals are described, and the authors analytically and empirically investigate the effects of the components of these equations. The authors then demonstrate the superiority of this estimate compared with previous derivations of ASE in a separate Monte Carlo simulation. The authors also present a sampling error and test statistic for a test of independent sample alphas. They conclude with a recommendation that all alpha coefficients be reported in conjunction with standard error or confidence interval estimates and offer SAS and SPSS programming codes for easy implementation.
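As a rough companion to the abstract, the Python sketch below computes coefficient alpha together with a normal-theory standard error in the form given by van Zyl, Neudecker, and Nel (2000), which this literature typically builds on; whether it matches the authors' ASE exactly is an assumption on my part, and a bootstrap interval is an easy cross-check.

```python
import numpy as np

def cronbach_alpha_with_se(data):
    """Coefficient alpha and a normal-theory standard error.

    `data` is an n x k matrix (respondents x items). The SE uses the asymptotic
    variance of van Zyl, Neudecker & Nel (2000), which assumes multivariate
    normality of the item scores.
    """
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    S = np.cov(data, rowvar=False)            # k x k item covariance matrix
    j = np.ones(k)
    jSj = j @ S @ j                           # variance of the total score
    alpha = k / (k - 1) * (1 - np.trace(S) / jSj)
    # asymptotic variance of alpha-hat, scaled by n
    Q = (2 * k**2 / (k - 1) ** 2) * (
        jSj * (np.trace(S @ S) + np.trace(S) ** 2) - 2 * np.trace(S) * (j @ S @ S @ j)
    ) / jSj**3
    return alpha, np.sqrt(Q / n)

# Example: simulated 5-item scale
rng = np.random.default_rng(1)
true_score = rng.standard_normal((200, 1))
items = 0.7 * true_score + 0.7 * rng.standard_normal((200, 5))
a, se = cronbach_alpha_with_se(items)
print(f"alpha = {a:.3f}, 95% CI = [{a - 1.96 * se:.3f}, {a + 1.96 * se:.3f}]")
```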

6.
Maximum likelihood estimation in the one‐factor model is based on the assumption of multivariate normality for the observed data. This general distributional assumption implies three specific assumptions for the parameters in the one‐factor model: the common factor has a normal distribution; the residuals are homoscedastic; and the factor loadings do not vary across the common factor scale. When any of these assumptions is violated, non‐normality arises in the observed data. In this paper, a model is presented based on marginal maximum likelihood to enable explicit tests of these assumptions. In addition, the model is suitable to incorporate the detected violations, to enable statistical modelling of these effects. Two simulation studies are reported in which the viability of the model is investigated. Finally, the model is applied to IQ data to demonstrate its practical utility as a means to investigate ability differentiation.

7.
8.
Given multivariate multiblock data (e.g., subjects nested in groups are measured on multiple variables), one may be interested in the nature and number of dimensions that underlie the variables, and in differences in dimensional structure across data blocks. To this end, clusterwise simultaneous component analysis (SCA) was proposed which simultaneously clusters blocks with a similar structure and performs an SCA per cluster. However, the number of components was restricted to be the same across clusters, which is often unrealistic. In this paper, this restriction is removed. The resulting challenges with respect to model estimation and selection are resolved.

9.
A new model for simultaneous component analysis (SCA) is introduced that contains the existing SCA models with common loading matrix as special cases. The new SCA-T3 model is a multi-set generalization of the Tucker3 model for component analysis of three-way data. For each mode (observational units, variables, sets) a different number of components can be chosen and the obtained solution can be rotated without loss of fit to facilitate interpretation. SCA-T3 can be fitted on centered multi-set data and also on the corresponding covariance matrices. For this purpose, alternating least squares algorithms are derived. SCA-T3 is evaluated in a simulation study, and its practical merits are demonstrated for several benchmark datasets.

10.
Principal components analysis (PCA) is used to explore the structure of data sets containing linearly related numeric variables. Alternatively, nonlinear PCA can handle possibly nonlinearly related numeric as well as nonnumeric variables. For linear PCA, the stability of its solution can be established under the assumption of multivariate normality. For nonlinear PCA, however, standard options for establishing stability are not provided. The authors use the nonparametric bootstrap procedure to assess the stability of nonlinear PCA results, applied to empirical data. They use confidence intervals for the variable transformations and confidence ellipses for the eigenvalues, the component loadings, and the person scores. They discuss the balanced version of the bootstrap, bias estimation, and Procrustes rotation. To provide a benchmark, the same bootstrap procedure is applied to linear PCA on the same data. On the basis of the results, the authors advise using at least 1,000 bootstrap samples, using Procrustes rotation on the bootstrap results, examining the bootstrap distributions along with the confidence regions, and merging categories with small marginal frequencies to reduce the variance of the bootstrap results.
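To make the procedure concrete, here is a minimal Python sketch of the bootstrap-with-Procrustes idea for the linear-PCA benchmark; the nonlinear (optimal scaling) PCA step, the balanced bootstrap, and bias estimation are not reproduced, and the function names are my own.

```python
import numpy as np

def pca_loadings(X, n_comp):
    """Component loadings: eigenvectors scaled by the square roots of the eigenvalues."""
    Xc = X - X.mean(axis=0)
    _, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:n_comp].T * (s[:n_comp] / np.sqrt(len(X) - 1))

def procrustes_rotate(A, target):
    """Orthogonally rotate A toward the target loading matrix (min ||A R - target||_F)."""
    U, _, Vt = np.linalg.svd(A.T @ target)
    return A @ (U @ Vt)

def bootstrap_pca(X, n_comp=2, n_boot=1000, seed=None):
    """Percentile confidence intervals for PCA loadings via the nonparametric bootstrap."""
    rng = np.random.default_rng(seed)
    target = pca_loadings(X, n_comp)
    boot = np.empty((n_boot,) + target.shape)
    for b in range(n_boot):
        idx = rng.integers(0, len(X), size=len(X))     # resample rows with replacement
        boot[b] = procrustes_rotate(pca_loadings(X[idx], n_comp), target)
    lower, upper = np.percentile(boot, [2.5, 97.5], axis=0)
    return target, lower, upper
```

The Procrustes step aligns each bootstrap loading matrix with the sample solution, so that the resulting intervals are not inflated by the rotational and reflectional indeterminacy of PCA.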

11.
Factor analysis is used in item selection in the hopes of producing a small number of factors, each of which will represent a unidimensional subscale. If item analysis has been successful in producing truly independent subscales, it might be hoped that the number of factors would equal the number of subscales and that each factor would be highly defined by a single subscale. Factor analysis, when used in studies of organization, is not assumed to produce factors that represent unidimensional scales. Rather, factor analysis is used to reveal various substructures that exist within an organization. If several variables load on a single factor, the variables can be regarded as nodes of interaction between measured dimensions of organization.

12.
Principal component regression (PCR) is a popular technique in data analysis and machine learning. However, the technique has two limitations. First, the principal components (PCs) with the largest variances may not be relevant to the outcome variables. Second, the lack of standard error estimates for the unstandardized regression coefficients makes it hard to interpret the results. To address these two limitations, we propose a model-based approach that includes two mean and covariance structure models defined for multivariate PCR. By estimating the defined models, we can obtain inferential information that will allow us to test the explanatory power of individual PCs and compute the standard error estimates for the unstandardized regression coefficients. A real example is used to illustrate our approach, and simulation studies under normality and nonnormality conditions are presented to validate the standard error estimates for the unstandardized regression coefficients. Finally, future research topics are discussed.
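For context, a plain principal component regression fit, the starting point the paper works from, can be written in a few lines of Python; the model-based standard errors that are the paper's actual contribution are not reproduced here, and the function name is my own.

```python
import numpy as np

def pcr_fit(X, y, n_comp):
    """Principal component regression: PCA on X, OLS of y on the component scores,
    and coefficients mapped back to the original predictors."""
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xc = X - x_mean
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    V = Vt[:n_comp].T                       # p x k matrix of component weights
    T = Xc @ V                              # n x k component scores
    gamma, *_ = np.linalg.lstsq(T, y - y_mean, rcond=None)   # regression on scores
    beta = V @ gamma                        # back-transform to the original predictors
    intercept = y_mean - x_mean @ beta
    return beta, intercept
```

In this plain version the standard errors of `beta` are unavailable, which is exactly the gap the abstract describes; a bootstrap over (`beta`, `intercept`) would be a simple, if less efficient, stand-in for the paper's model-based approach.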

13.
Guttman's assumption underlying his definition of “total images” is rejected: partial images are not generally convergent everywhere. Even divergence everywhere is shown to be possible. The convergence type always found for partial images is convergence in quadratic mean; hence, total images are redefined as quadratic-mean limits. In determining the convergence type in special situations, the asymptotic properties of certain correlations are important, implying, in some cases, convergence almost everywhere, which is also brought about by a countable population, multivariate normality, or independent variables. The interpretations of a total image as a predictor and as a “common-factor score”, respectively, are made precise.
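As background, stated from the standard definitions of image theory rather than from this paper: the partial image of a variable is its least-squares regression on a finite set of the other observed variables, and the total image is the limiting predictor as the number of variables grows. In symbols (my notation), the abstract's point is that this limit must be taken in quadratic mean:

```latex
% Partial image of x_0 based on n other variables (least-squares projection),
% and the total image \tilde{x}_0 defined as the quadratic-mean limit.
\hat{x}_0^{(n)} = \sum_{i=1}^{n} \beta_i^{(n)} x_i ,
\qquad
\boldsymbol{\beta}^{(n)} = \operatorname*{arg\,min}_{\mathbf{b}}
  \, \mathbb{E}\bigl(x_0 - \mathbf{b}^{\top}\mathbf{x}^{(n)}\bigr)^{2},
\qquad
\mathbb{E}\bigl(\hat{x}_0^{(n)} - \tilde{x}_0\bigr)^{2} \longrightarrow 0
  \quad (n \to \infty),
```

so the total image is the quadratic-mean limit of the partial images rather than an everywhere-convergent (pointwise) limit.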

14.
Ab Mooijaart. Psychometrika, 1984, 49(1): 143–145.
FACTALS is a nonmetric common factor analysis model for multivariate data whose variables may be nominal, ordinal or interval. In FACTALS an Alternating Least Squares algorithm is utilized which is claimed to be monotonically convergent. In this paper it is shown that this algorithm is based upon an erroneous assumption, namely that the least squares loss function (which is in this case a non-scale-free loss function) can be transformed into a scale-free loss function. A consequence of this is that monotonic convergence of the algorithm cannot be guaranteed.

15.
The current paper proposes a solution that generalizes ideas of Brown and Forsythe to the problem of testing hypotheses in two-way classification designs with a heteroscedastic error structure. Unlike the standard analysis of variance, the proposed approach does not require the homogeneity assumption. A comprehensive simulation study, in which the sample sizes of the cells, the relationship between cell sizes and unequal variances, the degree of variance heterogeneity, and the population distribution shape were systematically manipulated, shows that the proposed approximation was generally robust when the normality and homogeneity assumptions were jointly violated.
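For orientation, the one-way statistic of Brown and Forsythe that the paper generalizes replaces the pooled error variance of the classical F test with a weighted combination of the group variances; the two-way extension is the paper's contribution and is not reproduced here, and the notation below is mine.

```latex
% One-way Brown-Forsythe statistic for equality of J group means under heteroscedasticity.
% n_j, \bar{y}_j, s_j^2 are the group sizes, means, and variances; N = \sum_j n_j.
F^{*} \;=\; \frac{\sum_{j=1}^{J} n_j\,(\bar{y}_j - \bar{y})^{2}}
                 {\sum_{j=1}^{J} \bigl(1 - n_j/N\bigr)\, s_j^{2}},
```

with the denominator degrees of freedom obtained from a Satterthwaite-type approximation rather than from the usual pooled-variance formula.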

16.
Formulas for the asymptotic biases of the parameter estimates in structural equation models are provided in the case of the Wishart maximum likelihood estimation for normally and nonnormally distributed variables. When multivariate normality is satisfied, considerable simplification is obtained for the models of unstandardized variables. Formulas for the models of standardized variables are also provided. Numerical examples with Monte Carlo simulations in factor analysis show the accuracy of the formulas and suggest the asymptotic robustness of the asymptotic biases with normality assumption against nonnormal data. Some relationships between the asymptotic biases and other asymptotic values are discussed. The author is indebted to the editor and anonymous reviewers for their comments, corrections, and suggestions on this paper, and to Yutaka Kano for discussion on biases.

17.
A simulation study investigated the effects of skewness and kurtosis on level-specific maximum likelihood (ML) test statistics based on normal theory in multilevel structural equation models. The levels of skewness and kurtosis at each level were manipulated in multilevel data, and the effects of skewness and kurtosis on level-specific ML test statistics were examined. When the assumption of multivariate normality was violated, the level-specific ML test statistics were inflated, resulting in Type I error rates that were higher than the nominal level for the correctly specified model. Q-Q plots of the test statistics against a theoretical chi-square distribution showed that skewness led to a thicker upper tail and kurtosis led to a longer upper tail of the observed distribution of the level-specific ML test statistic for the correctly specified model.

18.
The use of Candecomp to fit scalar products in the context of INDSCAL is based on the assumption that the symmetry of the data matrices involved causes the component matrices to be equal when Candecomp converges. Ten Berge and Kiers gave examples where this assumption is violated for Gramian data matrices. These examples are believed to be local minima. It is now shown that, in the single-component case, the assumption can only be violated at saddle points. Chances of Candecomp converging to a saddle point are small but still nonzero.

19.
How many hindsight biases are there?
Blank H, Nestler S, von Collani G, Fischer V. Cognition, 2008, 106(3): 1408–1440.
The answer is three: questioning a conceptual default assumption in hindsight bias research, we argue that the hindsight bias is not a unitary phenomenon but consists of three separable and partially independent subphenomena or components, namely, memory distortions, impressions of foreseeability and impressions of necessity. Following a detailed conceptual analysis including a systematic survey of hindsight characterizations in the published literature, we investigated these hindsight components in the context of political elections. We present evidence from three empirical studies that impressions of foreseeability and memory distortions (1) show hindsight effects that typically differ in magnitude and sometimes even in direction, (2) are essentially uncorrelated, and (3) are differentially influenced by extraneous variables. A fourth study found similar dissociations between memory distortions and impressions of necessity. All four studies thus provide support for a separate components view of the hindsight bias. An important consequence of such a view is that apparent contradictions in research findings as well as in theoretical explanations (e.g., cognitive vs. social-motivational) might be alleviated by taking differences between components into account. We also suggest conditions under which the components diverge or converge.

20.
Fei Gu, Hao Wu. Psychometrika, 2016, 81(3): 751–773.
The specifications of the state space model for some principal component-related models are described, including the independent-group common principal component (CPC) model, the dependent-group CPC model, and principal component-based multivariate analysis of variance. Some derivations are provided to show the equivalence of the state space approach and the existing Wishart-likelihood approach. For each model, a numeric example is used to illustrate the state space approach. In addition, a simulation study is conducted to evaluate the standard error estimates under the normality and nonnormality conditions. In order to cope with the nonnormality conditions, the robust standard errors are also computed. Finally, other possible applications of the state space approach are discussed.
