Similar articles
20 similar articles found (search time: 0 ms)
1.
A reliability coefficient for maximum likelihood factor analysis   (cited 54 times: 0 self-citations, 54 by others)
Maximum likelihood factor analysis provides an effective method for estimation of factor matrices and a useful test statistic in the likelihood ratio for rejection of overly simple factor models. A reliability coefficient is proposed to indicate quality of representation of interrelations among attributes in a battery by a maximum likelihood factor analysis. Usually, for a large sample of individuals or objects, the likelihood ratio statistic could indicate that an otherwise acceptable factor model does not exactly represent the interrelations among the attributes for a population. The reliability coefficient could indicate a very close representation in this case and be a better indication as to whether to accept or reject the factor solution. This research was supported by the Personnel and Training Research Programs Office of the Office of Naval Research under contract US NAVY/00014-67-A-0305-0003. Critical review of the development and suggestions by Richard Montanelli were most helpful.

2.
A Newton-Raphson algorithm for maximum likelihood factor analysis   (cited 1 time: 0 self-citations, 1 by others)
This paper demonstrates the feasibility of using a Newton-Raphson algorithm to solve the likelihood equations which arise in maximum likelihood factor analysis. The algorithm leads to clean, easily identifiable convergence and provides a means of verifying that the solution obtained is at least a local maximum of the likelihood function. It is shown that a popular iteration algorithm is numerically unstable under conditions which are encountered in practice and that, as a result, inaccurate solutions have been presented in the literature. The key result is a computationally feasible formula for the second differential of a partially maximized form of the likelihood function. In addition to implementing the Newton-Raphson algorithm, this formula provides a means for estimating the asymptotic variances and covariances of the maximum likelihood estimators. This research was supported by the Air Force Office of Scientific Research, Grant No. AF-AFOSR-4.59-66 and by National Institutes of Health, Grant No. FR-3.
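The Newton-Raphson idea in the abstract above can be sketched on a much simpler likelihood. The following toy example (a Poisson rate, not the factor-analysis likelihood equations of the paper) shows the two ingredients the authors highlight: a score/Hessian update, and an asymptotic standard error read off the second derivative at the maximum.

```python
import math

def newton_raphson_mle(data, lam0=1.0, tol=1e-10, max_iter=100):
    """Newton-Raphson for the MLE of a Poisson rate (a toy stand-in for
    the factor-analysis likelihood equations; all names are illustrative)."""
    n, s = len(data), sum(data)
    lam = lam0
    for _ in range(max_iter):
        score = s / lam - n          # first derivative of the log-likelihood
        hess = -s / lam ** 2         # second derivative
        step = score / hess
        lam -= step                  # Newton update
        if abs(step) < tol:
            break
    se = math.sqrt(-1.0 / hess)      # asymptotic SE from the Hessian at the maximum
    return lam, se

data = [2, 3, 1, 4, 0, 2, 3, 5]
lam_hat, se = newton_raphson_mle(data)
# The closed-form Poisson MLE is the sample mean, so convergence is easy to verify
assert abs(lam_hat - sum(data) / len(data)) < 1e-8
```

For this likelihood the update converges quadratically from any positive start; the quality of the final Hessian is what makes the standard-error estimate usable, which mirrors the paper's point about the second differential.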

3.
A maximum likelihood approach is described for estimating the validity of a test (x) as a predictor of a criterion variable (y) when there are both missing and censored y scores present in the data set. The missing data are due to selection on a latent variable (y_s) which may be conditionally related to y given x. Thus, the missing data may not be missing at random. The censoring process is due to the presence of a floor or ceiling effect. The maximum likelihood estimates are constructed using the EM algorithm. The entire analysis is demonstrated in terms of hypothetical data sets.
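The EM construction described above can be illustrated on a stripped-down censoring problem. This is a hedged sketch, not the authors' validity model: a normal mean with known unit variance, where some observations are only known to fall below a floor c (all data values are made up).

```python
import math

def phi(z):
    """Standard normal density."""
    return math.exp(-z * z / 2) / math.sqrt(2 * math.pi)

def Phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def em_censored_mean(observed, n_cens, c, mu0=0.0, iters=200):
    """EM for a normal mean (sigma = 1) when n_cens observations are
    only known to lie below the floor c."""
    n = len(observed) + n_cens
    mu = mu0
    for _ in range(iters):
        a = c - mu
        # E-step: conditional mean of a censored observation given x <= c
        e_cens = mu - phi(a) / Phi(a)
        # M-step: mean of the completed data
        mu = (sum(observed) + n_cens * e_cens) / n
    return mu

observed = [0.3, 1.2, 0.8, 1.9, 0.5, 1.1]
mu_hat = em_censored_mean(observed, n_cens=3, c=0.0)

# At the MLE the observed-data score should vanish
a = 0.0 - mu_hat
score = sum(x - mu_hat for x in observed) - 3 * phi(a) / Phi(a)
assert abs(score) < 1e-6
```

The fixed point of the E/M cycle is exactly the zero of the observed-data score, which is the usual way to check an EM implementation.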

4.
A simulation study investigated the effects of skewness and kurtosis on level-specific maximum likelihood (ML) test statistics based on normal theory in multilevel structural equation models. The levels of skewness and kurtosis at each level were manipulated in multilevel data, and the effects of skewness and kurtosis on level-specific ML test statistics were examined. When the assumption of multivariate normality was violated, the level-specific ML test statistics were inflated, resulting in Type I error rates that were higher than the nominal level for the correctly specified model. Q-Q plots of the test statistics against a theoretical chi-square distribution showed that skewness led to a thicker upper tail and kurtosis led to a longer upper tail of the observed distribution of the level-specific ML test statistic for the correctly specified model.

5.
Tutorial on maximum likelihood estimation   (cited 2 times: 0 self-citations, 2 by others)
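As a minimal tutorial-style sketch of the topic named above: maximum likelihood estimation of a Bernoulli success probability, maximized crudely over a grid and checked against the closed-form answer (the sample proportion). The data are invented for illustration.

```python
import math

def bernoulli_loglik(p, data):
    """Log-likelihood of i.i.d. Bernoulli observations."""
    k, n = sum(data), len(data)
    return k * math.log(p) + (n - k) * math.log(1 - p)

data = [1, 0, 1, 1, 0, 1, 1, 0, 1, 1]   # 7 successes in 10 trials

# Crude numerical maximization by grid search, tutorial-style
grid = [i / 1000 for i in range(1, 1000)]
p_hat = max(grid, key=lambda p: bernoulli_loglik(p, data))

# The analytic MLE is the sample proportion k/n = 0.7
assert abs(p_hat - 0.7) < 1e-9
```

In practice one would use an optimizer rather than a grid, but the grid makes the defining property of the MLE (the parameter value with the highest log-likelihood) directly visible.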

6.
Under certain circumstances, it is theoretically important to decide whether a difference between two conditions in mean reaction time (RT) results from a relatively uniform slowing of all the responses in the slower condition or from a mixture of some slowed trials with some unslowed ones. This article describes a likelihood ratio test that can be used to differentiate between these two possibilities and reports computer simulations examining the power and Type I error rate of the test under conditions similar to those encountered in RT research. A freely available computer program, called MIXTEST, can be used both to carry out the likelihood ratio test and to conduct simulations evaluating the performance of the test within various settings.
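The likelihood ratio machinery MIXTEST relies on can be illustrated on a far simpler nested comparison (a binomial proportion rather than RT mixture models; the data values are made up): twice the log-likelihood gap between the restricted and unrestricted fits is referred to a chi-square distribution.

```python
import math

def loglik_binom(p, k, n):
    """Binomial log-likelihood (kernel) for k successes in n trials."""
    return k * math.log(p) + (n - k) * math.log(1 - p)

# Toy likelihood ratio test: H0: p = 0.5 versus an unrestricted p
k, n = 70, 100
p_hat = k / n
lr_stat = 2 * (loglik_binom(p_hat, k, n) - loglik_binom(0.5, k, n))

# Under H0 the statistic is asymptotically chi-square with 1 df;
# 3.841 is the 95th percentile of chi-square(1)
reject = lr_stat > 3.841
assert reject
```

The RT application in the paper differs in that the mixture alternative makes the null lie on the boundary of the parameter space, which is exactly why the authors validate the test's Type I error rate by simulation rather than leaning on the chi-square approximation alone.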

7.
After introducing some extensions of a recently proposed probabilistic vector model for representing paired comparisons choice data, an iterative procedure for obtaining maximum likelihood estimates of the model parameters is developed. The possibility of testing various hypotheses by means of likelihood ratio tests is discussed. Finally, the algorithm is applied to some existing data sets for illustrative purposes.

8.
Moderation analysis is useful for addressing interesting research questions in social sciences and behavioural research. In practice, moderated multiple regression (MMR) models have been most widely used. However, missing data pose a challenge, mainly because the interaction term is a product of two or more variables and thus is a non-linear function of the involved variables. Normal-distribution-based maximum likelihood (NML) has been proposed and applied for estimating MMR models with incomplete data. When data are missing completely at random, moderation effect estimates are consistent. However, simulation results have found that when data in the predictor are missing at random (MAR), NML can yield inaccurate estimates of moderation effects when the moderation effects are non-null. Simulation studies are subject to the limitation of confounding systematic bias with sampling errors. Thus, the purpose of this paper is to analytically derive asymptotic bias of NML estimates of moderation effects with MAR data. Results show that when the moderation effect is zero, there is no asymptotic bias in moderation effect estimates with either normal or non-normal data. When the moderation effect is non-zero, however, asymptotic bias may exist and is determined by factors such as the moderation effect size, missing-data proportion, and type of missingness dependence. Our analytical results suggest that researchers should apply NML to MMR models with caution when missing data exist. Suggestions are given regarding moderation analysis with missing data.

9.
Evidence is given to indicate that Lawley's formulas for the standard errors of maximum likelihood loading estimates do not produce exact asymptotic results. A small modification is derived which appears to eliminate this difficulty. The authors are indebted to Walter Kristof and Thomas Stroud for their helpful reviews of an earlier version of this paper and particularly to D. N. Lawley for his review, comments, and encouragement.

10.
11.
The absence of operational disaggregate lexicographic decision models and Tversky's observation that choice behavior is often inconsistent, hierarchical, and context dependent motivate the development of a maximum likelihood hierarchical (MLH) choice model. This new disaggregate choice model requires few assumptions and accommodates the three aspects of choice behavior noted by A. Tversky (1972, Journal of Mathematical Psychology, 9, 341–367). The model has its foundation in a prototype model developed by the authors. Unlike the deterministic prototype, however, MLH is a probabilistic model which generates maximum likelihood estimators of the aggregate “cutoff values.” The model is formulated as a concave programming problem whose solutions are therefore globally optimal. Finally, the model is applied to data from three separate studies, where it is demonstrated to outperform the prototype model in predictive accuracy.

12.
Using the theory of pseudo maximum likelihood estimation, the asymptotic covariance matrix of maximum likelihood estimates for mean and covariance structure models is given for the case where the variables are not multivariate normal. This asymptotic covariance matrix is consistently estimated without the computation of the empirical fourth order moment matrix. Using quasi-maximum likelihood theory, a Hausman misspecification test is developed. This test is sensitive to misspecification caused by errors that are correlated with the independent variables. This misspecification cannot be detected by the test statistics currently used in covariance structure analysis. For helpful comments on a previous draft of the paper we are indebted to Kenneth A. Bollen, Ulrich L. Küsters, Michael E. Sobel and the anonymous reviewers of Psychometrika. For partial research support, the first author wishes to thank the Department of Sociology at the University of Arizona, where he was a visiting professor during the fall semester 1987.

13.
A general approach to confirmatory maximum likelihood factor analysis   (cited 17 times: 0 self-citations, 17 by others)
We describe a general procedure by which any number of parameters of the factor analytic model can be held fixed at any values and the remaining free parameters estimated by the maximum likelihood method. The generality of the approach makes it possible to deal with all kinds of solutions: orthogonal, oblique and various mixtures of these. By choosing the fixed parameters appropriately, factors can be defined to have desired properties and make subsequent rotation unnecessary. The goodness of fit of the maximum likelihood solution under the hypothesis represented by the fixed parameters is tested by a large-sample χ² test based on the likelihood ratio technique. A by-product of the procedure is an estimate of the variance-covariance matrix of the estimated parameters. From this, approximate confidence intervals for the parameters can be obtained. Several examples illustrating the usefulness of the procedure are given. This work was supported by a grant (NSF-GB 1985) from the National Science Foundation to Educational Testing Service.

14.
Multidimensional successive categories scaling: A maximum likelihood method   (cited 1 time: 0 self-citations, 1 by others)
A single-step maximum likelihood estimation procedure is developed for multidimensional scaling of dissimilarity data measured on rating scales. The procedure can fit the Euclidean distance model to the data under various assumptions about category widths and under two distributional assumptions. The scoring algorithm for parameter estimation has been developed and implemented in the form of a computer program. Practical uses of the method are demonstrated with an emphasis on various advantages of the method as a statistical procedure. The research reported here was partly supported by Grant A6394 to the author by the Natural Sciences and Engineering Research Council of Canada. Portions of this research were presented at the Psychometric Society meeting in Uppsala, Sweden, in June, 1978. MAXSCAL-2.1, a program to perform the computations discussed in this paper, may be obtained from the author. Thanks are due to Jim Ramsay for his helpful comments.

15.
Luce introduced a family of learning models in which response probabilities are a function of some underlying continuous real variable. This variable can be represented as an additive function of the parameters of these learning models. Additive learning models have also been applied to signal-detection data. There are a wide variety of problems of contemporary psychophysics for which the assumption of a continuum of sensory states seems appropriate, and this family of learning models has a natural extension to such problems. One potential difficulty in the application of such models to data is that estimation of parameters requires the use of numerical procedures when the method of maximum likelihood is used. Given a likelihood function generated from an additive model, this paper gives sufficient conditions for log-concavity and strict log-concavity of the likelihood function. If a likelihood function is strictly log-concave, then any local maximum is a unique global maximum, and any solution to the likelihood equations is the unique global maximum point. These conditions are quite easy to evaluate in particular cases, and hence, the results should be quite useful. Some applications to Luce's beta model and to the signal-detection learning models of Dorfman and Biderman are presented.
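The uniqueness argument above can be checked numerically on a toy case. This sketch uses a logistic (Bernoulli) log-likelihood in the logit parameter, not Luce's beta model: strict concavity of l(θ) implies the stationary point is the unique global maximum.

```python
import math

def loglik(theta, k=7, n=10):
    """Bernoulli log-likelihood in the logit parameter:
    l(theta) = k*theta - n*log(1 + e^theta)."""
    return k * theta - n * math.log(1 + math.exp(theta))

h = 1e-4
# Strict log-concavity: the second difference is negative everywhere sampled
for t in [-3 + 0.5 * i for i in range(13)]:
    second = (loglik(t + h) - 2 * loglik(t) + loglik(t - h)) / h ** 2
    assert second < 0

# The stationary point logit(k/n) is therefore the unique global maximum
theta_hat = math.log(0.7 / 0.3)
grad = (loglik(theta_hat + h) - loglik(theta_hat - h)) / (2 * h)
assert abs(grad) < 1e-6
```

This is exactly the practical payoff the abstract describes: once strict log-concavity is established, any root of the likelihood equations found numerically can be trusted as the global MLE.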

16.
In a series of experiments, subjects judged the likelihood of events set either in the past or the future. No consistent differences were found in either the central tendency or the dispersion of subjects' likelihood judgments regarding past and future events which differed solely in their temporal setting. Temporal setting was, however, found to affect the production of possible event outcomes. These results contradict a “sure past” hypothesis, advanced by a number of observers, according to which judges are more confident in dealing with past than future events. They also eliminate a possible source of methodological difficulty in the interpretation of existing judgmental studies and provide some insight into the use of judgmental heuristics. Belief in the “sure past” hypothesis is discussed as the result of confusion between temporal setting and its ecological correlates.

17.
Two algorithms are described for marginal maximum likelihood estimation for the one-parameter logistic model. The more efficient of the two algorithms is extended to estimation for the linear logistic model. Numerical examples of both procedures are presented. Portions of this research were presented at the meeting of the Psychometric Society in Chapel Hill, N.C. in May, 1981. Thanks to R. Darrell Bock, Gerhard Fischer, and Paul Holland for helpful comments in the course of this research.

18.
The standard tobit or censored regression model is typically utilized for regression analysis when the dependent variable is censored. This model is generalized by developing a conditional mixture, maximum likelihood method for latent class censored regression. The proposed method simultaneously estimates separate regression functions and subject membership in K latent classes or groups given a censored dependent variable for a cross-section of subjects. Maximum likelihood estimates are obtained using an EM algorithm. The proposed method is illustrated via a consumer psychology application.

19.
A jackknife-like procedure is developed for producing standard errors of estimate in maximum likelihood factor analysis. Unlike earlier methods based on information theory, the procedure developed is computationally feasible on larger problems. Unlike earlier methods based on the jackknife, the present procedure is not plagued by the factor alignment problem, the Heywood case problem, or the necessity to jackknife by groups. Standard errors may be produced for rotated and unrotated loading estimates using either orthogonal or oblique rotation as well as for estimates of unique factor variances and common factor correlations. The total cost for larger problems is a small multiple of the square of the number of variables times the number of observations used in the analysis. Examples are given to demonstrate the feasibility of the method. The research done by R. I. Jennrich was supported in part by NSF Grant MCS 77-02121. The research done by D. B. Clarkson was supported in part by NSERC Grant A3109.
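The delete-one jackknife underlying the procedure can be sketched for a generic estimator. This is a generic illustration, not the grouped or factor-analytic version in the paper; for the sample mean the jackknife standard error reduces exactly to s/√n, which makes the sketch easy to check.

```python
import math

def jackknife_se(data, estimator):
    """Delete-one jackknife standard error of an arbitrary estimator."""
    n = len(data)
    loo = [estimator(data[:i] + data[i + 1:]) for i in range(n)]   # leave-one-out estimates
    loo_mean = sum(loo) / n
    var = (n - 1) / n * sum((v - loo_mean) ** 2 for v in loo)      # jackknife variance formula
    return math.sqrt(var)

data = [4.0, 7.0, 1.0, 9.0, 5.0, 3.0]
mean = lambda xs: sum(xs) / len(xs)
se_jack = jackknife_se(data, mean)

# For the sample mean, the jackknife SE equals the usual s / sqrt(n)
n = len(data)
m = mean(data)
s2 = sum((x - m) ** 2 for x in data) / (n - 1)
assert abs(se_jack - math.sqrt(s2 / n)) < 1e-9
```

The cost structure the abstract mentions follows from the same pattern: one re-estimation per deleted observation, so total work scales with the number of observations times the cost of a single fit.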

20.
In this paper algorithms are described for obtaining the maximum likelihood estimates of the parameters in loglinear models. Modified versions of the iterative proportional fitting and Newton-Raphson algorithms are described that work on the minimal sufficient statistics rather than on the usual counts in the full contingency table. This is desirable if the contingency table becomes too large to store. Special attention is given to loglinear IRT models that are used for the analysis of educational and psychological test data. To calculate the necessary expected sufficient statistics and other marginal sums of the table, a method is described that avoids summing large numbers of elementary cell frequencies by writing them out in terms of multiplicative model parameters and applying the distributive law of multiplication over summation. These algorithms are used in the computer program LOGIMO. The modified algorithms are illustrated with simulated data. The author thanks Wim J. van der Linden, Gideon J. Mellenbergh and Namburi S. Raju for their valuable comments and suggestions.
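The iterative proportional fitting step can be sketched on the smallest possible case, a 2×2 table under the independence loglinear model (a toy illustration, not the LOGIMO implementation): rows and columns are rescaled in turn until both observed margins are matched.

```python
# Iterative proportional fitting (IPF) for the independence loglinear model
# on a 2x2 table of (invented) counts: alternately rescale rows and columns
# so the fitted table reproduces the observed margins.
obs = [[30.0, 10.0], [20.0, 40.0]]
row_m = [sum(r) for r in obs]                                  # observed row margins
col_m = [sum(obs[i][j] for i in range(2)) for j in range(2)]   # observed column margins
n = sum(row_m)

fit = [[n / 4.0] * 2 for _ in range(2)]    # start from a uniform table
for _ in range(50):
    for i in range(2):                     # match row margins
        s = sum(fit[i])
        fit[i] = [v * row_m[i] / s for v in fit[i]]
    for j in range(2):                     # match column margins
        s = fit[0][j] + fit[1][j]
        for i in range(2):
            fit[i][j] *= col_m[j] / s

# Under independence the ML fitted counts are row_total * col_total / n
for i in range(2):
    for j in range(2):
        assert abs(fit[i][j] - row_m[i] * col_m[j] / n) < 1e-8
```

The margins here play the role of the minimal sufficient statistics; the paper's contribution is doing this rescaling directly on those statistics when the full table is too large to hold in memory.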


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号