20 similar documents found; search time: 15 ms
1.
Yoshio Takane, Psychometrika, 1987, 52(4): 493–513
Cross-classified data are frequently encountered in behavioral and social science research. The loglinear model and dual scaling (correspondence analysis) are two representative methods of analyzing such data. An alternative method, based on ideal point discriminant analysis (DA), is proposed for the analysis of contingency tables, which in a certain sense encompasses the two existing methods. A variety of interesting structures can be imposed on the rows and columns of the tables through manipulations of predictor variables and/or direct constraints on model parameters. This, along with maximum likelihood estimation of the model parameters, allows interesting model comparisons. This is illustrated by the analysis of several data sets.

Presented as the Presidential Address to the Psychometric Society's Annual and European Meetings, June 1987. Preparation of this paper was supported by Grant A6394 from the Natural Sciences and Engineering Research Council of Canada. Thanks are due to Chikio Hayashi of the University of the Air in Japan for providing the ISM data, and to Jim Ramsay and Ivo Molenaar for their helpful comments on an earlier draft of this paper.
2.
Correspondence analysis of incomplete contingency tables
Correspondence analysis can be described as a technique which decomposes the departure from independence in a two-way contingency table. In this paper a form of correspondence analysis is proposed which decomposes the departure from the quasi-independence model. This form seems to be a good alternative to ordinary correspondence analysis in cases where the use of the latter is either impossible or not recommended, for example, in the case of missing data or structural zeros. It is shown that Nora's reconstitution of order zero, a procedure well known in the French literature, is formally identical to our correspondence analysis of incomplete tables. Therefore, reconstitution of order zero can also be interpreted as providing a decomposition of the residuals from the quasi-independence model. Furthermore, correspondence analysis of incomplete tables can be performed using existing programs for ordinary correspondence analysis.
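For reference, ordinary correspondence analysis (the baseline that this paper generalizes) can be sketched in a few lines: it is an SVD of the matrix of standardized residuals from the independence model. The sketch below is a generic textbook construction, not the authors' quasi-independence variant, and the table `N` is made-up illustration data.

```python
import numpy as np

def correspondence_analysis(N):
    """Ordinary correspondence analysis of a two-way contingency table N:
    an SVD of the standardized residuals from the independence model."""
    P = N / N.sum()                              # correspondence matrix
    r, c = P.sum(axis=1), P.sum(axis=0)          # row and column masses
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
    U, sv, Vt = np.linalg.svd(S, full_matrices=False)
    F = (U * sv) / np.sqrt(r)[:, None]           # row principal coordinates
    G = (Vt.T * sv) / np.sqrt(c)[:, None]        # column principal coordinates
    return F, G, sv

# demo: a small made-up 3 x 3 table
N = np.array([[20.0, 10.0, 5.0],
              [10.0, 15.0, 10.0],
              [5.0, 10.0, 20.0]])
F, G, sv = correspondence_analysis(N)
chi2_over_n = (sv ** 2).sum()                    # total inertia = chi-square / n
```

The sum of squared singular values recovers the total inertia, i.e., the Pearson chi-square statistic divided by the grand total, which is the "departure from independence" being decomposed.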
3.
Wayne S. DeSarbo, Psychometrika, 1981, 46(3): 307–329
The interrelationships between two sets of measurements made on the same subjects can be studied by canonical correlation. Originally developed by Hotelling [1936], the canonical correlation is the maximum correlation between linear functions (canonical factors) of the two sets of variables. An alternative statistic for investigating the interrelationships between two sets of variables is the redundancy measure, developed by Stewart and Love [1968]. Van Den Wollenberg [1977] has developed a method of extracting factors which maximize redundancy, as opposed to canonical correlation.

A component method is presented which maximizes user-specified convex combinations of canonical correlation and the two nonsymmetric redundancy measures presented by Stewart and Love. Monte Carlo work comparing canonical correlation analysis, redundancy analysis, and various canonical/redundancy factoring analyses on the Van Den Wollenberg data is presented. An empirical example is also provided.

Wayne S. DeSarbo is a Member of Technical Staff at Bell Laboratories in the Mathematics and Statistics Research Group at Murray Hill, N.J. I wish to express my appreciation to J. Kettenring, J. Kruskal, C. Mallows, and R. Gnanadesikan for their valuable technical assistance and/or comments on an earlier draft of this paper. I also wish to thank the editor and reviewers of this paper for their insightful remarks.
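The canonical correlations mentioned above can be computed as the singular values of the whitened cross-covariance matrix. The following is a standard textbook sketch, not DeSarbo's combined canonical/redundancy criterion; the data are simulated so that Y is an exact linear function of X, in which case every canonical correlation equals 1.

```python
import numpy as np

def canonical_correlations(X, Y):
    """Canonical correlations between the columns of X and Y, computed as
    singular values of the whitened cross-covariance matrix."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Sxx, Syy, Sxy = X.T @ X / n, Y.T @ Y / n, X.T @ Y / n
    # whiten each block with the inverse Cholesky factor of its covariance
    Kx = np.linalg.inv(np.linalg.cholesky(Sxx))
    Ky = np.linalg.inv(np.linalg.cholesky(Syy))
    return np.linalg.svd(Kx @ Sxy @ Ky.T, compute_uv=False)

# demo: Y an exact linear function of X, so all canonical correlations are 1
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
Y = X @ rng.normal(size=(3, 2))
rho = canonical_correlations(X, Y)
```

Redundancy maximization, by contrast, weights the canonical structure by how much variance in one set the linear composites of the other set actually explain, which is why the two criteria can select different components.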
4.
For data in which the same variables are measured on different occasions, two procedures for canonical analysis with stationary compositing weights are developed. The first, SUMCOV, maximizes the sum of the covariances of the canonical variates subject to norming constraints. The second, COLLIN, maximizes the largest root of the covariances of the canonical variates subject to norming constraints. A characterization theorem establishes a model-building approach. Both methods are extended to allow for cohort sequential designs. Finally, a numerical illustration utilizing Nesselroade and Baltes' data is presented.

The authors wish to thank John Nesselroade for permitting us to use the data whose analysis we present.
5.
Vartan Choulakian, Psychometrika, 1988, 53(2): 235–250
Goodman's (1979, 1981, 1985) loglinear formulation for two-way contingency tables is extended to tables with or without missing cells and is used for exploratory purposes. A similar formulation is given for three-way tables, and generalizations of correspondence analysis are deduced. A generalized version of Goodman's algorithm, based on Newton's elementary unidimensional method, is used to estimate the scores in all cases.

This research was partially supported by the Natural Sciences and Engineering Research Council of Canada, Grant No. A8724. The author is grateful to the reviewers and the editor for helpful comments.
6.
A comprehensive approach for imposing both row and column constraints on multivariate discrete data is proposed, which may be called generalized constrained multiple correspondence analysis (GCMCA). In this method each set of discrete data is first decomposed into several submatrices according to its row and column constraints, and multiple correspondence analysis (MCA) is then applied to the decomposed submatrices to explore relationships among them. This method subsumes existing constrained and unconstrained MCA methods as special cases and also generalizes various kinds of linearly constrained correspondence analysis methods. An example is given to illustrate the proposed method.

Heungsun Hwang is now at Claes Fornell International Group. The work reported in this paper was supported by Grant A6394 from the Natural Sciences and Engineering Research Council of Canada to the second author.
7.
EM algorithms for ML factor analysis
The details of EM algorithms for maximum likelihood factor analysis are presented for both the exploratory and confirmatory models. The algorithm is essentially the same for both cases and involves only simple least squares regression operations; the largest matrix inversion required is for a q × q symmetric matrix, where q is the number of factors. The example that is used demonstrates that the likelihood for the factor analysis model may have multiple modes that are not simply rotations of each other; such behavior should concern users of maximum likelihood factor analysis and certainly should cast doubt on the general utility of second derivatives of the log likelihood as measures of precision of estimation.
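A minimal sketch of such an EM iteration for the exploratory model is given below, assuming a sample covariance matrix `S` and q factors. The updates are the standard ones for the factor model x = Λz + e with z ~ N(0, I) and diagonal unique variances; it is an illustration consistent with the description above, not the paper's code. Note that only a q × q system is solved per iteration.

```python
import numpy as np

def em_factor_analysis(S, q, n_iter=1000, tol=1e-10):
    """EM for exploratory ML factor analysis on a sample covariance S.

    Model: Sigma = L L' + diag(psi), with L the p x q loading matrix
    and psi the unique variances."""
    # start from the leading principal components (a common choice)
    w, V = np.linalg.eigh(S)
    L = V[:, -q:] * np.sqrt(w[-q:])
    psi = np.clip(np.diag(S - L @ L.T), 1e-6, None)
    for _ in range(n_iter):
        Sigma = L @ L.T + np.diag(psi)
        beta = np.linalg.solve(Sigma, L).T          # q x p regression weights
        Cxz = S @ beta.T                            # E-step cross-product
        Czz = np.eye(q) - beta @ L + beta @ Cxz     # E-step second moment
        L_new = np.linalg.solve(Czz, Cxz.T).T       # M-step: Cxz @ inv(Czz)
        psi_new = np.clip(np.diag(S - L_new @ Cxz.T), 1e-6, None)
        delta = np.max(np.abs(L_new - L))
        L, psi = L_new, psi_new
        if delta < tol:
            break
    return L, psi

# demo on a covariance matrix that satisfies a one-factor model exactly
lam = np.array([0.9, 0.8, 0.7, 0.6, 0.5, 0.4])
S_true = np.outer(lam, lam) + np.diag(1 - lam ** 2)
L_hat, psi_hat = em_factor_analysis(S_true, q=1)
```

Because the demo covariance fits the model perfectly, the fitted Σ = LL′ + diag(ψ) should reproduce it to high accuracy at convergence.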
8.
Carolyn J. Anderson, Psychometrika, 1996, 61(3): 465–483
The RC(M) association model (Goodman, 1979, 1985, 1986, 1991) is useful for analyzing the relationship between the variables of a 2-way cross-classification. The models presented here are generalizations of the RC(M) association model for 3-way tables. The family of models proposed here, 3-mode association models, uses Tucker's 3-mode components model (Tucker, 1964, 1966; Kroonenberg, 1983) to represent either the three-factor interaction or the combined effects of the two- and three-factor interactions. An example from a study in developmental psychology (Kramer & Gottman, 1992) is provided to illustrate the usefulness of the proposed models.

I thank Stanley Wasserman, Laurie Kramer, Ulf Böckenholt, Lawrence Hubert, Jeffrey Tanaka, and five anonymous reviewers for valuable comments.
9.
Data are ipsative if they are subject to a constant-sum constraint for each individual. In the present study, ordinal ipsative data (OID) are defined as the ordinal rankings across a vector of variables. It is assumed that OID are the manifestations of an underlying nonipsative vector y, which is difficult to observe directly. A two-stage estimation procedure is suggested for the analysis of structural equation models with OID. In the first stage, the partition maximum likelihood (PML) method and the generalized least squares (GLS) method are proposed for estimating the means and the covariance matrix of A_c y, where A_c is a known contrast matrix. Based on the joint asymptotic distribution of the first-stage estimator and an appropriate weight matrix, the generalized least squares method is used to estimate the structural parameters in the second stage. A goodness-of-fit statistic is given for testing the hypothesized covariance structure. Simulation results show that the proposed method works properly when a sufficiently large sample is available.

This research was supported by National Institute on Drug Abuse Grants DA01070 and DA10017. The authors are indebted to Dr. Lee Cooper, Dr. Eric Holman, and Dr. Thomas Wickens for their valuable suggestions on this study, and to Dr. Fanny Cheung for allowing us to use her CPAI data set in this article. The authors would also like to acknowledge the helpful comments from the editor and the two anonymous reviewers.
10.
Multiple-set canonical correlation analysis (Generalized CANO, or GCANO for short) is an important technique because it subsumes a number of interesting multivariate data analysis techniques as special cases. More recently, it has also been recognized as an important technique for integrating information from multiple sources. In this paper, we present a simple regularization technique for GCANO and demonstrate its usefulness. Regularization is deemed important as a way of supplementing insufficient data by prior knowledge, and/or of incorporating certain desirable properties in the estimates of parameters in the model. Implications of regularized GCANO for multiple correspondence analysis are also discussed. Examples are given to illustrate the use of the proposed technique.

The work reported in this paper is supported by Grants 10630 and 290439 from the Natural Sciences and Engineering Research Council of Canada to the first and the second authors, respectively. The authors would like to thank the two editors (old and new), the associate editor, and four anonymous reviewers for their insightful comments on earlier versions of this paper. Matlab programs that carried out the computations reported in the paper are available upon request.
11.
Gerhard Derflinger, Psychometrika, 1984, 49(3): 325–330
Most factor solutions can be obtained by minimizing a corresponding loss function. However, up to now, a loss function for alpha factor analysis (AFA) has not been known. The present paper establishes such a loss function for AFA. Some analogies to maximum likelihood factor analysis are discussed.

The author is greatly indebted to Prof. Henry F. Kaiser (University of California, Berkeley) for his kind encouragement. He is also indebted to an anonymous referee of Psychometrika for having confronted him with the problem in 1977. Financial support by the Wiener Hochschuljubiläumsstiftung is gratefully acknowledged.
12.
Michael W. Browne, Psychometrika, 1988, 53(4): 585–589
Algebraic properties of the normal theory maximum likelihood solution in factor analysis regression are investigated. Two commonly employed measures of the within-sample predictive accuracy of the factor analysis regression function are considered: the variance of the regression residuals and the squared correlation coefficient between the criterion variable and the regression function. It is shown that this within-sample residual variance and within-sample squared correlation may be obtained directly from the factor loading and unique variance estimates, without use of the original observations or the sample covariance matrix.
13.
This paper shows essential equivalences among several methods of linearly constrained correspondence analysis. They include Fisher's method of additive scoring, Hayashi's second type of quantification method, ter Braak's canonical correspondence analysis, Nishisato's ANOVA of categorical data, correspondence analysis of manipulated contingency tables, Böckenholt and Böckenholt's least squares canonical analysis with linear constraints, and van der Heijden and Meijerink's zero average restrictions. These methods fall into one of two classes corresponding to two alternative ways of imposing linear constraints: the reparametrization method and the null space method. A connection between the two is established through Khatri's lemma.

The work reported in this paper has been supported by Grant A6394 from the Natural Sciences and Engineering Research Council of Canada to the first author. We wish to thank Carolyn Anderson, Ulf Böckenholt, Henk Kiers, Shizuhiko Nishisato, Jim Ramsay, Tadashi Shibayama, Cajo ter Braak, and Peter van der Heijden for their helpful comments on earlier drafts of this paper.
14.
Classical factor analysis assumes a random sample of vectors of observations. For clustered vectors of observations, such as data for students from colleges, or individuals within households, it may be necessary to consider different within-group and between-group factor structures. Such a two-level model for factor analysis is defined, and formulas for a scoring algorithm for estimation with this model are derived. A simple noniterative method based on a decomposition of the total sums of squares and crossproducts is discussed. This method provides a suitable starting solution for the iterative algorithm, but it is also a very good approximation to the maximum likelihood solution. Extensions for higher levels of nesting are indicated. With judicious application of quasi-Newton methods, the amount of computation involved in the scoring algorithm is moderate even for complex problems; in particular, no inversion of matrices with large dimensions is involved. The methods are illustrated on two examples.

Suggestions and corrections from three anonymous referees and an Associate Editor are acknowledged. Discussions with Bob Jennrich on computational aspects were very helpful. Most of the research leading to this paper was carried out while the first author was a visiting associate professor at the University of California, Los Angeles.
15.
For multiple populations, a longitudinal factor analytic model which is entirely exploratory (that is, with no explicit identification constraints) is proposed. Factorial collapse and period/practice effects are allowed. An invariant and/or stationary factor pattern is permitted. This model is formulated stochastically. To implement this model, a stagewise EM algorithm is developed. Finally, a numerical illustration utilizing Nesselroade and Baltes' data is presented.

The authors wish to thank Barbara Mellers and Henry Kaiser for their helpful comments and John Nesselroade for providing us the data for our illustration. This research was supported in part by a grant (No. AG03164) from the National Institute on Aging to William Meredith. Details of the derivations and a copy of the PROC MATRIX program are available upon request from the first author.
16.
Agostinho Almeida, Studia Logica, 2009, 91(2): 171–199
This work is part of a wider investigation into lattice-structured algebras and associated dual representations obtained via the methodology of canonical extensions. To this end, here we study lattices, not necessarily distributive, with negation operations.

We consider equational classes of lattices equipped with a negation operation ¬ which is dually self-adjoint (the pair (¬, ¬) is a Galois connection), and further axioms are added so as to give classes of lattices in which the negation is a De Morgan negation, orthonegation, antilogism, pseudocomplementation, or weak pseudocomplementation. These classes are shown to be canonical, and dual relational structures are given in a generalized Kripke style. The fact that the negation is dually self-adjoint plays an important role here, as it implies that the negation sends arbitrary joins to meets, and that allows us to define the dual structures in a uniform way.

Among these classes, all but one (that of lattices with a negation which is an antilogism) were previously studied by W. Dzik, E. Orłowska, and C. van Alten using Urquhart duality. In some cases in which a given axiom does not imply that the negation is dually self-adjoint, canonicity is proven with the weaker assumption of antitonicity of the negation.
17.
Leo A. Goodman, Psychometrika, 1979, 44(1): 123–128
In this note, we describe the iterative procedure introduced earlier by Goodman to calculate the maximum likelihood estimates of the parameters in latent structure analysis, and we provide here a simple and direct proof of the fact that the parameter estimates obtained with the iterative procedure cannot lie outside the allowed interval. Formann recently stated that Goodman's algorithm can yield parameter estimates that lie outside the allowed interval, and we prove in the present note that Formann's contention is incorrect.

This research was supported in part by Research Contract No. NSF SOC 76-80389 from the Division of the Social Sciences of the National Science Foundation. The author is indebted to C. C. Clogg for helpful comments and for the numerical results reported here (see, e.g., Table 1).
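Goodman's procedure is an EM-type algorithm, and the generic latent class EM for binary items sketched below (an illustration, not the exact algorithm of the note) makes the interval result plausible: every M-step update is a weighted average of quantities lying in [0, 1], so the estimates cannot leave that interval. The demo data are simulated from a made-up two-class model.

```python
import numpy as np

def lca_em(X, C=2, n_iter=300, seed=0):
    """EM for a latent class model with C classes and binary items X (n x J).

    Both M-step updates are weighted averages of values in [0, 1], so
    the probability estimates stay inside the unit interval."""
    n, J = X.shape
    rng = np.random.default_rng(seed)
    pi = np.full(C, 1.0 / C)                    # latent class proportions
    p = rng.uniform(0.3, 0.7, size=(C, J))      # item response probabilities
    for _ in range(n_iter):
        # E-step: posterior class membership probabilities (n x C)
        logw = np.log(pi) + X @ np.log(p).T + (1.0 - X) @ np.log(1.0 - p).T
        w = np.exp(logw - logw.max(axis=1, keepdims=True))
        w /= w.sum(axis=1, keepdims=True)
        # M-step: weighted proportions, necessarily inside [0, 1]
        pi = w.mean(axis=0)
        p = np.clip((w.T @ X) / w.sum(axis=0)[:, None], 1e-6, 1 - 1e-6)
    return pi, p

# demo: two well-separated classes, mixing proportions 0.4 / 0.6
rng = np.random.default_rng(1)
z = rng.random(2000) < 0.4
item_probs = np.where(z[:, None], 0.9, 0.1)      # broadcast over 6 items
X = (rng.random((2000, 6)) < item_probs).astype(float)
pi_hat, p_hat = lca_em(X, C=2)
```

The clipping here only guards the logarithms against exact 0/1 proportions; the weighted-average structure of the updates is what keeps the estimates in the allowed interval.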
18.
Anton K. Formann, Psychometrika, 1978, 43(1): 123–126
As the literature indicates, no method is presently available which explicitly takes into account that the parameters of Lazarsfeld's latent class analysis are defined as probabilities and are therefore restricted to the interval [0, 1]. In the present paper an appropriate transform of the parameters is performed in order to satisfy this constraint, and the estimation of the transformed parameters according to the maximum likelihood principle is outlined. In the sequel, a numerical example is given for which the basis solution and the usual maximum likelihood method failed. The different results are compared and the advantages of the proposed method discussed.
19.
A plausible s-factor solution for many types of psychological and educational tests is one that exhibits a general factor and s − 1 group- or method-related factors. The bi-factor solution results from the constraint that each item has a nonzero loading on the primary dimension and on at most one of the s − 1 group factors. This paper derives a bi-factor item-response model for binary response data. In marginal maximum likelihood estimation of item parameters, the bi-factor restriction leads to a major simplification of the likelihood equations and (a) permits analysis of models with large numbers of group factors; (b) permits conditional dependence within identified subsets of items; and (c) provides more parsimonious factor solutions than an unrestricted full-information item factor analysis in some cases.

Supported by the Cognitive Science Program, Office of Naval Research, under Grant #N00014-89-J-1104. We would like to thank Darrell Bock for several helpful suggestions.
20.
A Monte Carlo experiment is conducted to investigate the performance of bootstrap methods in normal theory maximum likelihood factor analysis, both when the distributional assumption is satisfied and when it is violated. The parameters, and functions of them, of interest include unrotated loadings, analytically rotated loadings, and unique variances. The results reveal that (a) bootstrap bias estimation sometimes performs poorly for factor loadings and nonstandardized unique variances; (b) bootstrap variance estimation performs well even when the distributional assumption is violated; (c) bootstrap confidence intervals based on Studentized statistics are recommended; and (d) if the structural hypothesis about the population covariance matrix is taken into account, then the bootstrap distribution of the normal theory likelihood ratio test statistic is close to the corresponding sampling distribution, with a slightly heavier right tail.

This study was carried out in part under the ISM cooperative research program (91-ISM·CRP-85, 92-ISM·CRP-102). The authors would like to thank the editor and three reviewers for their helpful comments and suggestions, which improved the quality of this paper considerably.
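The nonparametric bootstrap scheme studied in experiments like this is easy to sketch: resample respondents with replacement, re-estimate the loadings on each replicate, and summarize the replicates. For brevity the sketch below uses first-principal-axis loadings of the correlation matrix as a stand-in estimator rather than full normal theory ML, and made-up one-factor data; it is illustrative only.

```python
import numpy as np

def first_loadings(X):
    """Loadings on the first principal axis of the correlation matrix
    (a simple stand-in for the ML factor estimator)."""
    R = np.corrcoef(X, rowvar=False)
    w, V = np.linalg.eigh(R)
    v = V[:, -1] * np.sqrt(w[-1])
    return v if v.sum() >= 0 else -v            # fix the sign indeterminacy

def bootstrap_loadings(X, B=200, seed=0):
    """Nonparametric bootstrap: resample rows with replacement,
    re-estimate, and summarize the replicates."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    boots = np.array([first_loadings(X[rng.integers(0, n, size=n)])
                      for _ in range(B)])
    se = boots.std(axis=0, ddof=1)              # bootstrap standard errors
    ci = np.percentile(boots, [2.5, 97.5], axis=0)   # percentile intervals
    return se, ci

# demo on simulated one-factor data
rng = np.random.default_rng(42)
f = rng.normal(size=(300, 1))
X = f @ np.array([[0.8, 0.7, 0.6, 0.5]]) + rng.normal(scale=0.5, size=(300, 4))
se, ci = bootstrap_loadings(X)
```

The paper's finding (c) would replace the percentile intervals here with Studentized (bootstrap-t) intervals, which divide each replicate's deviation by its own estimated standard error.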