Similar Articles (20 results)
1.
The Maxbet method is an alternative to the methods of generalized canonical correlation analysis and of Procrustes analysis. Contrary to these methods, it does not only maximize the inner products (covariances) between linear composites, but also takes their sums of squares (variances) into account. It is well known that the Maxbet algorithm, which has been proven to converge monotonically, may converge to local maxima. The present paper discusses an eigenvalue criterion which is sufficient, but not necessary, for global optimality. However, in two special cases the eigenvalue criterion is shown to be necessary and sufficient for global optimality. The first case is when there are only two data sets involved; the second case is when the inner products between all variables involved are positive, regardless of the number of data sets. The authors are obliged to Henk Kiers for critical comments on a previous draft.
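For readers who want to see the criterion in action, below is a minimal NumPy sketch of a Maxbet-style alternating update (the function name, stopping rule and data assumptions are mine, not the authors'): each weight vector is repeatedly replaced by the normalized sum of cross-products with all blocks, including its own, so that variances as well as covariances enter the criterion. The eigenvalue check for global optimality discussed in the paper is not included.

import numpy as np

def maxbet(blocks, n_iter=500, tol=1e-10, seed=0):
    # blocks: list of column-centered data matrices with the same number of rows.
    # Maximizes the sum over all pairs (k, l), including k == l, of w_k' X_k' X_l w_l
    # subject to ||w_k|| = 1, by monotone block relaxation; may stop at a local maximum.
    rng = np.random.default_rng(seed)
    ws = [rng.standard_normal(X.shape[1]) for X in blocks]
    ws = [w / np.linalg.norm(w) for w in ws]
    prev = -np.inf
    for _ in range(n_iter):
        for k, Xk in enumerate(blocks):
            g = sum(Xk.T @ Xl @ ws[l] for l, Xl in enumerate(blocks))
            ws[k] = g / np.linalg.norm(g)
        crit = sum(ws[k] @ blocks[k].T @ blocks[l] @ ws[l]
                   for k in range(len(blocks)) for l in range(len(blocks)))
        if crit - prev < tol:
            break
        prev = crit
    return ws, crit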

2.
Canonical analysis of two convex polyhedral cones and applications
Canonical analysis of two convex polyhedral cones consists in looking for two vectors (one in each cone) whose squared cosine is a maximum. This paper presents new results about the properties of the optimal solution to this problem, and also discusses in detail the convergence of an alternating least squares algorithm. The set of scalings of an ordinal variable is a convex polyhedral cone, which thus plays an important role in optimal scaling methods for the analysis of ordinal data. Monotone analysis of variance, and correspondence analysis subject to an ordinal constraint on one of the factors, are both canonical analyses of a convex polyhedral cone and a subspace. Optimal multiple regression of a dependent ordinal variable on a set of independent ordinal variables is a canonical analysis of two convex polyhedral cones as long as the signs of the regression coefficients are given. We discuss these three situations and illustrate them by examples.
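The cone-subspace special case (monotone analysis of variance) admits a compact alternating least squares sketch: project onto the subspace, then project back onto the monotone cone via isotonic regression, and repeat. The sketch below is illustrative only, assuming a numeric ordinal variable z and a design matrix X; the helper name is mine, and the paper's algorithm covers the general cone-cone case and convergence in more detail.

import numpy as np
from sklearn.isotonic import IsotonicRegression

def cone_subspace_als(z, X, n_iter=100, tol=1e-12):
    # Maximize the squared cosine between a vector y in the monotone cone
    # generated by the ordinal variable z and a vector x in the column space of X.
    P = X @ np.linalg.pinv(X)                # orthogonal projector onto span(X)
    y = z - z.mean()                         # start from the raw (centered) scores
    y = y / np.linalg.norm(y)
    iso = IsotonicRegression()
    prev = -np.inf
    for _ in range(n_iter):
        x = P @ y                            # best vector in the subspace
        y = iso.fit_transform(z, x)          # projection onto the monotone cone
        y = y / np.linalg.norm(y)
        cos2 = (x @ y) ** 2 / (x @ x)        # squared cosine (y has unit norm)
        if cos2 - prev < tol:
            break
        prev = cos2
    return x, y, cos2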

3.
We propose functional multiple-set canonical correlation analysis for exploring associations among multiple sets of functions. The proposed method includes functional canonical correlation analysis as a special case when only two sets of functions are considered. As in classical multiple-set canonical correlation analysis, the method computationally reduces to a matrix eigen-analysis problem, through a basis expansion approach to approximating the data and weight functions. We apply the proposed method to functional magnetic resonance imaging (fMRI) data to identify networks of neural activity that are commonly activated across subjects while carrying out a working memory task.

4.
5.
A 2 × 2 chi-square can be computed from a phi coefficient, which is the Pearson correlation between two binomial variables. Similarly, chi-square for larger contingency tables can be computed from canonical correlation coefficients. The authors address the following series of issues involving this relationship: (a) how to represent a contingency table in terms of a correlation matrix involving r - 1 row and c - 1 column dummy predictors; (b) how to compute chi-square from canonical correlations solved from this matrix; (c) how to compute loadings for the omitted row and column variables; and (d) the possible interpretive advantage of describing canonical relationships that comprise chi-square, together with some examples. The proposed procedures integrate chi-square analysis of contingency tables with general correlational theory and serve as an introduction to some recent methods of analysis more widely known by sociologists.
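As a quick numerical check of the relationship described here (a minimal sketch with an invented toy table and helper name, not the authors' procedure): the Pearson chi-square of an r × c table equals N times the sum of squared canonical correlations between the centered r - 1 row dummies and c - 1 column dummies.

import numpy as np
from scipy.stats import chi2_contingency

def chi2_from_canonical_correlations(table):
    table = np.asarray(table, dtype=float)
    N = table.sum()
    # expand the table into one case per observation
    rows, cols = np.indices(table.shape)
    r_idx = np.repeat(rows.ravel(), table.ravel().astype(int))
    c_idx = np.repeat(cols.ravel(), table.ravel().astype(int))
    # dummy-code all but the last category of each variable, then center
    R = np.column_stack([(r_idx == i).astype(float) for i in range(table.shape[0] - 1)])
    C = np.column_stack([(c_idx == j).astype(float) for j in range(table.shape[1] - 1)])
    R -= R.mean(axis=0)
    C -= C.mean(axis=0)
    # canonical correlations = singular values of Qr' Qc for orthonormal bases Qr, Qc
    Qr = np.linalg.qr(R)[0]
    Qc = np.linalg.qr(C)[0]
    rho = np.linalg.svd(Qr.T @ Qc, compute_uv=False)
    return N * np.sum(rho ** 2)

toy = [[10, 20, 30], [25, 15, 10]]
chi2, p, dof, expected = chi2_contingency(toy, correction=False)
print(chi2_from_canonical_correlations(toy), chi2)   # the two values agree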

7.
The aim of this study was to demonstrate how personality test data can be plotted with a multivariate method known as Partial Least Squares of Latent Structures (PLS). The basic methodology behind PLS modeling is presented, and the example demonstrates how a PLS model of personality test data can be used for diagnostic prediction. Principles for validating the models are also presented. The conclusion is that PLS modeling appears to be a powerful method for extracting clinically relevant information from complex personality test data matrices. It could be used as a complement to harder modeling methods when examining a new area of interest.
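For orientation only, a minimal sketch of PLS-based diagnostic prediction using scikit-learn and synthetic data (the scales, coefficients and sample size below are invented stand-ins, not the article's personality test battery):

import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(3)
scales = rng.standard_normal((120, 10))                     # 10 personality scale scores
diagnosis = scales[:, :3] @ np.array([0.6, 0.4, 0.3]) + rng.standard_normal(120)

pls = PLSRegression(n_components=2).fit(scales, diagnosis)
print(pls.score(scales, diagnosis))                         # in-sample R^2 of the 2-component model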

8.
The interrelationships between two sets of measurements made on the same subjects can be studied by canonical correlation. Originally developed by Hotelling [1936], the canonical correlation is the maximum correlation between linear functions (canonical factors) of the two sets of variables. An alternative statistic to investigate the interrelationships between two sets of variables is the redundancy measure, developed by Stewart and Love [1968]. Van Den Wollenberg [1977] has developed a method of extracting factors which maximize redundancy, as opposed to canonical correlation. A component method is presented which maximizes user-specified convex combinations of canonical correlation and the two nonsymmetric redundancy measures presented by Stewart and Love. Monte Carlo work comparing canonical correlation analysis, redundancy analysis, and various canonical/redundancy factoring analyses on the Van Den Wollenberg data is presented. An empirical example is also provided. Wayne S. DeSarbo is a Member of Technical Staff at Bell Laboratories in the Mathematics and Statistics Research Group at Murray Hill, N.J. I wish to express my appreciation to J. Kettenring, J. Kruskal, C. Mallows, and R. Gnanadesikan for their valuable technical assistance and/or for comments on an earlier draft of this paper. I also wish to thank the editor and reviewers of this paper for their insightful remarks.
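For concreteness, a minimal sketch of Stewart and Love's redundancy index (my own helper, not the component method proposed in this paper): the total redundancy of set Y given set X equals the average squared multiple correlation of each Y variable with the full X set, and it is not symmetric in the two sets.

import numpy as np

def total_redundancy(X, Y):
    # Average R^2 of each Y variable regressed on all X variables.
    Xc = X - X.mean(0)
    Yc = Y - Y.mean(0)
    Q = np.linalg.qr(Xc)[0]                       # orthonormal basis for the X space
    Yhat = Q @ (Q.T @ Yc)                         # least-squares fit of each Y variable
    r2 = (Yhat ** 2).sum(0) / (Yc ** 2).sum(0)
    return r2.mean()

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 4))
Y = X @ rng.standard_normal((4, 3)) + rng.standard_normal((100, 3))
print(total_redundancy(X, Y), total_redundancy(Y, X))   # generally unequal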

9.
Multiple‐set canonical correlation analysis and principal components analysis are popular data reduction techniques in various fields, including psychology. Both techniques aim to extract a series of weighted composites or components of observed variables for the purpose of data reduction. However, their objectives of performing data reduction are different. Multiple‐set canonical correlation analysis focuses on describing the association among several sets of variables through data reduction, whereas principal components analysis concentrates on explaining the maximum variance of a single set of variables. In this paper, we provide a unified framework that combines these seemingly incompatible techniques. The proposed approach embraces the two techniques as special cases. More importantly, it permits a compromise between the techniques in yielding solutions. For instance, we may obtain components in such a way that they maximize the association among multiple data sets, while also accounting for the variance of each data set. We develop a single optimization function for parameter estimation, which is a weighted sum of two criteria for multiple‐set canonical correlation analysis and principal components analysis. We minimize this function analytically. We conduct simulation studies to investigate the performance of the proposed approach based on synthetic data. We also apply the approach for the analysis of functional neuroimaging data to illustrate its empirical usefulness.

10.
The purpose of this article is to reduce potential statistical barriers and open doors to canonical correlation analysis (CCA) for applied behavioral scientists and personality researchers. CCA was selected for discussion, as it represents the highest level of the general linear model (GLM) and can be rather easily conceptualized as a method closely linked with the more widely understood Pearson r correlation coefficient. An understanding of CCA can lead to a more global appreciation of other univariate and multivariate methods in the GLM. We attempt to demonstrate CCA with basic language, using technical terminology only when necessary for understanding and use of the method. We present a complete CCA example using SPSS (Version 11.0) with personality data.
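The article works through its example in SPSS; for readers working in Python, a rough equivalent sketch with scikit-learn and synthetic data (not the article's) shows the link to the Pearson r: each canonical correlation is simply the correlation between one pair of canonical variates.

import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
latent = rng.standard_normal((200, 2))
X = latent @ rng.standard_normal((2, 4)) + 0.5 * rng.standard_normal((200, 4))
Y = latent @ rng.standard_normal((2, 3)) + 0.5 * rng.standard_normal((200, 3))

cca = CCA(n_components=2)
Xc, Yc = cca.fit_transform(X, Y)
for i in range(2):
    # canonical correlation i = Pearson r between the i-th pair of variates
    print(np.corrcoef(Xc[:, i], Yc[:, i])[0, 1])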

11.
Blockmodel approaches to network analysis as developed by Harrison White are shown to fall in a broader class of established data analysis methods based on matrix permutations (e.g., clique detection, seriation, permutation algorithms for sparse matrices). Blockmodels are seen as an important generalization of these earlier methods since they permit the data to characterize their own structure, instead of seeking to manifest some preconceived structure which is imposed by the investigator (e.g., cliques, hierarchies, or structural balance). General algorithms for the inductive construction of blockmodels thus occupy a central position in the development of the area. We discuss theoretical and practical aspects of the blockmodel search procedure which has been most widely used (CONCOR algorithm). It is proposed that the distinctive and advantageous feature of CONCOR is that it solves what is initially presented as a combinatorial problem (permutations of matrices to reveal zeroblocks) by representing the problem as a continuous one (analysis of correlation matrices). When this representation strategy receives further development, it is predicted that the fairly crude empirical approach of CONCOR will be supplanted by more powerful procedures within this same class.
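To make the continuous-representation idea concrete, here is a bare-bones sketch of one CONCOR split (my own reading of the published procedure, not the authors' code): correlate the columns of the stacked relational data, iterate the correlation operation until the entries converge to +1 or -1, and partition the nodes by sign.

import numpy as np

def concor_split(A, max_iter=1000, tol=1e-8):
    # A: data matrix whose columns correspond to nodes (e.g., a stacked
    # adjacency matrix); columns are assumed not to be constant.
    M = np.corrcoef(A, rowvar=False)
    for _ in range(max_iter):
        M_new = np.corrcoef(M, rowvar=False)
        if np.max(np.abs(M_new - M)) < tol:
            M = M_new
            break
        M = M_new
    # nodes whose limiting correlation with the first node is positive form one block
    in_block = M[0] > 0
    return np.where(in_block)[0], np.where(~in_block)[0]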

12.
Correspondence analysis (CA) is a popular method that can be used to analyse relationships between categorical variables. It is closely related to several popular multivariate analysis methods such as canonical correlation analysis and principal component analysis. Like principal component analysis, CA solutions can be rotated orthogonally as well as obliquely into a simple structure without affecting the total amount of explained inertia. However, some specific aspects of CA prevent standard rotation procedures from being applied in a straightforward fashion. In particular, the role played by weights assigned to points and dimensions and the duality of CA solutions are unique to CA. For orthogonal simple structure rotation, procedures have recently been proposed. In this paper, we construct oblique rotation methods for CA that take these specific difficulties into account. We illustrate the benefits of our oblique rotation procedure by means of two examples.

13.
It is the purpose of this paper to present a method of analysis for obtaining (i) inter-battery factors and (ii) battery specific factors for two sets of tests when the complete correlation matrix including communalities is given. In particular, the procedure amounts to constructing an orthogonal transformation such that its application to an orthogonal factor solution of the combined sets of tests results in a factor matrix of a certain desired form. The factors isolated are orthogonal but may be subjected to any suitable final rotation, provided the above classification of factors into (i) and (ii) is preserved. The general coordinate-free solution of the problem is obtained with the help of methods pertaining to the theory of linear spaces. The actual numerical analysis determined by the coordinate-free solution turns out to be a generalization of the formalism of canonical correlation analysis for two sets of variables. A numerical example is provided. This investigation has been supported by the U.S. Office of Naval Research under Contract Nonr-2752(00).

14.
An examination of the determinantal equation associated with Rao's canonical factors suggests that Guttman's best lower bound for the number of common factors corresponds to the number of positive canonical correlations when squared multiple correlations are used as the initial estimates of communality. When these initial communality estimates are used, solving Rao's determinantal equation (at the first stage) permits expressing several matrices as functions of factors that differ only in the scale of their columns; these matrices include the correlation matrix with units in the diagonal, the correlation matrix with squared multiple correlations as communality estimates, Guttman's image covariance matrix, and Guttman's anti-image covariance matrix. Further, the factor scores associated with these factors can be shown to be either identical or simply related by a scale change. Implications for practice are discussed, and a computing scheme which would lead to an exhaustive analysis of the data with several optional outputs is outlined.
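A small sketch of the lower bound referred to here, following the correspondence stated in the abstract (the helper is mine and counts eigenvalues of the SMC-reduced correlation matrix rather than solving Rao's determinantal equation):

import numpy as np

def guttman_lower_bound(R):
    # Number of positive eigenvalues of the correlation matrix after replacing
    # its diagonal with squared multiple correlations (SMCs).
    R = np.asarray(R, dtype=float)
    smc = 1.0 - 1.0 / np.diag(np.linalg.inv(R))   # SMC of each variable on the others
    R_reduced = R.copy()
    np.fill_diagonal(R_reduced, smc)
    return int(np.sum(np.linalg.eigvalsh(R_reduced) > 0))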

15.
A nonparametric analogue of canonical correlation analysis, called P-CFA, was used to determine how the Eysenck Personality Inventory is related to Cattell's Sixteen Personality Factors Questionnaire. The P-CFA methods used showed that Eysenck's personality dimensions of Extraversion and Neuroticism can be predicted quite well from suitably chosen primary scales in the Cattell questionnaire. The results were consistent with those obtained in another study using canonical analysis. Beyond that, P-CFA methods were used to predict Eysenckian trait-type categories from Cattell's primary scales. The results were weak and not predictable from canonical analysis, but showed potential analytic capabilities of P-CFA that are not available in canonical analysis.

16.
Redundancy analysis: an alternative for canonical correlation analysis
A component method is presented maximizing Stewart and Love's redundancy index. Relationships with multiple correlation and principal component analysis are pointed out, and a rotational procedure for obtaining bi-orthogonal variates is given. An elaborate example comparing canonical correlation analysis and redundancy analysis on artificial data is presented. A Fortran IV program for the method of redundancy analysis described in this paper can be obtained from the author upon request.
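As a rough illustration of the criterion (a sketch under my own conventions, not the paper's Fortran program): the redundancy variates of X with respect to a standardized Y solve the generalized eigenproblem Sxy Syx a = lambda Sxx a, where the eigenvalue equals the sum of squared correlations between the variate Xa and the Y variables.

import numpy as np
from scipy.linalg import eigh

def redundancy_weights(X, Y, n_components=1):
    n = X.shape[0]
    Xs = (X - X.mean(0)) / X.std(0)
    Ys = (Y - Y.mean(0)) / Y.std(0)
    Sxx = Xs.T @ Xs / n
    Sxy = Xs.T @ Ys / n
    # generalized symmetric eigenproblem; eigenvectors satisfy a' Sxx a = 1,
    # i.e. the extracted variates have unit variance
    vals, vecs = eigh(Sxy @ Sxy.T, Sxx)
    order = np.argsort(vals)[::-1][:n_components]
    return vecs[:, order], vals[order]      # weights and redundancy contributions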

17.
A computer-assisted, K-fold crossvalidation technique is discussed within the framework of canonical correlation analysis of randomly generated data sets. Results of the analysis suggest that this technique of multi-crossvalidation can be an effective method to reduce the contamination of canonical variates and canonical correlations by sample-specific variance components.
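A minimal sketch of the general idea (names and the exact folding scheme are mine; the article's multi-crossvalidation procedure may differ in detail): fit the CCA on the training folds and correlate the transformed held-out cases, so that sample-specific variance no longer inflates the canonical correlation.

import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.model_selection import KFold

def crossvalidated_canonical_r(X, Y, n_splits=5, seed=0):
    rs = []
    for train, test in KFold(n_splits, shuffle=True, random_state=seed).split(X):
        cca = CCA(n_components=1).fit(X[train], Y[train])
        u, v = cca.transform(X[test], Y[test])
        rs.append(np.corrcoef(u[:, 0], v[:, 0])[0, 1])
    return float(np.mean(rs))

# With pure-noise sets the in-sample canonical r is inflated by chance,
# while the crossvalidated value stays near zero.
rng = np.random.default_rng(2)
X, Y = rng.standard_normal((60, 5)), rng.standard_normal((60, 4))
print(crossvalidated_canonical_r(X, Y))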

18.
Although much progress has been made in clarifying the properties of canonical correlation analysis in order to enhance its applicability, there are several remaining problems. Canonical variates do not always represent the observed variables even though the canonical correlation is high. In addition, canonical solutions are often difficult to interpret.

This paper presents a method designed to deal with these two problems. Instead of maximizing the correlation between unobserved variates, the sum of squared inter-set loadings is maximized. Contrary to the canonical correlation solution, this method ensures that the shared variance between predictor variates and criterion variables is maximal. Instead of extracting variates from both criterion and predictor variables, only one set of components (from the predictor variables) is constructed. Without loss of common variance, an orthogonal rotation is applied to the resulting loadings in order to simplify structure.

19.
A method is presented for generalized canonical correlation analysis of two or more matrices with missing rows. The method is a combination of Carroll’s (1968) method and the missing data approach of the OVERALS technique (Van der Burg, 1988). In a simulation study we assess the performance of the method and compare it to an existing procedure called GENCOM, proposed by Green and Carroll (1988). We find that the proposed method outperforms the GENCOM algorithm both with respect to model fit and recovery of the true structure. The research of Michel van de Velden was partly funded through EU Grant HPMF-CT-2000-00664. The authors would like to thank the associate editor and three anonymous referees for their constructive comments and suggestions that led to a considerable improvement of the paper.
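For context, Carroll's (1968) method for complete data has a compact form: the group configuration consists of the leading eigenvectors of the sum of the blocks' orthogonal projection matrices. The sketch below shows only that complete-data core (helper name mine); the paper's actual contribution, handling matrices with missing rows, is not reproduced here.

import numpy as np

def carroll_gcca(blocks, n_components=2):
    # blocks: list of data matrices sharing the same rows (objects).
    n = blocks[0].shape[0]
    P_sum = np.zeros((n, n))
    for X in blocks:
        Xc = X - X.mean(0)
        Q = np.linalg.qr(Xc)[0]          # orthonormal basis of the block's column space
        P_sum += Q @ Q.T                 # orthogonal projector onto that space
    vals, vecs = np.linalg.eigh(P_sum)
    idx = np.argsort(vals)[::-1][:n_components]
    return vecs[:, idx], vals[idx]       # group configuration and fit per dimension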

20.