Found 20 similar documents (search time: 0 ms)
1.
The authors provide a didactic treatment of nonlinear (categorical) principal components analysis (PCA). This method is the nonlinear equivalent of standard PCA and reduces the observed variables to a number of uncorrelated principal components. The most important advantages of nonlinear over linear PCA are that it incorporates nominal and ordinal variables and that it can handle and discover nonlinear relationships between variables. Nonlinear PCA can also deal with variables at their appropriate measurement level; for example, it can treat Likert-type scales ordinally instead of numerically. Every observed value of a variable can be referred to as a category. Nonlinear PCA converts every category to a numeric value, in accordance with the variable's analysis level, using optimal quantification. The authors discuss how optimal quantification is carried out, what analysis levels are, which decisions have to be made when applying nonlinear PCA, and how the results can be interpreted. The strengths and limitations of the method are discussed. An example applying nonlinear PCA to empirical data using the program CATPCA (J. J. Meulman, W. J. Heiser, & SPSS, 2004) is provided.
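To illustrate the general idea of quantifying categories before extracting components, here is a minimal sketch. It is not CATPCA: instead of iteratively optimized quantifications, it uses a crude fixed quantification (each ordinal category mapped to its rank) followed by linear PCA. All names and the Likert-type data are hypothetical.

```python
import numpy as np

def rank_quantify(column):
    """Map each ordinal category to its rank order (a crude, fixed
    quantification; CATPCA would instead optimize these values)."""
    cats = sorted(set(column))
    lookup = {c: i for i, c in enumerate(cats)}
    return np.array([lookup[c] for c in column], dtype=float)

def pca(X, n_components=2):
    """Linear PCA via eigendecomposition of the correlation matrix."""
    Z = (X - X.mean(0)) / X.std(0)
    eigvals, eigvecs = np.linalg.eigh(np.corrcoef(Z, rowvar=False))
    order = np.argsort(eigvals)[::-1][:n_components]
    return eigvals[order], eigvecs[:, order], Z @ eigvecs[:, order]

# Hypothetical Likert-type responses (categories 1-5), 3 items, 6 respondents
data = np.array([[1, 2, 1], [2, 2, 2], [3, 3, 2],
                 [3, 4, 4], [4, 4, 5], [5, 5, 5]])
X = np.column_stack([rank_quantify(data[:, j]) for j in range(data.shape[1])])
eigvals, loadings, scores = pca(X)
```

The point of the sketch is only the two-stage structure (quantify, then reduce); the optimal-quantification step the authors describe replaces `rank_quantify` with an alternating least squares update of the category values.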
2.
3.
Principal components analysis (PCA) of face images is here related to subjects’ performance on the same images. In two experiments subjects were shown a set of faces and asked to rate them for distinctiveness. They were subsequently shown a superset of faces and asked to identify those that had appeared originally. Replicating previous work, we found that hits and false positives (FPs) did not correlate: Those faces easy to identify as being “seen” were unrelated to those faces easy to reject as being “unseen.” PCA was performed on three data sets: (1) face images with eye position standardized, (2) face images morphed to a standard template to remove shape information, and (3) the shape information from faces only. Analyses based on PCA of shape-free faces gave high predictions of FPs, whereas shape information itself contributed only to hits. Furthermore, whereas FPs were generally predictable from components early in the PCA, hits appeared to be accounted for by later components. We conclude that shape and “texture” (the image-based information remaining after morphing) may be used separately by the human face processing system, and that PCA of images offers a useful tool for understanding this system.
4.
In many human movement studies, angle-time series data are measured on several groups of individuals. Current methods to compare groups include comparing the mean value in each group or applying multivariate techniques such as principal components analysis and testing the principal component scores. Such methods have been useful, but they discard a large amount of information. Functional data analysis (FDA) is an emerging statistical technique in human movement research which treats the angle-time series as a function rather than a series of discrete measurements, retaining all of the information in the data. Functional principal components analysis (FPCA) is an extension of multivariate principal components analysis which examines the variability of a sample of curves, and it has been used to examine differences in movement patterns between groups of individuals. Currently, the functional principal components (FPCs) for each group are either determined separately (yielding components that are group-specific) or by combining the data for all groups and determining the FPCs of the combined data (yielding components that summarize the entire data set). The group-specific FPCs contain both within- and between-group variation, and issues arise when comparing FPCs across groups if the order of the FPCs differs between groups. The FPCs of the combined data may not adequately describe all groups of individuals, and comparisons between groups typically use t-tests of the mean FPC scores in each group. When these differences are statistically non-significant, it can be difficult to determine how a particular intervention is affecting movement patterns or how injured subjects differ from controls. In this paper we aim to perform FPCA in a manner allowing sensible comparisons between groups of curves, implementing a statistical technique called common functional principal components analysis (CFPCA). CFPCA identifies the common sources of variation evident across groups but allows the order of each component to change for a particular group. This allows for the direct comparison of components across groups. We use our method to analyze a biomechanical data set examining the mechanisms of chronic Achilles tendon injury and the functional effects of orthoses.
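The discretized mechanics behind FPCA can be sketched briefly. The code below is not the CFPCA estimator: it shows the simpler pooled-data variant the abstract contrasts with, in which FPCs are estimated from the combined covariance and both groups are scored on the same components. The curves and grid are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 101)            # common time grid (e.g. % of stance phase)

# Hypothetical angle-time curves for two groups: a shared mean pattern plus
# two modes of variation whose prominence differs between groups.
mean = 20 * np.sin(np.pi * t)
mode1, mode2 = np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)
group_a = mean + 5 * rng.standard_normal((30, 1)) * mode1 \
               + 1 * rng.standard_normal((30, 1)) * mode2
group_b = mean + 1 * rng.standard_normal((30, 1)) * mode1 \
               + 5 * rng.standard_normal((30, 1)) * mode2

def fpca(curves, n_components=2):
    """Discretized FPCA: eigendecompose the sample covariance of the curves."""
    centered = curves - curves.mean(0)
    cov = centered.T @ centered / (len(curves) - 1)
    eigvals, eigvecs = np.linalg.eigh(cov)
    idx = np.argsort(eigvals)[::-1][:n_components]
    return eigvals[idx], eigvecs[:, idx], centered @ eigvecs[:, idx]

# Pooled estimation: both groups are scored on the same components, so the
# scores are directly comparable (CFPCA additionally lets component order
# vary per group).
pooled = np.vstack([group_a, group_b])
_, fpcs, _ = fpca(pooled)
scores_a = (group_a - pooled.mean(0)) @ fpcs
scores_b = (group_b - pooled.mean(0)) @ fpcs
```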
5.
Devin M. Burns, Joseph W. Houpt, James T. Townsend, Michael J. Endres. Behavior Research Methods, 2013, 45(4): 1048-1057
Workload capacity, an important concept in many areas of psychology, describes processing efficiency across changes in workload. The capacity coefficient is a function across time that provides a useful measure of this construct. Until now, most analyses of the capacity coefficient have focused on the magnitude of this function, and often only in terms of a qualitative comparison (greater than or less than one). This work explains how a functional extension of principal components analysis can capture the time-extended information of these functional data, using a small number of scalar values chosen to emphasize the variance between participants and conditions. This approach provides many possibilities for a more fine-grained study of differences in workload capacity across tasks and individuals.
6.
Lester C. Shine II. Psychometrika, 1972, 37(1): 99-101
It is shown that McDonald's generalization of classical Principal Components Analysis to groups of variables maximally channels the total variance of the original variables through the groups of variables acting as groups. A useful equation is obtained for determining the vectors of correlations of the L2 components with the original variables. A calculation example is given.
7.
Motes MA, Hubbard TL, Courtney JR, Rypma B. Journal of Experimental Psychology: Learning, Memory, and Cognition, 2008, 34(5): 1076-1083
Research has shown that spatial memory for moving targets is often biased in the direction of implied momentum and implied gravity, suggesting that representations of the subjective experiences of these physical principles contribute to such biases. The present study examined the association between these spatial memory biases. Observers viewed targets that moved horizontally from left to right before disappearing or viewed briefly shown stationary targets. After a target disappeared, observers indicated the vanishing position of the target. Principal components analysis revealed that biases along the horizontal axis of motion loaded on separate components from biases along the vertical axis orthogonal to motion. The findings support the hypothesis that implied momentum and implied gravity biases have unique influences on spatial memory.
8.
This paper presents a nontechnical, conceptually oriented introduction to wavelet analysis and its application to neuroelectric waveforms such as the EEG and event related potentials (ERP). Wavelet analysis refers to a growing class of signal processing techniques and transforms that use wavelets and wavelet packets to decompose and manipulate time-varying, nonstationary signals. Neuroelectric waveforms fall into this category of signals because they typically have frequency content that varies as a function of time and recording site. Wavelet techniques can optimize the analysis of such signals by providing excellent joint time-frequency resolution. The ability of wavelet analysis to accurately resolve neuroelectric waveforms into specific time and frequency components leads to several analysis applications. Some of these applications are time-varying filtering for denoising single trial ERPs, EEG spike and spindle detection, ERP component separation and measurement, hearing-threshold estimation via auditory brainstem evoked response measurements, isolation of specific EEG and ERP rhythms, scale-specific topographic analysis, and dense-sensor array data compression. The present tutorial describes the basic concepts of wavelet analysis that underlie these and other applications. In addition, the application of a recently developed method of custom designing Meyer wavelets to match the waveshapes of particular neuroelectric waveforms is illustrated. Matched wavelets are physiologically sensible pattern analyzers for EEG and ERP waveforms and their superior performance is illustrated with real data examples.
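The time-varying-filtering application mentioned above (denoising a single-trial ERP by thresholding wavelet detail coefficients) can be illustrated self-containedly. This sketch uses a hand-rolled Haar transform rather than the custom-matched Meyer wavelets the tutorial describes, and the ERP-like signal, noise level, and threshold are all hypothetical.

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar wavelet transform: approximation + detail."""
    pairs = x.reshape(-1, 2)
    return ((pairs[:, 0] + pairs[:, 1]) / np.sqrt(2),
            (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2))

def haar_idwt(approx, detail):
    """Invert one level of the Haar transform."""
    out = np.empty(2 * len(approx))
    out[0::2] = (approx + detail) / np.sqrt(2)
    out[1::2] = (approx - detail) / np.sqrt(2)
    return out

def denoise(signal, levels=3, thresh=1.0):
    """Soft-threshold the detail coefficients, keep the approximations."""
    details, approx = [], signal
    for _ in range(levels):
        approx, d = haar_dwt(approx)
        details.append(np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0))
    for d in reversed(details):
        approx = haar_idwt(approx, d)
    return approx

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 256)
erp = 5 * np.exp(-((t - 0.3) ** 2) / 0.002)    # hypothetical ERP-like peak
noisy = erp + rng.standard_normal(256)         # one noisy "trial"
clean = denoise(noisy)
```

Because the thresholding acts per coefficient, noise is suppressed at each time and scale separately, which is what distinguishes this from a fixed band-pass filter.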
9.
10.
We examined the structure of 9 Rorschach variables related to hostility and aggression (Aggressive Movement, Morbid, Primary Process Aggression, Secondary Process Aggression, Aggressive Content, Aggressive Past, Strong Hostility, Lesser Hostility) in a sample of medical students (N = 225) from the Johns Hopkins Precursors Study (The Johns Hopkins University, 1999). Principal components analysis revealed 2 dimensions accounting for 58% of the total variance. These dimensions extended previous findings for a 2-component model of Rorschach aggressive imagery that had been identified using just 5 or 6 marker variables (Baity & Hilsenroth, 1999; Liebman, Porcerelli, & Abell, 2005). In light of this evidence, we draw an empirical link between the historical research literature and current studies of Rorschach aggression and hostility that helps organize their findings. We also offer suggestions for condensing the array of aggression-related measures to simplify Rorschach aggression scoring.
11.
Wilcox RR. Behavior Research Methods, 2008, 40(1): 102-108
This article compares several methods for performing robust principal component analysis, two of which have not been considered in previous articles. The criterion here, unlike that of extant articles aimed at comparing methods, is how well a method maximizes a robust version of the generalized variance of the projected data. This is in contrast to maximizing some measure of scatter associated with the marginal distributions of the projected scores, which does not take into account the overall structure of the projected data. Included are comparisons in which distributions are not elliptically symmetric. One of the new methods simply removes outliers using a projection-type multivariate outlier detection method that has been found to perform well relative to other outlier detection methods that have been proposed. The other new method belongs to the class of projection pursuit techniques and differs from other projection pursuit methods in terms of the function it tries to maximize. The comparisons include the method derived by Maronna (2005), the spherical method derived by Locantore et al. (1999), as well as a method proposed by Hubert, Rousseeuw, and Vanden Branden (2005). From the perspective used, the method by Hubert et al. (2005), the spherical method, and one of the new methods dominate the method derived by Maronna.
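A minimal sketch of the remove-outliers-then-PCA idea follows. The random-direction detector and cutoff below are simplified stand-ins, not the specific projection-type detector compared in the article; the data are synthetic.

```python
import numpy as np

def projection_outliers(X, n_dirs=500, cutoff=3.0, seed=0):
    """Flag points whose robustly standardized score along some random
    projection direction is extreme (a simple stand-in for a
    projection-type multivariate outlier detector)."""
    rng = np.random.default_rng(seed)
    dirs = rng.standard_normal((n_dirs, X.shape[1]))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    proj = X @ dirs.T                               # (n, n_dirs)
    med = np.median(proj, axis=0)
    mad = np.median(np.abs(proj - med), axis=0) * 1.4826
    outlyingness = np.max(np.abs(proj - med) / mad, axis=1)
    return outlyingness > cutoff * np.median(outlyingness)

def robust_pca(X, n_components=1):
    """Ordinary PCA after discarding flagged outliers."""
    keep = ~projection_outliers(X)
    Xc = X[keep] - X[keep].mean(0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return vt[:n_components]

rng = np.random.default_rng(2)
X = rng.standard_normal((200, 2)) @ np.diag([3.0, 1.0])  # main axis along x
X[:10] = [0.0, 25.0]                                     # gross outliers off-axis
axis = robust_pca(X)[0]
```

With the outliers removed, the first component recovers the x-axis; classical PCA on the contaminated data would be pulled toward the outliers instead.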
12.
Stability of nonlinear principal components analysis: an empirical study using the balanced bootstrap (cited 1 time: 0 self-citations, 1 by others)
Principal components analysis (PCA) is used to explore the structure of data sets containing linearly related numeric variables. Alternatively, nonlinear PCA can handle possibly nonlinearly related numeric as well as nonnumeric variables. For linear PCA, the stability of its solution can be established under the assumption of multivariate normality. For nonlinear PCA, however, standard options for establishing stability are not provided. The authors use the nonparametric bootstrap procedure to assess the stability of nonlinear PCA results, applied to empirical data. They use confidence intervals for the variable transformations and confidence ellipses for the eigenvalues, the component loadings, and the person scores. They discuss the balanced version of the bootstrap, bias estimation, and Procrustes rotation. To provide a benchmark, the same bootstrap procedure is applied to linear PCA on the same data. On the basis of the results, the authors advise using at least 1,000 bootstrap samples, using Procrustes rotation on the bootstrap results, examining the bootstrap distributions along with the confidence regions, and merging categories with small marginal frequencies to reduce the variance of the bootstrap results.
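The bootstrap-with-Procrustes workflow can be sketched for the linear-PCA benchmark case (the nonlinear case additionally bootstraps the quantifications). The data are hypothetical, and only 200 replicates are drawn here for brevity, whereas the authors advise at least 1,000.

```python
import numpy as np

def pca_loadings(X, k=2):
    """Component loadings from linear PCA of standardized data."""
    Z = (X - X.mean(0)) / X.std(0)
    vals, vecs = np.linalg.eigh(np.cov(Z, rowvar=False))
    idx = np.argsort(vals)[::-1][:k]
    return vecs[:, idx] * np.sqrt(vals[idx])

def procrustes_rotate(A, target):
    """Orthogonally rotate loading matrix A to best match the target,
    removing arbitrary sign flips and rotations between bootstrap runs."""
    u, _, vt = np.linalg.svd(A.T @ target)
    return A @ (u @ vt)

rng = np.random.default_rng(3)
n, p = 300, 4
latent = rng.standard_normal((n, 1))
X = latent @ rng.standard_normal((1, p)) + 0.5 * rng.standard_normal((n, p))

ref = pca_loadings(X)                       # full-sample solution as target
boot = np.array([
    procrustes_rotate(pca_loadings(X[rng.integers(0, n, n)]), ref)
    for _ in range(200)
])
se = boot.std(0)                            # bootstrap SEs of the loadings
```

Without the Procrustes step, sign and order indeterminacies across resamples would inflate the apparent variability of the loadings, which is exactly why the authors recommend it.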
13.
14.
15.
In this study, participants rated previously unseen faces on six dimensions: familiarity, distinctiveness, attractiveness, memorability, typicality, and resemblance to a familiar person. The faces were then presented again in a recognition test in which participants assigned their positive recognition decisions to remember (R), know (K), or guess categories. On all dimensions except typicality, faces that were categorized as R responses were associated with significantly higher ratings than were faces categorized as K responses. Study ratings for R and K responses were then subjected to a principal components analysis. The factor loadings suggested that R responses were influenced primarily by the distinctiveness of faces, but K responses were influenced by moderate ratings on all six dimensions. These findings indicate that the structural features of a face influence the subjective experience of recognition.
16.
17.
Heungsun Hwang, Kwanghee Jung, Yoshio Takane, Todd S. Woodward. The British Journal of Mathematical and Statistical Psychology, 2013, 66(2): 308-321
Multiple-set canonical correlation analysis and principal components analysis are popular data reduction techniques in various fields, including psychology. Both techniques aim to extract a series of weighted composites or components of observed variables for the purpose of data reduction. However, their objectives of performing data reduction are different. Multiple-set canonical correlation analysis focuses on describing the association among several sets of variables through data reduction, whereas principal components analysis concentrates on explaining the maximum variance of a single set of variables. In this paper, we provide a unified framework that combines these seemingly incompatible techniques. The proposed approach embraces the two techniques as special cases. More importantly, it permits a compromise between the techniques in yielding solutions. For instance, we may obtain components in such a way that they maximize the association among multiple data sets, while also accounting for the variance of each data set. We develop a single optimization function for parameter estimation, which is a weighted sum of two criteria for multiple-set canonical correlation analysis and principal components analysis. We minimize this function analytically. We conduct simulation studies to investigate the performance of the proposed approach based on synthetic data. We also apply the approach for the analysis of functional neuroimaging data to illustrate its empirical usefulness.
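The flavour of such a compromise can be sketched as follows. This interpolation between a MAXVAR-style multiple-set CCA criterion (sum of projectors) and a pooled, trace-normalized covariance criterion is an illustrative stand-in, not the authors' optimization function; the data sets and the mixing weight `alpha` are hypothetical.

```python
import numpy as np

def compromise_components(datasets, alpha=0.5, k=1):
    """Shared component scores from an eigenproblem that interpolates
    between a multiple-set CCA-like criterion (alpha=1) and a pooled
    variance criterion (alpha=0). Not the authors' exact estimator."""
    n = datasets[0].shape[0]
    M = np.zeros((n, n))
    for X in datasets:
        Xc = X - X.mean(0)
        S = Xc @ Xc.T                                 # (co)variance part
        P = Xc @ np.linalg.pinv(Xc.T @ Xc) @ Xc.T     # projector: association part
        M += alpha * P + (1 - alpha) * S / np.trace(S)
    vals, vecs = np.linalg.eigh(M)
    idx = np.argsort(vals)[::-1][:k]
    return vecs[:, idx] * np.sqrt(n)

# Two hypothetical data sets sharing one latent signal
rng = np.random.default_rng(4)
shared = rng.standard_normal((100, 1))
X1 = shared @ rng.standard_normal((1, 5)) + 0.3 * rng.standard_normal((100, 5))
X2 = shared @ rng.standard_normal((1, 8)) + 0.3 * rng.standard_normal((100, 8))
scores = compromise_components([X1, X2], alpha=0.5)
```

At intermediate `alpha` the leading component both tracks the between-set association and carries a large share of each set's variance, which is the kind of compromise the abstract describes.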
18.
We describe InterFace, a software package for research in face recognition. The package supports image warping, reshaping, averaging of multiple face images, and morphing between faces. It also supports principal components analysis (PCA) of face images, along with tools for exploring the “face space” produced by PCA. The package uses a simple graphical user interface, allowing users to perform these sophisticated image manipulations without any need for programming knowledge. The program is available for download in the form of an app, which requires that users also have access to the (freely available) MATLAB Runtime environment.
19.
20.
Frederic M. Lord. Psychometrika, 1958, 23(4): 291-296
Guttman's principal components for the weighting system are the item scoring weights that maximize the generalized Kuder-Richardson reliability coefficient. The principal component for any item is effectively the same as the factor loading of the item divided by the item standard deviation, the factor loadings being obtained from an ordinary factor analysis of the item intercorrelation matrix.
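In symbols (the notation here is mine, not the paper's): writing $a_i$ for the loading of item $i$ from a factor analysis of the item intercorrelation matrix and $s_i$ for the standard deviation of item $i$, the result says the reliability-maximizing scoring weights satisfy

```latex
w_i \;\propto\; \frac{a_i}{s_i}, \qquad i = 1, \dots, k,
```

and these $w_i$ are precisely the weights that maximize the generalized Kuder-Richardson reliability coefficient.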