Similar articles
20 similar articles found (search time: 31 ms)
1.
One of the intriguing questions of factor analysis is the extent to which one can reduce the rank of a symmetric matrix by changing only its diagonal entries. We show in this paper that the set of matrices which can be reduced to rank r has positive (Lebesgue) measure if and only if r is greater than or equal to the Ledermann bound. In other words, the Ledermann bound is shown to be, almost surely, the greatest lower bound to a reduced rank of the sample covariance matrix. An asymptotic sampling theory of so-called minimum trace factor analysis (MTFA) is then proposed. The theory is based on continuity and differentiability properties of functions involved in the MTFA. Convex analysis techniques are utilized to obtain conditions for differentiability of these functions.
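The Ledermann bound mentioned in this abstract has a closed form in the number of observed variables p, namely (2p + 1 − √(8p + 1))/2. A minimal sketch (the function name is ours):

```python
import math

def ledermann_bound(p):
    # Smallest rank r to which the diagonal-reduction problem is
    # almost surely solvable for a p x p covariance matrix.
    return (2 * p + 1 - math.sqrt(8 * p + 1)) / 2

print(ledermann_bound(6))   # 3.0 for six observed variables
```

Non-integer values are rounded up in practice; for p = 9 the bound is about 5.23, so at least 6 factors are needed.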

2.
In the last decade several authors have discussed so-called minimum trace factor analysis (MTFA), which provides the greatest lower bound (g.l.b.) to reliability. However, MTFA fails to be scale free. In this paper we propose to solve the scale problem by maximizing the g.l.b. as a function of weights. Closely related to the primal problem of g.l.b. maximization is the dual problem. We investigate the primal and dual problems using convex analysis techniques. The asymptotic distribution of the maximal g.l.b. is obtained, provided the population covariance matrix satisfies some uniqueness and regularity assumptions. Finally, we outline computational algorithms and consider numerical examples. I wish to express my gratitude to Dr. A. Melkman for the idea of Theorem 3.3.

3.
A concept of approximate minimum rank for a covariance matrix is defined, which contains the (exact) minimum rank as a special case. A computational procedure to evaluate the approximate minimum rank is offered. The procedure yields those proper communalities for which the unexplained common variance, ignored in low-rank factor analysis, is minimized. The procedure also permits a numerical determination of the exact minimum rank of a covariance matrix, within limits of computational accuracy. A set of 180 covariance matrices with known or bounded minimum rank was analyzed. The procedure was successful throughout in recovering the desired rank. The authors are obliged to Paul Bekker for stimulating and helpful comments.

4.
This paper discusses the advantages and problems related to factor analysis by minimizing residuals (minres). It is shown that this method fails if the starting point of iterations is not well chosen. A suitable starting point is suggested.

5.
The minimum‐diameter partitioning problem (MDPP) seeks to produce compact clusters, as measured by an overall goodness‐of‐fit measure known as the partition diameter, which represents the maximum dissimilarity between any two objects placed in the same cluster. Complete‐linkage hierarchical clustering is perhaps the best‐known heuristic method for the MDPP and has an extensive history of applications in psychological research. Unfortunately, this method has several inherent shortcomings that impede the model selection process, such as: (1) sensitivity to the input order of the objects, (2) failure to obtain a globally optimal minimum‐diameter partition when cutting the tree at K clusters, and (3) the propensity for a large number of alternative minimum‐diameter partitions for a given K. We propose that each of these problems can be addressed by applying an algorithm that finds all of the minimum‐diameter partitions for different values of K. Model selection is then facilitated by considering, for each value of K, the reduction in the partition diameter, the number of alternative optima, and the partition agreement among the alternative optima. Using five examples from the empirical literature, we show the practical value of the proposed process for facilitating model selection for the MDPP.
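The partition diameter defined in this abstract is simple to evaluate for any candidate clustering; a minimal sketch (function and variable names are ours, and this is only the criterion, not the paper's partitioning algorithm):

```python
import numpy as np

def partition_diameter(D, labels):
    # Maximum dissimilarity between any two objects that share a cluster.
    # D: symmetric dissimilarity matrix; labels: cluster label per object.
    labels = np.asarray(labels)
    diam = 0.0
    for k in np.unique(labels):
        idx = np.where(labels == k)[0]
        if len(idx) > 1:
            diam = max(diam, D[np.ix_(idx, idx)].max())
    return diam

D = np.array([[0, 1, 5],
              [1, 0, 6],
              [5, 6, 0]], dtype=float)
print(partition_diameter(D, [0, 0, 1]))  # 1.0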

6.
A common problem for both principal component analysis and image component analysis is determining how many components to retain. A number of solutions have been proposed, none of which is totally satisfactory. An alternative solution which employs a matrix of partial correlations is considered. No components are extracted after the average squared partial correlation reaches a minimum. This approach gives an exact stopping point, has a direct operational interpretation, and can be applied to any type of component analysis. The method is most appropriate when component analysis is employed as an alternative to, or a first-stage solution for, factor analysis.
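This stopping rule is usually known as the minimum average partial (MAP) test; a minimal sketch of the computation as we understand the standard formulation (function names are ours):

```python
import numpy as np

def avg_squared_partials(R):
    # For m = 1..p-1, partial the first m principal components out of the
    # correlation matrix R and record the average squared off-diagonal
    # partial correlation; extraction stops where this sequence is minimal.
    p = R.shape[0]
    vals, vecs = np.linalg.eigh(R)
    vals, vecs = vals[::-1], vecs[:, ::-1]        # descending eigenvalues
    out = []
    for m in range(1, p):
        L = vecs[:, :m] * np.sqrt(vals[:m])       # component loadings
        C = R - L @ L.T                           # residual covariance
        d = np.sqrt(np.diag(C))
        partial = C / np.outer(d, d)              # partial correlations
        off = partial[~np.eye(p, dtype=bool)]
        out.append(float(np.mean(off ** 2)))
    return out

# One-component structure: after m = 1 every partial equals -1/3 exactly
R = np.full((4, 4), 0.8) + 0.2 * np.eye(4)
print(round(avg_squared_partials(R)[0], 4))  # 0.1111
```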

7.
Some relations between maximum likelihood factor analysis and factor indeterminacy are discussed. Bounds are derived for the minimum average correlation between equivalent sets of correlated factors, which depend on the latent roots of the factor intercorrelation matrix. Empirical examples are presented to illustrate some of the theory and to indicate the extent to which it can be expected to be relevant in practice.

8.
The Tower of London (TOL) task has been widely used in both clinical and research realms. In the current study, 104 healthy participants attempted all possible moderate- to high-difficulty TOL problems in order to determine: (1) optimal measures of problem solving performance, (2) problem characteristics, other than the minimum moves necessary to solve the problem, that determine participants’ difficulty in solving problems successfully, quickly, and efficiently, and (3) effects of increased task experience on which problem characteristics determine problem difficulty. A factor analysis of six performance measures found that, regardless of task experience, problem difficulty could be captured well either by a single factor corresponding to general quality of solution or possibly by three subordinate factors corresponding to solution efficiency, solution speed, and initial planning speed. Regression analyses predicting these performance factors revealed that in addition to a problem’s minimum moves three problem parameters were critical in determining the problem difficulty: goal position hierarchy, start position hierarchy, and number of solution paths available. The relative contributions of each of the characteristics strongly depended on which performance factor defined performance. We conclude that TOL problem performance is multifaceted, and that classifying problem difficulty using only the minimum moves necessary to solve the problem is inadequate.

9.
FACTOR: A computer program to fit the exploratory factor analysis model
Exploratory factor analysis (EFA) is one of the most widely used statistical procedures in psychological research. It is a classic technique, but statistical research into EFA is still quite active, and various new developments and methods have been presented in recent years. The authors of the most popular statistical packages, however, do not seem very interested in incorporating these new advances. We present the program FACTOR, which was designed as a general, user-friendly program for computing EFA. It implements traditional procedures and indices and incorporates the benefits of some more recent developments. Two of the traditional procedures implemented are polychoric correlations and parallel analysis, the latter of which is considered to be one of the best methods for determining the number of factors or components to be retained. Good examples of the most recent developments implemented in our program are (1) minimum rank factor analysis, which is the only factor method that allows one to compute the proportion of variance explained by each factor, and (2) the simplimax rotation method, which has proved to be the most powerful rotation method available. Of these methods, only polychoric correlations are available in some commercial programs. A copy of the software, a demo, and a short manual can be obtained free of charge from the first author.
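Parallel analysis, one of the procedures the abstract highlights, compares sample eigenvalues with those of random data of the same dimensions. A minimal sketch of Horn's basic version (not necessarily FACTOR's exact implementation; names and simulated data are ours):

```python
import numpy as np

def parallel_analysis(X, n_sims=200, q=0.95, seed=0):
    # Retain as many components as have sample correlation-matrix
    # eigenvalues exceeding the q-quantile of eigenvalues obtained
    # from random normal data of the same n x p shape.
    rng = np.random.default_rng(seed)
    n, p = X.shape
    obs = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))[::-1]
    sims = np.empty((n_sims, p))
    for s in range(n_sims):
        Z = rng.standard_normal((n, p))
        sims[s] = np.linalg.eigvalsh(np.corrcoef(Z, rowvar=False))[::-1]
    return int(np.sum(obs > np.quantile(sims, q, axis=0)))

# Six variables driven by one common factor -> one component retained
rng = np.random.default_rng(42)
f = rng.standard_normal((300, 1))
X = 0.9 * f + 0.4 * rng.standard_normal((300, 6))
print(parallel_analysis(X))  # 1
```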

10.
This paper presents a new method of determining the minimum rank in factor analysis, appropriate to the principal axes solution. The new method is compared with a former method which, with some adjustment, is more convenient for the centroid approach. Both methods are applied to two familiar examples.

11.
Six main methods are used to decide the number of factors to extract in exploratory factor analysis: Bartlett's test, the K1 rule, the scree test, Aaker's rule, parallel analysis (PA), and the minimum average partial (MAP) test. Using real data from a learning-process questionnaire administered to 395 university students (Sample 1), these six methods extracted 7, 4, 2, 4, 3, and 2 factors, respectively. Confirmatory factor analysis of questionnaire data from a second sample of 383 university students (Sample 2) showed that the two-factor models extracted by the scree test and the MAP method were more satisfactory. The study indicates that factor extraction must balance the principle of parsimony with the principle of completeness, and that the number of factors should also be decided in light of theoretical constructs, domain knowledge, and experience.

12.
Four adult humans chose repeatedly between a fixed-time schedule (of points later exchangeable for money) and a progressive-time schedule that began at 0 s and increased by a fixed number of seconds with each point delivered by that schedule. Each point delivered by the fixed-time schedule reset the requirements of the progressive-time schedule to its minimum value. Subjects were provided with instructions that specified a particular sequence of choices. Under the initial conditions, the instructions accurately specified the optimal choice sequence. Thus, control by instructions and optimal control by the programmed contingencies both supported the same performance. To distinguish the effects of instructions from schedule sensitivity, the correspondence between the instructed and optimal choice patterns was gradually altered across conditions by varying the step size of the progressive-time schedule while maintaining the same instructions. Step size was manipulated, typically in 1-s units, first in an ascending and then in a descending sequence of conditions. Instructions quickly established control in all 4 subjects but, by narrowing the range of choice patterns, they reduced subsequent sensitivity to schedule changes. Instructional control was maintained across the ascending sequence of progressive-time values for each subject, but eventually diminished, giving way to more schedule-appropriate patterns. The transition from instruction-appropriate to schedule-appropriate behavior was characterized by an increase in the variability of choice patterns and local increases in point density. On the descending sequence of progressive-time values, behavior appeared to be schedule sensitive, sometimes even optimally sensitive, but it did not always change systematically with the contingencies, suggesting the involvement of other factors.

13.
An alternative interpretation is offered of some factor analytic studies of ratings of aesthetic stimuli. Earlier interpretation has been in terms of two orthogonal factors, Evaluation and Activity. The re-interpretation is based on the circular, fan-like structure of the factor plot, reflected in a high degree of order in the correlation matrix (the circumplex structure), the semantically meaningful progression of the bipolar adjective scales around the circle, and the fact that the stimulus objects in some cases are located along an ordered continuum in the factor space. Coombs and Kao's application of unfolding concepts to factor analysis of preferential choice data and Guttman's concept of order factor analysis are applied. It is proposed that the ratings reflect one underlying stimulus dimension, objective complexity, and that the rating scales represent ideal points along this dimension. The scales Pleasing, Interesting, and Complex are hypothesized to reach their ideal points at increasingly higher levels of objective complexity. Ten of the 13 factor analyses studied were in agreement with the hypothesis.

14.
Interpersonally incomparable responses pose a significant problem for survey researchers. If the manifest responses of individuals differ from their underlying true responses by monotonic transformations which vary from person to person, then analyzing the covariances of the manifest responses with tools such as factor analysis may yield incorrect results. Satisfactory results for interpersonally incomparable ordinal responses can be obtained by assuming that rankings are based upon a set of multivariate normal latent variables which satisfy the factor or ideal point models of choice. Two statistical methods based upon these assumptions are described; their statistical properties are explored; and their computational feasibility is demonstrated in some simulations. We conclude that it is possible to develop methods for factor and ideal point analysis of interpersonally incomparable ordinal data. This research was begun in the supportive environment of the Survey Research Center at the University of California, Berkeley. Financial support was provided by Percy Tannenbaum, Director of the Center; by Allan Sindler, Dean of the Graduate School of Public Policy at Berkeley; by the Data Center of Harvard University; and by the National Science Foundation through grant number SES-84-03056. Chris Achen, Doug Rivers, and members of the Harvard-MIT econometrics seminar provided useful comments.

15.
We study some operations that may be defined using the minimum operator in the context of a Heyting algebra. Our motivation comes from the fact that 1) already known compatible operations, such as the successor by Kuznetsov, the minimum dense by Smetanich and the operation G by Gabbay may be defined in this way, though almost never explicitly noted in the literature; 2) defining operations in this way is equivalent, from a logical point of view, to two clauses, one corresponding to an introduction rule and the other to an elimination rule, thus providing a manageable way to deal with these operations. Our main result is negative: all operations that arise turn out to be Heyting terms or the mentioned already known operations or operations interdefinable with them. However, it should be noted that some of the operations that arise may exist even if the known operations do not. We also study the extension of Priestley duality to Heyting algebras enriched with the new operations.

16.
Meta-regression models are widely used to identify moderator variables. Starting from the principles of meta-analysis, this paper introduces the meta-regression model and then uses Monte Carlo simulation to examine, in terms of statistical power and estimation accuracy, how the number of effect sizes affects parameter estimation in meta-regression, thereby establishing the minimum number of effect sizes required. The main findings are: (1) the Wald-type z test is prone to Type I errors in meta-regression; (2) at least 20 effect sizes are needed to meet parameter-estimation requirements; (3) including an appropriate moderator reduces the number of effect sizes required. Based on these results, we recommend that: (1) researchers use the Wald-type z test and the CMA software with caution; (2) researchers collect at least 20 effect sizes, and increase this number further as circumstances require; (3) researchers actively explore appropriate moderators; (4) reviewers may use the minimum effect-size requirement to assess the quality of meta-regression studies.

17.
Gómez CX, Carvajal CC. Psicothema, 2012, 24(2): 302-309.
This paper summarizes how to conduct a factor analysis when the input data are ipsative. Classical factor analysis procedures cannot be used because the covariance matrix is singular. Additionally, previous research on the optimal conditions for conducting factor analysis of ipsatized data is reviewed, and the results of a simulation study are presented. The study includes conditions of sample size, model complexity, and model specification (correct vs. incorrect). The results suggest that researchers should be careful when factor analyzing ipsatized data, particularly if they suspect that the model is incorrectly specified and includes a smaller number of factors.
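The singularity mentioned in this abstract is easy to see: ipsatizing forces every respondent's scores to sum to zero, which removes one dimension from the covariance matrix. A minimal illustration (the simulated data are ours):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 5))
X_ips = X - X.mean(axis=1, keepdims=True)   # ipsatize: centre each row
S = np.cov(X_ips, rowvar=False)

# Every row of X_ips sums to zero, so the all-ones vector lies in the
# null space of S and the covariance matrix is singular.
print(np.linalg.matrix_rank(S))  # 4, not 5: one dimension is lost
```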

18.
For assigning subjects to treatments, the point of intersection of within-group regression lines is ordinarily used as the critical point. This decision rule is criticized and, for several utility functions and any number of treatments, replaced by optimal monotone, nonrandomized (Bayes) rules. Treatments both with and without mastery scores are considered. Moreover, the effect of unreliable criterion scores on the optimal decision rule is examined, and it is illustrated how qualitative information can be combined with aptitude measurements to improve treatment assignment decisions. Although the models in this paper are presented with special reference to the aptitude-treatment interaction problem in education, it is indicated that they apply to a variety of situations in which subjects are assigned to treatments on the basis of some predictor score, as long as there are no allocation quota considerations.
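The traditional critical point criticized in this abstract is simply the aptitude score at which the two within-group regression lines cross; a minimal sketch with illustrative intercepts and slopes (all numbers are ours):

```python
def intersection_point(a1, b1, a2, b2):
    # Aptitude score x at which a1 + b1*x == a2 + b2*x.
    if b1 == b2:
        raise ValueError("parallel regression lines never intersect")
    return (a2 - a1) / (b1 - b2)

# Treatment A: y = 2 + 1.0x; Treatment B: y = 5 + 0.5x
print(intersection_point(2.0, 1.0, 5.0, 0.5))  # 6.0
```

Below an aptitude of 6.0, Treatment B predicts the higher outcome; above it, Treatment A does — which is why this single cut point is the conventional assignment rule the paper replaces with Bayes rules.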

19.
20.
