Similar Documents
Found 20 similar documents.
1.
A method is developed to investigate the additive structure of data that (a) may be measured at the nominal, ordinal or cardinal levels, (b) may be obtained from either a discrete or continuous source, (c) may have known degrees of imprecision, or (d) may be obtained in unbalanced designs. The method also permits experimental variables to be measured at the ordinal level. It is shown that the method is convergent, and includes several previously proposed methods as special cases. Both Monte Carlo and empirical evaluations indicate that the method is robust. This research was supported in part by grant MH-10006 from the National Institute of Mental Health to the Psychometric Laboratory of the University of North Carolina. We wish to thank Thomas S. Wallsten for comments on an earlier draft of this paper. Copies of the paper and of ADDALS, a program to perform the analyses discussed herein, may be obtained from the second author.
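For orientation, the sketch below illustrates the general alternating-least-squares-with-optimal-scaling idea on a simple two-way additive model: a least squares model step alternates with a monotone (isotonic) regression step that rescales the ordinal data. It is a minimal illustration, not the ADDALS program; the data layout, normalization, and iteration count are assumptions made for the example.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(0)

# Hypothetical ordinal two-way data (8 rows x 6 columns), e.g. ranks 1..5
raw = rng.integers(1, 6, size=(8, 6)).astype(float)

z = (raw - raw.mean()) / raw.std()           # initial optimally scaled data
for _ in range(50):
    # Model step: least squares row and column effects of an additive model
    mu = z.mean()
    a = z.mean(axis=1) - mu                  # row effects
    b = z.mean(axis=0) - mu                  # column effects
    pred = mu + a[:, None] + b[None, :]

    # Optimal scaling step: monotone (isotonic) regression of the model
    # predictions against the rank order of the raw data
    z = IsotonicRegression().fit_transform(raw.ravel(), pred.ravel()).reshape(raw.shape)
    z = (z - z.mean()) / z.std()             # renormalize to avoid a trivial solution

print("normalized residual:", round(float(np.sum((z - pred) ** 2) / np.sum(z ** 2)), 4))
```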

2.
A new procedure is discussed which fits either the weighted or simple Euclidean model to data that may (a) be defined at either the nominal, ordinal, interval or ratio levels of measurement; (b) have missing observations; (c) be symmetric or asymmetric; (d) be conditional or unconditional; (e) be replicated or unreplicated; and (f) be continuous or discrete. Various special cases of the procedure include the most commonly used individual differences multidimensional scaling models, the familiar nonmetric multidimensional scaling model, and several other previously undiscussed variants. The procedure optimizes the fit of the model directly to the data (not to scalar products determined from the data) by an alternating least squares procedure which is convergent, very quick, and relatively free from local minimum problems. The procedure is evaluated via both Monte Carlo and empirical data. It is found to be robust in the face of measurement error, capable of recovering the true underlying configuration in the Monte Carlo situation, and capable of obtaining structures equivalent to those obtained by other less general procedures in the empirical situation. This project was supported in part by Research Grant No. MH10006 and Research Grant No. MH26504, awarded by the National Institute of Mental Health, DHEW. We wish to thank Robert F. Baker, J. Douglas Carroll, Joseph Kruskal, and Amnon Rapoport for comments on an earlier draft of this paper. Portions of the research reported here were presented to the spring meeting of the Psychometric Society, 1975. ALSCAL, a program to perform the computations discussed in this paper, may be obtained from any of the authors. Jan de Leeuw is currently at Datatheorie, Central Rekeninstituut, Wassenaarseweg 80, Leiden, The Netherlands. Yoshio Takane can be reached at the Department of Psychology, University of Tokyo, Tokyo, Japan.

3.
4.
Homogeneity analysis, or multiple correspondence analysis, is usually applied to k separate variables. In this paper we apply it to sets of variables by using sums within sets. The resulting technique is called OVERALS. It uses the notion of optimal scaling, with transformations that can be multiple or single. The single transformations consist of three types: nominal, ordinal, and numerical. The corresponding OVERALS computer program minimizes a least squares loss function by using an alternating least squares algorithm. Many existing linear and nonlinear multivariate analysis techniques are shown to be special cases of OVERALS. An application to data from an epidemiological survey is presented. This research was partly supported by SWOV (Institute for Road Safety Research) in Leidschendam, The Netherlands.
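As a point of reference, the fragment below sketches ordinary homogeneity analysis (multiple correspondence analysis) applied to k separate variables, computed as a correspondence analysis of the indicator matrix; it does not implement the OVERALS generalization to sets of variables or the single ordinal/numerical transformations. The simulated categorical data, numbers of categories, and two-dimensional solution are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
# Hypothetical categorical data: 100 objects, three variables with 3, 4 and 2 categories
levels = (3, 4, 2)
data = np.column_stack([rng.integers(0, k, 100) for k in levels])

# Indicator (dummy) coding, one block of columns per variable
G = np.hstack([np.eye(k)[data[:, j]] for j, k in enumerate(levels)])

# Homogeneity analysis as a correspondence analysis of the indicator matrix
P = G / G.sum()
r, c = P.sum(axis=1), P.sum(axis=0)
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))      # standardized residuals
U, s, Vt = np.linalg.svd(S, full_matrices=False)

object_scores = (U / np.sqrt(r)[:, None])[:, :2] * s[:2]            # two dimensions
category_quantifications = (Vt.T / np.sqrt(c)[:, None])[:, :2] * s[:2]
print(np.round(category_quantifications, 3))
```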

5.
A review of the existing techniques for the analysis of three-way data revealed that none were appropriate to the wide variety of data usually encountered in psychological research, and few were capable of both isolating common information and systematically describing individual differences. An alternating least squares algorithm was proposed to fit both an individual difference model and a replications component model to three-way data which may be defined at the nominal, ordinal, interval, ratio, or mixed measurement level; which may be discrete or continuous; and which may be unconditional, matrix conditional, or row conditional. This algorithm was evaluated by a Monte Carlo study. Recovery of the original information was excellent when the correct measurement characteristics were assumed. Furthermore, the algorithm was robust to the presence of random error. In addition, the algorithm was used to fit the individual difference model to a real, binary, subject conditional data set. The findings from this application were consistent with previous research in the area of implicit personality theory and uncovered interesting systematic individual differences in the perception of political figures and roles. This paper is part of a thesis performed by Richard Sands under the direction of Forrest Young at the L. L. Thurstone Psychometric Laboratory, University of North Carolina at Chapel Hill. Thanks are extended to Drs. Charles Schmidt and Andrea Sedlak for the use of their political role data set.

6.
An individual differences additive model is discussed which represents individual differences in additivity by differential weighting of additive factors. A procedure for estimating the model parameters for various data measurement characteristics is developed. The procedure is evaluated using both Monte Carlo and real data. The method is found to be very useful in describing certain types of developmental change in cognitive structure, as well as being numerically robust and efficient. The work reported here was partly supported by Grant A6394 to the first author by the Natural Sciences and Engineering Research Council of Canada.

7.
This paper develops a method of optimal scaling for multivariate ordinal data, in the framework of a generalized principal component analysis. This method yields a multidimensional configuration of items, a unidimensional scale of category weights for each item and, optionally, a multidimensional configuration of subjects. The computation is performed by alternately solving an eigenvalue problem and executing a quasi-Newton projection method. The algorithm is extended for analysis of data with mixed measurement levels or for analysis with a combined weighting of items. Numerical examples and simulations are provided. The algorithm is discussed and compared with some related methods. Earlier results of this research appeared in Saito and Otsu (1983). The authors would like to acknowledge the helpful comments and encouragement of the editor.

8.
Kroonenberg and de Leeuw (1980) have developed an alternating least-squares method TUCKALS-3 as a solution for Tucker's three-way principal components model. The present paper offers some additional features of their method. Starting from a reanalysis of Tucker's problem in terms of a rank-constrained regression problem, it is shown that the fitted sum of squares in TUCKALS-3 can be partitioned according to elements of each mode of the three-way data matrix. An upper bound to the total fitted sum of squares is derived. Finally, a special case of TUCKALS-3 is related to the Carroll/Harshman CANDECOMP/PARAFAC model.

9.
Multidimensional scaling has recently been enhanced so that data defined at only the nominal level of measurement can be analyzed. The efficacy of ALSCAL, an individual differences multidimensional scaling program which can analyze data defined at the nominal, ordinal, interval and ratio levels of measurement, is the subject of this paper. A Monte Carlo study is presented which indicates that (a) if we know the correct level of measurement then ALSCAL can be used to recover the metric information presumed to underlie the data; and that (b) if we do not know the correct level of measurement then ALSCAL can be used to determine the correct level and to recover the underlying metric structure. This study also indicates, however, that with nominal data ALSCAL is quite likely to obtain solutions which are not globally optimal, and that in these cases the recovery of metric structure is quite poor. A second study is presented which isolates the potential cause of these problems and forms the basis for a suggested modification of the ALSCAL algorithm which should reduce the frequency of locally optimal solutions.

10.
A new method to estimate the parameters of Tucker's three-mode principal component model is discussed, and the convergence properties of the alternating least squares algorithm to solve the estimation problem are considered. A special case of the general Tucker model, in which the principal component analysis is only performed over two of the three modes, is briefly outlined as well. The Miller & Nicely data on the confusion of English consonants are used to illustrate the programs TUCKALS3 and TUCKALS2 which incorporate the algorithms for the two models described.
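To make the alternating least squares idea concrete, here is a minimal HOOI-style sketch of fitting the Tucker3 model with numpy. It follows the same alternating logic but is not the TUCKALS3/TUCKALS2 code; the array dimensions, ranks, and iteration count are arbitrary assumptions for the example.

```python
import numpy as np

def unfold(X, mode):
    """Mode-n matricization of a three-way array."""
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def multiply_mode(X, M, mode):
    """Multiply array X along the given mode by the matrix M."""
    return np.moveaxis(np.tensordot(M, X, axes=(1, mode)), 0, mode)

def tucker3_als(X, ranks, n_iter=100):
    """HOOI-style alternating least squares sketch for the Tucker3 model."""
    U = [np.linalg.svd(unfold(X, m), full_matrices=False)[0][:, :r]
         for m, r in enumerate(ranks)]
    for _ in range(n_iter):
        for m in range(3):
            Y = X
            for k in range(3):
                if k != m:
                    Y = multiply_mode(Y, U[k].T, k)   # project onto the other modes
            U[m] = np.linalg.svd(unfold(Y, m), full_matrices=False)[0][:, :ranks[m]]
    G = X
    for k in range(3):
        G = multiply_mode(G, U[k].T, k)               # core array
    return U, G

X = np.random.default_rng(1).normal(size=(10, 8, 6))
U, G = tucker3_als(X, ranks=(3, 3, 2))
Xhat = np.einsum('abc,ia,jb,kc->ijk', G, *U)
print("fitted sum of squares ratio:", round(float(np.sum(Xhat ** 2) / np.sum(X ** 2)), 4))
```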

11.
An important feature of distance-based principal components analysis is that the variables can be optimally transformed. For monotone spline transformation, a nonnegative least-squares problem with a length constraint has to be solved in each iteration. As an alternative algorithm to Lawson and Hanson (1974), we propose the Alternating Length-Constrained Non-Negative Least-Squares (ALC-NNLS) algorithm, which minimizes the nonnegative least-squares loss function over the parameters under a length constraint, by alternatingly minimizing over one parameter while keeping the others fixed. Several properties of the new algorithm are discussed. A Monte Carlo study is presented which shows that for most cases in distance-based principal components analysis, ALC-NNLS performs as well as the method of Lawson and Hanson or sometimes even better in terms of the quality of the solution. Supported by The Netherlands Organization for Scientific Research (NWO) by grant nr. 030-56403 for the “PIONEER” project “Subject Oriented Multivariate Analysis” to the third author. We would like to thank the anonymous referees for their valuable remarks that have improved the quality of this paper.
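For flavor, the fragment below sketches the one-parameter-at-a-time nonnegative least squares idea with numpy; the length constraint is imposed here only by a crude final rescaling, which is a simplification and not the ALC-NNLS update described by the authors. The simulated design matrix, response, and target length are assumptions for the example.

```python
import numpy as np

def nnls_coordinate_descent(X, y, n_sweeps=200):
    """Minimize ||y - X b||^2 over b >= 0, one coordinate at a time."""
    n, p = X.shape
    b = np.zeros(p)
    col_ss = np.sum(X ** 2, axis=0)
    for _ in range(n_sweeps):
        for j in range(p):
            r_j = y - X @ b + X[:, j] * b[j]          # residual excluding coordinate j
            b[j] = max(0.0, float(X[:, j] @ r_j) / col_ss[j])
    return b

rng = np.random.default_rng(2)
X = rng.normal(size=(50, 5))
y = X @ np.array([1.0, 0.0, 2.0, 0.0, 0.5]) + 0.1 * rng.normal(size=50)

b = nnls_coordinate_descent(X, y)
# Crude stand-in for the length constraint: rescale the fitted vector to unit length
b *= 1.0 / np.linalg.norm(X @ b)
print(np.round(b, 3))
```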

12.
The DEDICOM model is a model for representing asymmetric relations among a set of objects by means of a set of coordinates for the objects on a limited number of dimensions. The present paper offers an alternating least squares algorithm for fitting the DEDICOM model. The model can be generalized to represent any number of sets of relations among the same set of objects. An algorithm for fitting this three-way DEDICOM model is provided as well. Based on the algorithm for the three-way DEDICOM model, an algorithm is developed for fitting the IDIOSCAL model in the least squares sense. The author is obliged to Jos ten Berge and Richard Harshman.
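As a hedged illustration of the model itself (not of the paper's alternating least squares algorithm), the sketch below fits the two-way DEDICOM form B ≈ A R A' in the least squares sense with a general-purpose optimizer; the data are simulated, and the number of objects and dimensions are arbitrary assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
n, p = 8, 2                                   # 8 objects, 2 dimensions (assumed)
A_true = rng.normal(size=(n, p))
R_true = rng.normal(size=(p, p))
B = A_true @ R_true @ A_true.T + 0.05 * rng.normal(size=(n, n))   # asymmetric data

def loss(theta):
    """Least squares loss for the DEDICOM form B ~ A R A'."""
    A = theta[:n * p].reshape(n, p)
    R = theta[n * p:].reshape(p, p)
    return np.sum((B - A @ R @ A.T) ** 2)

result = minimize(loss, rng.normal(size=n * p + p * p), method="L-BFGS-B")
print("residual sum of squares:", round(float(result.fun), 4))
```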

13.
The paper derives sufficient conditions for the consistency and asymptotic normality of the least squares estimator of a trilinear decomposition model for multiway data analysis.
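To fix ideas about the estimator in question, here is a minimal numpy sketch of the least squares fit of a trilinear (CANDECOMP/PARAFAC) decomposition by alternating least squares. It illustrates the estimator whose asymptotic behavior the paper studies, not the paper's derivations; the array sizes, rank, and noise level are illustrative assumptions.

```python
import numpy as np

def unfold(X, mode):
    """Mode-n matricization of a three-way array."""
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def khatri_rao(A, B):
    """Column-wise Kronecker product."""
    return np.einsum('ir,jr->ijr', A, B).reshape(-1, A.shape[1])

def cp_als(X, rank, n_iter=200, seed=0):
    """Least squares fit of the trilinear (CANDECOMP/PARAFAC) model by ALS."""
    rng = np.random.default_rng(seed)
    A, B, C = (rng.normal(size=(n, rank)) for n in X.shape)
    for _ in range(n_iter):
        A = unfold(X, 0) @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = unfold(X, 1) @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = unfold(X, 2) @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C

rng = np.random.default_rng(1)
A0, B0, C0 = (rng.normal(size=(n, 2)) for n in (10, 8, 6))
X = np.einsum('ir,jr,kr->ijk', A0, B0, C0) + 0.01 * rng.normal(size=(10, 8, 6))
A, B, C = cp_als(X, rank=2)
resid = X - np.einsum('ir,jr,kr->ijk', A, B, C)
print("relative error:", round(float(np.sqrt(np.sum(resid ** 2) / np.sum(X ** 2))), 4))
```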

14.
We discuss a variety of methods for quantifying categorical multivariate data. These methods have been proposed in many different countries, by many different authors, under many different names. In the first major section of the paper we analyze the many different methods and show that they all lead to the same equations for analyzing the same data. In the second major section of the paper we introduce the notion of a duality diagram, and use this diagram to synthesize the many superficially different methods into a single method. The ideas in this paper were worked out by the first author, with some suggestions provided by the second. The current version of this paper has evolved from three previous versions, the first two written by the first author.

15.
Bailey and Gower examined the least squares approximation C to a symmetric matrix B, when the squared discrepancies for diagonal elements receive specific nonunit weights. They focussed on mathematical properties of the optimal C, in constrained and unconstrained cases, rather than on how to obtain C for any given B. In the present paper a computational solution is given for the case where C is constrained to be positive semidefinite and of a fixed rank r or less. The solution is based on weakly constrained linear regression analysis. The authors are obliged to John C. Gower for stimulating this research.

16.
The recent history of multidimensional data analysis suggests two distinct traditions that have developed along quite different lines. In multidimensional scaling (MDS), the available data typically describe the relationships among a set of objects in terms of similarity/dissimilarity (or (pseudo-)distances). In multivariate analysis (MVA), data usually result from observation on a collection of variables over a common set of objects. This paper starts from a very general multidimensional scaling task, defined on distances between objects derived from one or more sets of multivariate data. Particular special cases of the general problem, following familiar notions from MVA, will be discussed that encompass a variety of analysis techniques, including the possible use of optimal variable transformation. Throughout, it will be noted how certain data analysis approaches are equivalent to familiar MVA solutions when particular problem specifications are combined with particular distance approximations. This research was supported by the Royal Netherlands Academy of Arts and Sciences (KNAW). An earlier version of this paper was written during a stay at McGill University in Montréal; this visit was supported by a travel grant from the Netherlands Organization for Scientific Research (NWO). I am grateful to Jim Ramsay and Willem Heiser for their encouragement and helpful suggestions, and to the Editor and referees for their constructive comments.

17.
Millsap and Meredith (1988) have developed a generalization of principal components analysis for the simultaneous analysis of a number of variables observed in several populations or on several occasions. The algorithm they provide has some disadvantages. The present paper offers two alternating least squares algorithms for their method, suitable for small and large data sets, respectively. Lower and upper bounds are given for the loss function to be minimized in the Millsap and Meredith method. These can serve to indicate whether or not a global optimum for the simultaneous components analysis problem has been attained. Financial support by the Netherlands Organization for Scientific Research (NWO) is gratefully acknowledged.

18.
19.
A common criticism of iterative least squares estimates of communality is that the method of initial estimation may influence the stabilized values. As little systematic research on this topic has been performed, the criticism appears to be based on accumulated experience with empirical data sets. In the present paper, two studies are reported in which four types of initial estimate (unities, squared multiple correlations, highest r, and zeroes) and four levels of convergence criterion were employed using four widely available computer packages (BMDP, SAS, SPSS, and SOUPAC). The results suggest that initial estimates have no effect on stabilized communality estimates when a stringent criterion for convergence is used, whereas initial estimates appear to affect stabilized values employing rather gross convergence criteria. There were no differences among the four computer packages for matrices without Heywood cases.
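For context, the sketch below runs iterated principal-factor communality estimation from two of the four initial estimates mentioned (unities and squared multiple correlations) under a stringent convergence criterion. It is a minimal numpy illustration, not the BMDP/SAS/SPSS/SOUPAC routines; the simulated correlation matrix, number of factors, and tolerance are assumptions for the example.

```python
import numpy as np

def iterate_communalities(R, n_factors, h2_init, tol=1e-8, max_iter=2000):
    """Iterated principal-factor estimation of communalities."""
    h2 = h2_init.copy()
    for _ in range(max_iter):
        Rh = R.copy()
        np.fill_diagonal(Rh, h2)                 # reduced correlation matrix
        vals, vecs = np.linalg.eigh(Rh)
        idx = np.argsort(vals)[::-1][:n_factors] # leading factors
        loadings = vecs[:, idx] * np.sqrt(np.clip(vals[idx], 0, None))
        h2_new = np.sum(loadings ** 2, axis=1)
        if np.max(np.abs(h2_new - h2)) < tol:    # convergence criterion
            return h2_new
        h2 = h2_new
    return h2

rng = np.random.default_rng(3)
L = rng.normal(size=(6, 2))
R = L @ L.T + np.diag(rng.uniform(0.3, 0.8, 6))
R = R / np.sqrt(np.outer(np.diag(R), np.diag(R)))    # scale to a correlation matrix

unities = np.ones(6)
smc = 1.0 - 1.0 / np.diag(np.linalg.inv(R))          # squared multiple correlations
print(np.round(iterate_communalities(R, 2, unities), 4))
print(np.round(iterate_communalities(R, 2, smc), 4))
```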

20.
Points of view analysis (PVA), proposed by Tucker and Messick in 1963, was one of the first methods to deal explicitly with individual differences in multidimensional scaling, but at some point was apparently superseded by the weighted Euclidean model, well known as the Carroll and Chang INDSCAL model. This paper argues that the idea behind points of view analysis deserves new attention, especially as a technique to analyze group differences. A procedure is proposed that can be viewed as a streamlined, integrated version of the Tucker and Messick process, which consisted of a number of separate steps. At the same time, our procedure can be regarded as a particularly constrained weighted Euclidean model. While fitting the model, two types of nonlinear data transformations are feasible, either for given dissimilarities, or for variables from which the dissimilarities are derived. Various applications are discussed, where the two types of transformation can be mixed in the same analysis; a quadratic assignment framework is used to evaluate the results. The research of the first author was supported by the Royal Netherlands Academy of Arts and Sciences (KNAW); the research of the second author by the Netherlands Organization for Scientific Research (NWO Grant 560-267-029). An earlier version of this paper was presented at the European Meeting of the Psychometric Society, Leuven, 1989. We wish to thank Willem J. Heiser for his stimulating comments to earlier versions of this paper, and we are grateful to the Editor and anonymous referees for their helpful suggestions.
