Similar Literature
20 similar records found.
1.
There are two well-known methods for obtaining a guaranteed globally optimal solution to the problem of least-squares unidimensional scaling of a symmetric dissimilarity matrix: (a) dynamic programming, and (b) branch-and-bound. Dynamic programming is generally more efficient than branch-and-bound, but the former is limited to matrices with approximately 26 or fewer objects because of computer memory limitations. We present some new branch-and-bound procedures that improve computational efficiency, and enable guaranteed globally optimal solutions to be obtained for matrices with up to 35 objects. Experimental tests were conducted to compare the relative performances of the new procedures, a previously published branch-and-bound algorithm, and a dynamic programming solution strategy. These experiments, which included both synthetic and empirical dissimilarity matrices, yielded the following findings: (a) the new branch-and-bound procedures were often drastically more efficient than the previously published branch-and-bound algorithm, (b) when computationally feasible, the dynamic programming approach was more efficient than each of the branch-and-bound procedures, and (c) the new branch-and-bound procedures require minimal computer memory and can provide optimal solutions for matrices that are too large for dynamic programming implementation. The authors gratefully acknowledge the helpful comments of three anonymous reviewers and the Editor. We especially thank Larry Hubert and one of the reviewers for providing us with the MATLAB files for optimal and heuristic least-squares unidimensional scaling methods. This revised article was published online in June 2005 with all corrections incorporated.
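For reference, the objective these exact methods optimize is the least-squares unidimensional scaling loss: given a symmetric dissimilarity matrix with entries δ_ij, find coordinates x_1, …, x_n minimizing Σ_{i<j} (δ_ij − |x_i − x_j|)². The Python sketch below is not the authors' branch-and-bound or dynamic-programming code; it is a minimal exhaustive-enumeration illustration (feasible only for very small n) that uses the standard closed-form coordinates for a fixed object order. Function names and the toy data are illustrative.

```python
import itertools
import numpy as np

def ls_coords_for_order(delta, order):
    """Closed-form least-squares coordinates for a fixed object order.

    For a fixed placement order, the optimal centered coordinate of the
    k-th placed object equals (sum of its dissimilarities to objects placed
    before it minus sum to objects placed after it) / n.
    """
    n = len(order)
    d = delta[np.ix_(order, order)]          # dissimilarities in placement order
    before = np.tril(d, -1).sum(axis=1)      # to earlier-placed objects
    after = np.triu(d, 1).sum(axis=1)        # to later-placed objects
    return (before - after) / n

def ls_loss(delta, order, x):
    """Least-squares loss sum_(i<j) (delta_ij - |x_i - x_j|)^2."""
    d = delta[np.ix_(order, order)]
    iu = np.triu_indices(len(order), 1)
    return ((d[iu] - np.abs(x[:, None] - x[None, :])[iu]) ** 2).sum()

def brute_force_uds(delta):
    """Globally optimal unidimensional scaling by exhaustive enumeration."""
    best = (np.inf, None, None)
    for order in itertools.permutations(range(delta.shape[0])):
        x = ls_coords_for_order(delta, list(order))
        loss = ls_loss(delta, list(order), x)
        if loss < best[0]:
            best = (loss, order, x)
    return best

# toy symmetric dissimilarity matrix built from 5 points on a line,
# so the optimal loss should be (numerically) zero
rng = np.random.default_rng(0)
pts = rng.uniform(0, 10, 5)
delta = np.abs(pts[:, None] - pts[None, :])
loss, order, x = brute_force_uds(delta)
print(order, np.round(x, 3), round(loss, 8))
```

Branch-and-bound and dynamic programming replace this full enumeration of orders with pruned or staged searches over the same objective, which is what makes larger matrices tractable.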

2.
This paper presents a new stochastic multidimensional scaling vector threshold model designed to analyze pick any/n choice data (e.g., consumers rendering buy/no buy decisions concerning a number of actual products). A maximum likelihood procedure is formulated to estimate a joint space of both individuals (represented as vectors) and stimuli (represented as points). The relevant psychometric literature concerning the spatial treatment of such binary choice data is reviewed. The nonlinear probit-type model is described, as well as the conjugate gradient procedure used to estimate parameters. Results of Monte Carlo analyses investigating the performance of this methodology with synthetic choice data sets are presented. An application concerning consumer choices for eleven competitive brands of soft drinks is discussed. Finally, directions for future research are presented in terms of further applications and generalizing the model to accommodate three-way choice data.
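As a rough illustration of the kind of likelihood such a model involves, the sketch below assumes a simple probit form P(individual i picks stimulus j) = Φ(aᵢ·xⱼ − c), with individuals as vectors aᵢ, stimuli as points xⱼ, and a single threshold c. This is an assumption about the functional form; the paper's actual specification (e.g., individual-specific thresholds, error structure) and its conjugate-gradient estimation are not reproduced here, and all names are illustrative.

```python
import numpy as np
from scipy.stats import norm

def pick_any_loglik(Y, A, X, c):
    """Log-likelihood of binary pick-any/n data under a simple
    vector/probit threshold model: P(y_ij = 1) = Phi(a_i . x_j - c).

    Y : (n_individuals, n_stimuli) 0/1 choice matrix
    A : (n_individuals, dim) individual vectors
    X : (n_stimuli, dim) stimulus coordinates
    c : scalar threshold
    """
    utilities = A @ X.T - c                          # latent "buy" propensities
    p = norm.cdf(utilities).clip(1e-12, 1 - 1e-12)   # keep logs finite
    return np.sum(Y * np.log(p) + (1 - Y) * np.log(1 - p))

# toy example: 4 individuals, 3 stimuli, 2 dimensions
rng = np.random.default_rng(1)
A = rng.normal(size=(4, 2))
X = rng.normal(size=(3, 2))
Y = (norm.cdf(A @ X.T - 0.2) > rng.uniform(size=(4, 3))).astype(int)
print(round(pick_any_loglik(Y, A, X, 0.2), 3))
```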

3.
A new nonmetric multidimensional scaling method is devised to analyze three-way data concerning inter-stimulus similarities obtained from many subjects. It is assumed that subjects are classified into a small number of clusters and that the stimulus configuration is specific to each cluster. Under this assumption, the classification of subjects and the scaling used to derive the configurations for clusters are simultaneously performed using an alternating least-squares algorithm. The monotone regression of ordinal similarity data, the scaling of stimuli and the K-means clustering of subjects are iterated in the algorithm. The method is assessed using a simulation and its practical use is illustrated with the analysis of real data. Finally, some extensions are considered.
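The sketch below illustrates one plausible form of the subject-reassignment step in such an alternating scheme: each subject is assigned to the cluster whose configuration best fits that subject's ordinal dissimilarities, with disparities obtained by monotone (isotonic) regression. This is an assumption-laden illustration, not the paper's algorithm; the scaling and monotone-regression updates for the cluster configurations themselves are not shown, and all names are illustrative.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

def stress1(dissim, dist):
    """Kruskal Stress-1 of one subject's dissimilarities against the
    distances of a candidate configuration: monotone-regress the distances
    on the dissimilarities, then compare fitted disparities with distances."""
    dhat = IsotonicRegression().fit_transform(dissim, dist)
    return np.sqrt(np.sum((dist - dhat) ** 2) / np.sum(dist ** 2))

def assign_subjects(subject_dissims, configs):
    """Reassign each subject to the cluster whose configuration fits best."""
    labels = []
    for delta in subject_dissims:                       # (n, n) dissimilarity matrix
        iu = np.triu_indices(delta.shape[0], 1)
        stresses = []
        for X in configs:                               # (n, dim) cluster configuration
            dist = np.linalg.norm(X[:, None] - X[None, :], axis=-1)[iu]
            stresses.append(stress1(delta[iu], dist))
        labels.append(int(np.argmin(stresses)))
    return np.array(labels)

# toy demo: two cluster configurations, three subjects (two generated from
# the first configuration, one from the second)
rng = np.random.default_rng(3)
configs = [rng.normal(size=(6, 2)) for _ in range(2)]
def noisy_dissim(X):
    d = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    return d + 0.1 * rng.normal(size=d.shape)
subject_dissims = [noisy_dissim(configs[0]), noisy_dissim(configs[0]),
                   noisy_dissim(configs[1])]
print(assign_subjects(subject_dissims, configs))        # expected: [0 0 1]
```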

4.
The vast majority of existing multidimensional scaling (MDS) procedures devised for the analysis of paired comparison preference/choice judgments are typically based on either scalar product (i.e., vector) or unfolding (i.e., ideal-point) models. Such methods tend to ignore many of the essential components of microeconomic theory, including convex indifference curves, constrained utility maximization, demand functions, et cetera. This paper presents a new stochastic MDS procedure called MICROSCALE that attempts to operationalize many of these traditional microeconomic concepts. First, we briefly review several existing MDS models that operate on paired comparisons data, noting the particular nature of the utility functions implied by each class of models. These utility assumptions are then directly contrasted to those of microeconomic theory. The new maximum likelihood-based procedure, MICROSCALE, is presented, as well as the technical details of the estimation procedure. Results of a Monte Carlo analysis, in which a number of model, data, and error factors are experimentally manipulated to investigate the performance of the algorithm, are provided. Finally, an illustration in consumer psychology concerning a convenience sample of thirty consumers providing paired comparisons judgments for some fourteen brands of over-the-counter analgesics is discussed.

5.
Goodman contributed to the theory of scaling by including a category of intrinsically unscalable respondents in addition to the usual scale-type respondents. However, his formulation permits only error-free responses by respondents from the scale types. This paper presents new scaling models which have the properties that: (1) respondents in the scale types are subject to response errors; (2) a test of significance can be constructed to assist in deciding on the necessity for including an intrinsically unscalable class in the model; and (3) when an intrinsically unscalable class is not needed to explain the data, the model reduces to a probabilistic, rather than to a deterministic, form. Three data sets are analyzed with the new models and are used to illustrate stages of hypothesis testing.
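A common way to formalize this kind of model (sketched below under explicit assumptions, not necessarily the paper's exact specification) is as a finite mixture: the latent classes are the Guttman scale types plus one intrinsically unscalable class; scale-type members respond with item-wise error rates, while unscalable respondents answer items independently. The helper and parameter names are illustrative.

```python
import numpy as np

def pattern_prob(x, pis, err, p_unscalable, pi_u):
    """Probability of one observed 0/1 response pattern x (items ordered
    from easiest to hardest) under a scale-type + unscalable-class model.

    pis          : class proportions for scale types t = 0..J
    err          : per-item response-error rate (probability of flipping the
                   response implied by the scale type)
    p_unscalable : per-item positive-response probabilities in the
                   intrinsically unscalable class
    pi_u         : proportion of intrinsically unscalable respondents
    """
    J = len(x)
    prob = 0.0
    for t, pi_t in enumerate(pis):               # scale type t passes the first t items
        truth = np.array([1] * t + [0] * (J - t))
        match = (x == truth)
        prob += pi_t * np.prod(np.where(match, 1 - err, err))
    prob += pi_u * np.prod(np.where(x == 1, p_unscalable, 1 - p_unscalable))
    return prob

# toy check: 3 items, uniform scale-type proportions, 10% response error
x = np.array([1, 1, 0])
pis = np.full(4, 0.8 / 4)                        # 4 scale types share 80% of respondents
err = np.full(3, 0.10)
print(round(pattern_prob(x, pis, err, np.full(3, 0.5), 0.2), 4))
```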

6.
Multidimensional scaling has recently been enhanced so that data defined at only the nominal level of measurement can be analyzed. The efficacy of ALSCAL, an individual differences multidimensional scaling program which can analyze data defined at the nominal, ordinal, interval and ratio levels of measurement, is the subject of this paper. A Monte Carlo study is presented which indicates that (a) if we know the correct level of measurement then ALSCAL can be used to recover the metric information presumed to underlie the data; and that (b) if we do not know the correct level of measurement then ALSCAL can be used to determine the correct level and to recover the underlying metric structure. This study also indicates, however, that with nominal data ALSCAL is quite likely to obtain solutions which are not globally optimal, and that in these cases the recovery of metric structure is quite poor. A second study is presented which isolates the potential cause of these problems and forms the basis for a suggested modification of the ALSCAL algorithm which should reduce the frequency of locally optimal solutions.

7.
Some historical background and preliminary technical information are first presented, and then a number of hidden, but important, methodological aspects of dual scaling are illustrated and discussed: normed versus projected weights, the amount of information accounted for by each solution, a perfect solution to the problem of multidimensional unfolding, multidimensional quantification space, graphical display, number-of-option problems, option standardization versus item standardization, and asymmetry of symmetric (dual) scaling. Contrary to the common perception that dual scaling and similar quantification methods are now mathematically transparent, the present study demonstrates how much more needs to be clarified for routine use of the method to arrive at valid conclusions. Data analysis must be carried out in such a way that common sense, intuition and sound logic will prevail. Presidential Address delivered at the Annual Meeting of the Psychometric Society, Banff Centre for Conferences, Banff, Alberta, Canada, June 27–30, 1996. The work has been supported in part by a grant from the Natural Sciences and Engineering Research Council of Canada. I am grateful to Ira Nishisato for his comments, Ingram Olkin and Yoshio Takane for important references, and Liqun Xu for computational help.

8.
The proposed method handles the classical method of reciprocal averages (MRA) in a piecewise (item-by-item) mode, whereby one can deal with smaller matrices and attain faster convergence to a solution than the MRA. A new concept, the principle of constant proportionality, is introduced to provide an interesting interpretation for scaling multiple-choice data à la Guttman. A small example is presented for discussion of the technique. This study was supported by the Natural Sciences and Engineering Research Council Canada Grant (No. A4581) to S. Nishisato. The authors are indebted to reviewers for valuable comments.
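For context, the classical MRA that the piecewise procedure modifies can be sketched as follows: starting from arbitrary option scores, respondent scores are computed as the averages of the scores of the options they chose, option scores as the averages of the scores of the respondents choosing them, with a re-standardization each cycle to remove the trivial constant solution. The code below is a minimal illustration on a full indicator matrix (the proposed method instead works item by item); names and normalization conventions are illustrative.

```python
import numpy as np

def reciprocal_averages(Z, n_iter=200, tol=1e-10):
    """Classical method of reciprocal averages on an indicator matrix Z
    (respondents x response options, entries 0/1).

    Row scores are the averages of the column scores of the chosen options;
    column scores are the averages of the row scores of the respondents
    choosing that option. Scores are re-standardized each iteration so the
    procedure converges to the leading non-trivial quantification.
    """
    col = np.random.default_rng(0).normal(size=Z.shape[1])   # arbitrary start
    row_tot = Z.sum(axis=1)
    col_tot = Z.sum(axis=0)
    for _ in range(n_iter):
        row = (Z @ col) / row_tot                             # respondent scores
        col_new = (Z.T @ row) / col_tot                       # option scores
        col_new -= np.average(col_new, weights=col_tot)       # drop trivial solution
        col_new /= np.sqrt(np.average(col_new ** 2, weights=col_tot))
        if np.max(np.abs(col_new - col)) < tol:
            col = col_new
            break
        col = col_new
    row = (Z @ col) / row_tot
    return row, col

# toy indicator matrix: 5 respondents, two items with two options each
Z = np.array([[1, 0, 1, 0],
              [1, 0, 1, 0],
              [0, 1, 1, 0],
              [0, 1, 0, 1],
              [0, 1, 0, 1]], dtype=float)
row_scores, col_scores = reciprocal_averages(Z)
print(np.round(row_scores, 3), np.round(col_scores, 3))
```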

9.
This paper is concerned with the development of a measure of the precision of a multidimensional euclidean structure. The measure is a precision index for each point in the structure, assuming that all the other points are precisely located. The measure is defined and two numerical methods are presented for its calculation. A small Monte Carlo study of the measure's behavior is performed and findings discussed. The authors are indebted to Bert F. Green, Jr., Ronald Helms, Andrea Sedlak, and three anonymous reviewers for their valuable comments on earlier drafts of this paper.

10.
In response to Arabie, several random ranking studies are compared and discussed. Differences are typically very small; however, it is noted that those studies which used arbitrary configurations tend to produce slightly higher stress values. The choice of starting configuration is discussed and we suggest that the use of a principal components decomposition of the doubly centered matrix of dissimilarities, or some transformation thereof, will yield an initial configuration which is superior to a randomly chosen one. This research was supported by the National Research Council of Canada (Grant No. A8351) and by the National Institute of Mental Health (Grant Nos. MH10006 and MH26504). The authorship order has been determined by Monte Carlo methods.
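The suggested non-random start is essentially a classical (Torgerson) scaling of the doubly centered matrix: square the dissimilarities, double-center, and take the leading eigenvectors scaled by the square roots of their eigenvalues. A minimal sketch, with illustrative names and toy data:

```python
import numpy as np

def torgerson_start(D, dim=2):
    """Classical-scaling start configuration from a dissimilarity matrix D.

    Doubly centers the matrix of squared dissimilarities and uses the top
    eigenvectors (scaled by the square roots of their eigenvalues) as the
    initial coordinates, in place of a random start.
    """
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J              # doubly centered matrix
    evals, evecs = np.linalg.eigh(B)
    idx = np.argsort(evals)[::-1][:dim]      # largest eigenvalues first
    evals = np.clip(evals[idx], 0, None)     # guard against small negatives
    return evecs[:, idx] * np.sqrt(evals)

# toy dissimilarities from 6 random points in the plane
rng = np.random.default_rng(2)
P = rng.normal(size=(6, 2))
D = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=-1)
X0 = torgerson_start(D, dim=2)
print(np.round(X0, 3))
```

For dissimilarities that are already approximately Euclidean distances, this recovers the underlying configuration up to rotation and translation, which is why it tends to be a better starting point than a random configuration.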

11.
Restricted multidimensional scaling models for asymmetric proximities
Restricted multidimensional scaling models [Bentler & Weeks, 1978], allowing constraints on parameters, are extended to the case of asymmetric data. Separate functions are used to model the symmetric and antisymmetric parts of the data. The approach is also extended to the case in which data are presumed to be linearly related to squared distances. Examples of several models are provided, using journal citation data. Possible extensions of the models are considered. This research was supported in part by USPHS Grant 0A01070, P. M. Bentler, principal investigator, and NIMH Grant MH-24819, E. J. Anthony and J. Worland, principal investigators. The authors wish to thank E. W. Holman and several anonymous reviewers for their valuable suggestions concerning this research.
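The symmetric/antisymmetric split referred to above is simply M = (M + Mᵀ)/2 + (M − Mᵀ)/2, with each part then modeled by its own function. The sketch below shows only the decomposition; the restricted-model fitting itself is not reproduced, and the toy matrix is illustrative.

```python
import numpy as np

def sym_skew_split(M):
    """Split an asymmetric proximity matrix into its symmetric and
    antisymmetric (skew-symmetric) parts, M = S + A, which such models
    fit with separate functions."""
    S = (M + M.T) / 2.0
    A = (M - M.T) / 2.0
    return S, A

# toy asymmetric "citation-like" matrix
M = np.array([[0., 5., 1.],
              [2., 0., 4.],
              [3., 1., 0.]])
S, A = sym_skew_split(M)
assert np.allclose(S + A, M) and np.allclose(A, -A.T)
print(S, A, sep="\n")
```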

12.
13.
This paper suggests a method to supplant missing categorical data by reasonable replacements. These replacements will maximize the consistency of the completed data as measured by Guttman's squared correlation ratio. The text outlines a solution of the optimization problem, describes relationships with the relevant psychometric theory, and studies some properties of the method in detail. The main result is that the average correlation should be at least 0.50 before the method becomes practical. At that point, the technique gives reasonable results up to 10–15% missing data. We thank Anneke Bloemhoff of NIPG-TNO for compiling and making the Dutch Life Style Survey data available to us, and Chantal Houée and Thérèse Bardaine, IUT, Vannes, France, exchange students under the COMETT program of the EC, for computational assistance. We also thank Donald Rubin, the Editors and several anonymous reviewers for constructive suggestions.

14.
In the distance approach to nonlinear multivariate data analysis the focus is on the optimal representation of the relationships between the objects in the analysis. In this paper two methods are presented for including weights in distance-based nonlinear multivariate data analysis. In the first method, weights are assigned to the objects, while the second method is concerned with differential weighting of groups of variables. When each analysis variable defines a group, the latter method becomes a variable weighting method. For objects the weights are assumed to be given; for groups of variables they may be given, or estimated. These weighting schemes can also be combined and have several important applications. For example, they make it possible to perform efficient analyses of large data sets, to use the distance-based variety of nonlinear multivariate data analysis as an addition to loglinear analysis of multiway contingency tables, and to do stability studies of the solutions by applying the bootstrap on the objects or the variables in the analysis. These and other applications are discussed, and an efficient algorithm is proposed to minimize the corresponding loss function. This study is funded by The Netherlands Organization for Scientific Research (NWO) by grant nr. 030-56403 for the PIONEER project Subject Oriented Multivariate Analysis to the third author.

15.
A Monte Carlo study was carried out in order to investigate the ability of ALSCAL to recover true structure inherent in simulated proximity measures when portions of the data are missing. All sets of simulated proximity measures were based on 30 stimuli and three dimensions, and selection of missing elements was done randomly. Properties of the simulated data varied according to (a) the number of individuals, (b) the level of random error, (c) the proportion of missing data, and (d) whether the same entries or different entries were deleted for each individual. Results showed that very accurate recovery of true distances, stimulus coordinates, and weight vectors could be achieved with as much as 60% missing data as long as sample size was sufficiently large and the level of random error was low.

16.
Together with melody, harmony, and timbre, rhythm and beat provide temporal structure for movement timing. Such musical features may act as cues to the phrasing and dynamics of a dance choreographed to the music. Novice dancers (N = 54) learned to criterion a novel 32-s dance-pop routine, either to full music or to the rhythm of that music. At test, participants recalled the dance to the same music, rhythm, new music, and in silence. If musical features aid memory, then full music during learning and test should result in superior dance recall, whereas if rhythm alone aids memory, then rhythm during learning and test should result in superior recall. The presence of a rhythm accompaniment during learning provided a significantly greater memory advantage for the recall of dance-pop steps than full music. After learning to full music, silence at test enhanced recall. Findings are discussed in terms of entrainment and cognitive load. Copyright © 2014 John Wiley & Sons, Ltd.

17.
18.
A new computational method to fit the weighted euclidean distance model
This paper describes a computational method for weighted euclidean distance scaling which combines aspects of an analytic solution with an approach using loss functions. We justify this new method by giving a simplified treatment of the algebraic properties of a transformed version of the weighted distance model. The new algorithm is much faster than INDSCAL yet less arbitrary than other analytic procedures. The procedure, which we call SUMSCAL (subjective metric scaling), gives essentially the same solutions as INDSCAL for two moderate-size data sets tested. Comments by J. Douglas Carroll and J. B. Kruskal have been very helpful in preparing this paper.
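For reference, the weighted euclidean distance model being fitted is d_{ijk} = sqrt(Σ_t w_{kt}(x_{it} − x_{jt})²): each subject k stretches or shrinks the dimensions of a common stimulus space. The sketch below only evaluates these model distances for given parameters; the SUMSCAL estimation procedure itself (the analytic transformation plus loss functions) is not shown, and all names are illustrative.

```python
import numpy as np

def weighted_euclidean_distances(X, W):
    """Distances implied by the weighted euclidean (INDSCAL-type) model.

    X : (n_stimuli, dim) common stimulus space
    W : (n_subjects, dim) nonnegative dimension weights per subject
    Returns an array of shape (n_subjects, n_stimuli, n_stimuli) with
    d[k, i, j] = sqrt(sum_t W[k, t] * (X[i, t] - X[j, t]) ** 2).
    """
    diff2 = (X[:, None, :] - X[None, :, :]) ** 2       # (n, n, dim)
    return np.sqrt(np.einsum('kt,ijt->kij', W, diff2))

# toy example: 4 stimuli in 2 dimensions, 2 subjects with different weights
X = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])
W = np.array([[1.0, 0.25],      # subject 1 stresses dimension 1
              [0.25, 1.0]])     # subject 2 stresses dimension 2
D = weighted_euclidean_distances(X, W)
print(np.round(D[0], 3))
print(np.round(D[1], 3))
```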

19.
The choice of constraints in correspondence analysis
A discussion of alternative constraint systems has been lacking in the literature on correspondence analysis and related techniques. This paper reiterates earlier results that an explicit choice of constraints has to be made which can have important effects on the resulting scores. The paper also presents new results on dealing with missing data and probabilistic category assignment. I am most grateful to the following for their helpful comments: Arto Demirjian, Michael Greenacre, Michael Healy, Shizuhiko Nishisato, Roderick McDonald, and several anonymous referees.

20.
The study examines the effects of social identity and knowledge quality on knowledge transfer across groups. One hundred and forty-four students performed a production task in three-person groups. Midway through the task, a member from a different group rotated into each group. The primary dependent variable was whether the group adopted the production routine of the rotating member. Analyses revealed the predicted main and interactive effects. Groups were more likely to adopt the routine of a rotator when they shared a superordinate social identity with that member than when they did not. Groups were also more likely to adopt a routine from a rotator when it was superior than when it was inferior to their own. Further, superordinate groups adopted the production routine of the rotator when it was superior but not inferior to their own, whereas groups that did not share a superordinate identity with the rotator generally did not adopt the rotator's production routine, even when it was superior to their own and would have improved their performance.
