65 search results found (search time: 15 ms)
21.
A maximum likelihood estimation procedure is developed for multidimensional scaling when (dis)similarity measures are taken by ranking procedures such as the method of conditional rank orders or the method of triadic combinations. The central feature of these procedures may be termed the directionality of the ranking process: rank orderings are performed in a prescribed order by successive first choices. Such data have conventionally been analyzed by Shepard-Kruskal-type nonmetric multidimensional scaling procedures. We propose, as a more appropriate alternative, a maximum likelihood method specifically designed for this type of data. A broader perspective on the present approach is given, encompassing a wide variety of experimental methods for collecting dissimilarity data, including pair comparison methods (such as the method of tetrads) and the pick-M method of similarities. An example is given to illustrate various advantages of nonmetric maximum likelihood multidimensional scaling as a statistical method. At the moment the approach is limited to one-mode two-way proximity data, but it could be extended in a relatively straightforward way to two-mode two-way, two-mode three-way, or even three-mode three-way data, under the assumption of such models as INDSCAL or the two- or three-way unfolding models.

The first author's work was supported partly by the Natural Sciences and Engineering Research Council of Canada, grant number A6394. Portions of this research were done while the first author was at Bell Laboratories. MAXSCAL-4.1, a program to perform the computations described in this paper, can be obtained by writing to: Computing Information Service, Attention: Ms. Carole Scheiderman, Bell Laboratories, 600 Mountain Ave., Murray Hill, N.J. 07974. Thanks are due to Yukio Inukai, who generously let us use his stimuli in our experiment, and to Jim Ramsay for his helpful comments on an earlier draft of this paper. Confidence regions in Figures 2 and 3 were drawn by a program written by Jim Ramsay. We are also indebted to the anonymous reviewers for their suggestions.
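The successive-first-choice structure described above can be sketched with a Luce-type choice rule: at each step the subject picks the most similar remaining stimulus with probability proportional to exp(-distance). This is only an illustrative sketch of a directional ranking likelihood; the function name and the exponential-decay choice rule are assumptions, not the paper's exact specification.

```python
import numpy as np

def directional_rank_loglik(d, order):
    """Log-likelihood of one conditional rank order under a Luce-type
    successive-first-choice model: d[j] is the model distance from the
    reference stimulus to comparison stimulus j, and `order` lists the
    stimuli as ranked (most similar first). At each step the probability
    of choosing j from the remaining set is exp(-d[j]) / sum of exp(-d)
    over the remaining stimuli. Illustrative sketch only."""
    remaining = list(range(len(d)))
    ll = 0.0
    for j in order:
        w = np.exp(-d[remaining])          # choice weights of remaining stimuli
        ll += -d[j] - np.log(w.sum())      # log P(choose j | remaining)
        remaining.remove(j)
    return ll
```

Because each ranking step conditions on what remains, the same distances give different likelihoods for different prescribed orders, which is exactly the directionality the abstract emphasizes.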
22.
A method for structural analysis of multivariate data is proposed that combines features of regression analysis and principal component analysis. In this method, the original data are first decomposed into several components according to external information. The components are then subjected to principal component analysis to explore structures within them. It is shown that this requires the generalized singular value decomposition of a matrix with certain metric matrices. A numerical method based on the QR decomposition is described, which simplifies the computation considerably. The proposed method includes a number of interesting special cases, whose relations to existing methods are discussed. Examples are given to demonstrate practical uses of the method.

The work reported in this paper was supported by grant A6394 from the Natural Sciences and Engineering Research Council of Canada to the first author. Thanks are due to Jim Ramsay, Haruo Yanai, Henk Kiers, and Shizuhiko Nishisato for their insightful comments on earlier versions of this paper. Jim Ramsay, in particular, suggested the use of the QR decomposition, which simplified the presentation of the paper considerably.
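The generalized singular value decomposition with metric matrices mentioned above can be sketched as follows. The paper's own computation goes through the QR decomposition; this sketch takes the simpler Cholesky route instead, under the assumption that both metric matrices are positive definite.

```python
import numpy as np

def gsvd(Z, K, L):
    """Generalized SVD of Z with positive-definite row metric K and column
    metric L: Z = U diag(d) V', with U'KU = I and V'LV = I. Computed here
    via Cholesky factors of the metrics (a sketch; the paper describes a
    QR-based computation instead)."""
    Rk = np.linalg.cholesky(K).T                # K = Rk' Rk
    Rl = np.linalg.cholesky(L).T                # L = Rl' Rl
    P, d, Qt = np.linalg.svd(Rk @ Z @ Rl.T, full_matrices=False)
    U = np.linalg.solve(Rk, P)                  # U = Rk^{-1} P
    V = np.linalg.solve(Rl, Qt.T)               # V = Rl^{-1} Q
    return U, d, V
```

With K = I and L = I this reduces to the ordinary SVD, which is one of the special cases the abstract alludes to.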
23.
A new procedure is discussed which fits either the weighted or simple Euclidean model to data that may (a) be defined at the nominal, ordinal, interval, or ratio level of measurement; (b) have missing observations; (c) be symmetric or asymmetric; (d) be conditional or unconditional; (e) be replicated or unreplicated; and (f) be continuous or discrete. Special cases of the procedure include the most commonly used individual differences multidimensional scaling models, the familiar nonmetric multidimensional scaling model, and several other previously undiscussed variants.

The procedure optimizes the fit of the model directly to the data (not to scalar products determined from the data) by an alternating least squares procedure which is convergent, very quick, and relatively free from local minimum problems.

The procedure is evaluated via both Monte Carlo and empirical data. It is found to be robust in the face of measurement error, capable of recovering the true underlying configuration in the Monte Carlo situation, and capable of obtaining structures equivalent to those obtained by other, less general procedures in the empirical situation.

This project was supported in part by Research Grant No. MH10006 and Research Grant No. MH26504, awarded by the National Institute of Mental Health, DHEW. We wish to thank Robert F. Baker, J. Douglas Carroll, Joseph Kruskal, and Amnon Rapoport for comments on an earlier draft of this paper. Portions of the research reported here were presented at the spring meeting of the Psychometric Society, 1975. ALSCAL, a program to perform the computations discussed in this paper, may be obtained from any of the authors. Jan de Leeuw is currently at Datatheorie, Central Rekeninstituut, Wassenaarseweg 80, Leiden, The Netherlands. Yoshio Takane can be reached at the Department of Psychology, University of Tokyo, Tokyo, Japan.
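The squared-distance loss underlying this kind of fitting can be sketched numerically. The code below minimizes SSTRESS, the sum of squared differences between squared dissimilarities and squared model distances, by plain gradient descent with a backtracking step size; it illustrates only the loss, not ALSCAL's actual alternating least squares updates or its optimal scaling of the data (the function name and step-size scheme are assumptions).

```python
import numpy as np

def fit_sstress(Delta, p=2, n_iter=300, seed=0):
    """Fit a simple Euclidean configuration X (n x p) to a dissimilarity
    matrix Delta by minimizing SSTRESS = sum_{i<j} (Delta[i,j]^2 - d_ij^2)^2
    with gradient descent; the step size grows on success and halves on
    failure, so the loss never increases. Sketch only, not ALSCAL's ALS."""
    rng = np.random.default_rng(seed)
    n = Delta.shape[0]
    X = rng.uniform(size=(n, p))
    D2t = Delta ** 2

    def loss_grad(X):
        diff = X[:, None, :] - X[None, :, :]       # pairwise coordinate differences
        R = D2t - (diff ** 2).sum(-1)              # residuals on squared distances
        loss = (np.triu(R, 1) ** 2).sum()
        G = -4.0 * (R[:, :, None] * diff).sum(axis=1)  # gradient of the loss w.r.t. X
        return loss, G

    loss, G = loss_grad(X)
    lr = 0.01
    for _ in range(n_iter):
        Xn = X - lr * G
        ln, Gn = loss_grad(Xn)
        if ln < loss:                              # accept step, speed up
            X, loss, G = Xn, ln, Gn
            lr *= 1.1
        else:                                      # reject step, back off
            lr *= 0.5
    return X, loss
```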
24.
Ideal point discriminant analysis
A new method of multiple discriminant analysis was developed that allows a mixture of continuous and discrete predictors. The method can be justified under a wide class of distributional assumptions on the predictor variables, and it can handle three different sampling situations: conditional, joint, and separate. In this method, both subjects (cases or other sampling units) and criterion groups are represented as points in a multidimensional Euclidean space. The probability that a particular subject belongs to a particular criterion group is stated as a decreasing function of the distance between the corresponding points. A maximum likelihood estimation procedure was developed and implemented in the form of a FORTRAN program. Detailed analyses of two real data sets are reported to demonstrate various advantages of the proposed method; these advantages mostly derive from its model evaluation capabilities based on the Akaike Information Criterion (AIC).

The work reported in this paper has been supported by Grant A6394 from the Natural Sciences and Engineering Research Council of Canada and by a leave grant from the Social Sciences and Humanities Research Council of Canada to the first author. Portions of this study were conducted while the first author was at the Institute of Statistical Mathematics in Tokyo on leave from McGill University. He would like to express his gratitude to members of the Institute for their hospitality. Thanks are also due to T. Komazawa at the Institute for letting us use his data, to W. J. Krzanowski at the University of Reading for providing us with Armitage, McPherson, and Copas' data, and to Don Ramirez, Jim Ramsay, and Stan Sclove for their helpful comments on an earlier draft of this paper.
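The core idea above, membership probability as a decreasing function of subject-to-group distance, can be sketched as a softmax over negative squared Euclidean distances. The function name and the optional bias term are illustrative assumptions; the paper's exact parameterization may differ.

```python
import numpy as np

def ipda_probs(Y, Z, bias=None):
    """Group-membership probabilities in the spirit of ideal point DA:
    P(group k | subject i) decreases with the squared Euclidean distance
    between subject point Y[i] and group point Z[k]. Y is (n x p), Z is
    (K x p); the optional per-group bias is an illustrative extra."""
    d2 = ((Y[:, None, :] - Z[None, :, :]) ** 2).sum(axis=2)   # (n, K) squared distances
    logits = -d2 if bias is None else bias - d2
    logits -= logits.max(axis=1, keepdims=True)               # numerical stability
    p = np.exp(logits)
    return p / p.sum(axis=1, keepdims=True)
```

Each subject is thus assigned the highest probability to the criterion group whose point lies nearest in the joint space.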
25.
Cross-classified data are frequently encountered in behavioral and social science research. The loglinear model and dual scaling (correspondence analysis) are two representative methods for analyzing such data. An alternative method, based on ideal point discriminant analysis (DA), is proposed for the analysis of contingency tables; in a certain sense it encompasses the two existing methods. A variety of interesting structures can be imposed on the rows and columns of the tables through manipulations of predictor variables and/or direct constraints on model parameters. This, along with maximum likelihood estimation of the model parameters, allows interesting model comparisons, as illustrated by the analysis of several data sets.

Presented as the Presidential Address to the Psychometric Society's Annual and European Meetings, June 1987. Preparation of this paper was supported by grant A6394 from the Natural Sciences and Engineering Research Council of Canada. Thanks are due to Chikio Hayashi of the University of the Air in Japan for providing the ISM data, and to Jim Ramsay and Ivo Molenaar for their helpful comments on an earlier draft of this paper.
26.
27.
It is reported that (1) a new coordinate estimation routine is superior to the one originally proposed for ALSCAL; (2) an oversight in the interval measurement level case has been found and corrected; and (3) a new initial configuration routine is superior to the original.
28.
In the “pick any/n” method, subjects are asked to choose any number of items from a list of n items according to some criterion. Such data can be analyzed as a special case of either multiple-choice data or successive categories data in which the number of response categories is limited to two. An item response model was proposed for the latter case that combines an unfolding model with a choice model. A marginal maximum likelihood estimation method was developed for parameter estimation to avoid incidental parameters, and an expectation-maximization algorithm was used for numerical optimization. Two examples of analysis are given to illustrate the proposed method, which we call MAXSC.
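A minimal sketch of a two-category unfolding response function of the kind described: the probability of picking an item is a logistic function of a threshold minus the squared distance between the person and item points, so it is single-peaked in the person location. The parameter names b (item location) and c (threshold) are hypothetical, not MAXSC's actual parameterization.

```python
import numpy as np

def pick_prob(theta, b, c):
    """Illustrative 'pick' probability combining an unfolding component
    (squared distance between person location theta and item location b)
    with a binary logistic choice component governed by threshold c.
    Hypothetical parameterization, for illustration only."""
    return 1.0 / (1.0 + np.exp(-(c - (theta - b) ** 2)))
```

The probability peaks when theta equals b and falls off symmetrically on either side, which is the defining single-peaked shape of an unfolding model.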
29.
A program is described for principal component analysis with external information on subjects and variables. The method is called constrained principal component analysis (CPCA); it combines regression analysis and principal component analysis into a unified framework that allows a full exploration of data structures both within and outside known information on subjects and variables. Many existing methods are special cases of CPCA, and the program can be used for multivariate multiple regression, redundancy analysis, double redundancy analysis, dual scaling with external criteria, vector preference models, and GMANOVA (growth curve models).
30.
A generalization of Takane's algorithm for DEDICOM
An algorithm is described for fitting the DEDICOM model for the analysis of asymmetric data matrices. This algorithm generalizes an algorithm suggested by Takane in that it uses a damping parameter in the iterative process; Takane's algorithm does not always converge monotonically. Based on the generalized algorithm, a modification of Takane's algorithm is suggested that does converge monotonically. It is suggested to choose as starting configurations for the algorithm those configurations that yield closed-form solutions in some special cases. Finally, a sufficient condition is described for monotonic convergence of Takane's original algorithm.

Financial support by the Netherlands Organization for Scientific Research (NWO) is gratefully acknowledged. The authors are obliged to Richard Harshman.
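A damped iteration of this general shape can be sketched for the DEDICOM model X ≈ ARA' with column-orthonormal A. Here the A-update orthonormalizes XAR' + X'AR + βA, with β playing the role of a damping parameter; this is a sketch of the update's general form under assumptions, not the paper's exact algorithm.

```python
import numpy as np

def fit_dedicom(X, p, beta=1.0, n_iter=200, seed=0):
    """Sketch of a damped iterative fit of the DEDICOM model X ~ A R A',
    with column-orthonormal A (n x p) and unconstrained R (p x p). Each
    iteration updates R in closed form, then re-orthonormalizes a damped
    update of A. Assumed update form, for illustration only."""
    rng = np.random.default_rng(seed)
    A, _ = np.linalg.qr(rng.standard_normal((X.shape[0], p)))
    for _ in range(n_iter):
        R = A.T @ X @ A                         # least-squares R given orthonormal A
        M = X @ A @ R.T + X.T @ A @ R + beta * A  # damped A-update
        A, _ = np.linalg.qr(M)                  # re-orthonormalize
    return A, A.T @ X @ A
```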