Search results: 339 records; the first 10 are listed below.
1.
Canonical analysis of two convex polyhedral cones and applications
Canonical analysis of two convex polyhedral cones consists in looking for two vectors (one in each cone) whose squared cosine is maximal. This paper presents new results about the properties of the optimal solution to this problem, and also discusses in detail the convergence of an alternating least squares algorithm. The set of scalings of an ordinal variable is a convex polyhedral cone, which thus plays an important role in optimal scaling methods for the analysis of ordinal data. Monotone analysis of variance, and correspondence analysis subject to an ordinal constraint on one of the factors, are both canonical analyses of a convex polyhedral cone and a subspace. Optimal multiple regression of a dependent ordinal variable on a set of independent ordinal variables is a canonical analysis of two convex polyhedral cones as long as the signs of the regression coefficients are given. We discuss these three situations and illustrate them with examples.
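As a hedged illustration of the alternating least squares idea (a sketch, not the authors' implementation): if each cone is generated by the nonnegative span of the columns of a matrix, then the cosine-maximizing element of a cone for a fixed target vector is its nonnegative least squares projection, and the two projections can simply be alternated. The generator matrices `A` and `B` and all settings below are invented for the example.

```python
import numpy as np
from scipy.optimize import nnls

def cone_project(G, v):
    """Euclidean projection of v onto the cone {G @ w : w >= 0} via nonnegative least squares."""
    w, _ = nnls(G, v)
    return G @ w

def als_cone_cosine(A, B, n_iter=200, tol=1e-12, seed=0):
    """Alternate projections to find u in cone(A) and v in cone(B) with maximal squared cosine."""
    rng = np.random.default_rng(seed)
    v = B @ rng.random(B.shape[1])               # arbitrary starting element of cone(B)
    cos2_old = 0.0
    for _ in range(n_iter):
        u = cone_project(A, v)                   # best element of cone(A) for the current v
        v = cone_project(B, u)                   # best element of cone(B) for the current u
        cos2 = (u @ v) ** 2 / ((u @ u) * (v @ v) + 1e-300)
        if cos2 - cos2_old < tol:                # the squared cosine increases monotonically
            break
        cos2_old = cos2
    return u, v, cos2

# Toy cones spanned by random nonnegative generators (illustrative only)
rng = np.random.default_rng(1)
A = rng.random((20, 3))
B = rng.random((20, 4))
u, v, cos2 = als_cone_cosine(A, B)
print(f"maximal squared cosine found: {cos2:.4f}")
```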
2.
This paper discusses least squares methods for fitting a reformulation of the general Euclidean model for the external analysis of preference data. The reformulated subject weights refer to a common set of reference vectors for all subjects and hence are comparable across subjects. If the rotation of the stimulus space is fixed, the subject weight estimates in the model are uniquely determined. Weight estimates can be guaranteed nonnegative. While the reformulation is a metric model for single stimulus data, the paper briefly discusses extensions to nonmetric, pairwise, and logistic models. The reformulated model is less general than Carroll's earlier formulation. The author is grateful to Christopher J. Nachtsheim for his helpful suggestions.
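The reformulated general Euclidean model is more elaborate than anything shown here; as a hedged, simplified illustration of an "external" analysis with nonnegative subject weights, the sketch below fits a plain vector model by nonnegative least squares, treating the stimulus configuration `X` as fixed and external. All names and data are invented.

```python
import numpy as np
from scipy.optimize import nnls

def external_nonnegative_weights(X, P):
    """For each subject j, find w_j >= 0 such that P[:, j] ≈ X @ w_j (fixed stimulus space X)."""
    return np.column_stack([nnls(X, P[:, j])[0] for j in range(P.shape[1])])

rng = np.random.default_rng(0)
X = rng.normal(size=(12, 3))                        # 12 stimuli on 3 fixed reference dimensions
W_true = rng.random((3, 5))                         # nonnegative weights for 5 subjects
P = X @ W_true + 0.05 * rng.normal(size=(12, 5))    # single-stimulus preference scores
W_hat = external_nonnegative_weights(X, P)
print(np.round(W_hat - W_true, 2))                  # small residuals if the model fits
```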
3.
This paper suggests a method to fill in missing categorical data with reasonable replacements. These replacements maximize the consistency of the completed data as measured by Guttman's squared correlation ratio. The text outlines a solution of the optimization problem, describes relationships with the relevant psychometric theory, and studies some properties of the method in detail. The main result is that the average correlation should be at least 0.50 before the method becomes practical. At that point, the technique gives reasonable results with up to 10–15% missing data. We thank Anneke Bloemhoff of NIPG-TNO for compiling and making the Dutch Life Style Survey data available to us, and Chantal Houée and Thérèse Bardaine, IUT, Vannes, France, exchange students under the COMETT program of the EC, for computational assistance. We also thank Donald Rubin, the Editors, and several anonymous reviewers for constructive suggestions.
4.
The paper derives sufficient conditions for the consistency and asymptotic normality of the least squares estimator of a trilinear decomposition model for multiway data analysis.
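The trilinear decomposition is not spelled out in the abstract; assuming the familiar rank-R CP/PARAFAC form X[i,j,k] ≈ Σ_r A[i,r] B[j,r] C[k,r], a generic least squares fit by alternating least squares (a sketch, not the paper's estimator) looks like this:

```python
import numpy as np

def khatri_rao(B, C):
    """Column-wise Khatri-Rao product: row (j, k) of the result is B[j, :] * C[k, :]."""
    return (B[:, None, :] * C[None, :, :]).reshape(-1, B.shape[1])

def cp_als(X, R, n_iter=200, seed=0):
    """Least squares fit of X[i,j,k] ≈ sum_r A[i,r] B[j,r] C[k,r] by alternating least squares."""
    I, J, K = X.shape
    rng = np.random.default_rng(seed)
    A, B, C = (rng.normal(size=(d, R)) for d in (I, J, K))
    X1 = X.reshape(I, J * K)                        # mode-1 unfolding
    X2 = X.transpose(1, 0, 2).reshape(J, I * K)     # mode-2 unfolding
    X3 = X.transpose(2, 0, 1).reshape(K, I * J)     # mode-3 unfolding
    for _ in range(n_iter):
        A = X1 @ np.linalg.pinv(khatri_rao(B, C)).T     # solve for A with B, C fixed
        B = X2 @ np.linalg.pinv(khatri_rao(A, C)).T     # solve for B with A, C fixed
        C = X3 @ np.linalg.pinv(khatri_rao(A, B)).T     # solve for C with A, B fixed
    return A, B, C

# Recover a noisy rank-2 tensor (illustrative data only)
rng = np.random.default_rng(1)
A0, B0, C0 = rng.normal(size=(6, 2)), rng.normal(size=(5, 2)), rng.normal(size=(4, 2))
X = np.einsum('ir,jr,kr->ijk', A0, B0, C0) + 0.01 * rng.normal(size=(6, 5, 4))
A, B, C = cp_als(X, R=2)
X_hat = np.einsum('ir,jr,kr->ijk', A, B, C)
print("relative reconstruction error:", np.linalg.norm(X - X_hat) / np.linalg.norm(X))
```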
5.
An Extended Two-Way Euclidean Multidimensional Scaling (MDS) model which assumes both common and specific dimensions is described and contrasted with the standard (two-way) MDS model. In this Extended Two-Way Euclidean model the n stimuli (or other objects) are assumed to be characterized by coordinates on R common dimensions. In addition, each stimulus is assumed to have a dimension (or dimensions) specific to it alone. The overall distance between object i and object j is then defined as the square root of the ordinary squared Euclidean distance plus terms denoting the specificity of each object. The specificity, s_i, can be thought of as the sum of squares of coordinates on those dimensions specific to object i, all of which have nonzero coordinates only for object i. (In practice, we may think of there being just one such specific dimension for each object, as this situation is mathematically indistinguishable from the case in which there are more than one.) We further assume that δ_ij = F(d_ij) + e_ij, where δ_ij is the proximity value (e.g., similarity or dissimilarity) of objects i and j, d_ij is the extended Euclidean distance defined above, and e_ij is an error term assumed i.i.d. N(0, σ²). F is assumed to be either a linear function (in the metric case) or a monotone spline of specified form (in the quasi-nonmetric case). A numerical procedure alternating a modified Newton-Raphson algorithm with an algorithm for fitting an optimal monotone spline (or linear function) is used to secure maximum likelihood estimates of the parameters; likelihood-based statistics can then be used to test hypotheses about the number of common dimensions, and/or the existence of specific (in addition to R common) dimensions. This approach is illustrated with applications to both artificial data and real data on judged similarity of nations.
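A small numeric sketch of the extended distance just described (coordinates and specificities below are invented): the squared distance adds the two objects' specificities s_i and s_j to the ordinary squared Euclidean distance on the R common dimensions, i.e. d_ij = sqrt( Σ_r (x_ir − x_jr)² + s_i + s_j ).

```python
import numpy as np

def extended_distances(X, s):
    """X: (n, R) coordinates on the common dimensions; s: (n,) nonnegative specificities."""
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)   # ordinary squared distances
    d2 = sq + s[:, None] + s[None, :]                            # add both objects' specificities
    np.fill_diagonal(d2, 0.0)                                    # an object is at distance 0 from itself
    return np.sqrt(d2)

X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])   # three objects, R = 2 common dimensions
s = np.array([0.25, 0.00, 1.00])                      # object-specific terms
print(extended_distances(X, s).round(3))
```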
6.
Correspondence analysis used complementary to loglinear analysis
Loglinear analysis and correspondence analysis provide two different methods for the decomposition of contingency tables. In this paper we show that there are cases in which these two techniques can be used as complements to each other. More specifically, we show that correspondence analysis can often be viewed as providing a decomposition of the difference between two matrices, each following a specific loglinear model. In these cases the correspondence analysis solution can therefore be interpreted in terms of the difference between these loglinear models. A generalization of correspondence analysis, recently proposed by Escofier, is also discussed. With this decomposition, which includes classical correspondence analysis as a special case, correspondence analysis can be used complementary to loglinear analysis in more instances than those described for classical correspondence analysis. In this context, correspondence analysis is used for the decomposition of the residuals of specific restricted loglinear models.
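For the simplest special case, the independence (main-effects) loglinear model, classical correspondence analysis can be computed as a singular value decomposition of the standardized residuals from that model. The sketch below illustrates this; the contingency table is invented, and this is only the classical special case, not the generalized decomposition discussed in the paper.

```python
import numpy as np

def correspondence_analysis(N):
    """Classical CA as an SVD of standardized residuals from the independence model."""
    P = N / N.sum()
    r, c = P.sum(axis=1), P.sum(axis=0)        # row and column masses
    E = np.outer(r, c)                         # expected proportions under independence
    S = (P - E) / np.sqrt(E)                   # standardized residuals
    U, sv, Vt = np.linalg.svd(S, full_matrices=False)
    F = (U * sv) / np.sqrt(r)[:, None]         # principal row coordinates
    G = (Vt.T * sv) / np.sqrt(c)[:, None]      # principal column coordinates
    return F, G, sv ** 2                       # coordinates and principal inertias

N = np.array([[50., 30., 20.],
              [20., 40., 40.],
              [30., 30., 40.]])
F, G, inertias = correspondence_analysis(N)
print("principal inertias:", inertias.round(4))
```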
7.
Three-way metric unfolding via alternating weighted least squares
Three-way unfolding was developed by DeSarbo (1978) and reported in DeSarbo and Carroll (1980, 1981) as a new model to accommodate the analysis of two-mode, three-way data (e.g., nonsymmetric proximities for stimulus objects collected over time) and three-mode, three-way data (e.g., subjects rendering preference judgments for various stimuli in different usage occasions or situations). This paper presents a revised objective function and a new algorithm that attempt to prevent the common type of degenerate solutions encountered in typical unfolding analyses. We begin with an introduction to the problem and a review of three-way unfolding. The three-way unfolding model, weighted objective function, and new algorithm are then presented. Monte Carlo work, based on a fractional factorial experimental design, investigates the effect of several data and model factors on overall algorithm performance. Finally, three applications of the methodology are reported, illustrating the flexibility and robustness of the procedure. We wish to thank the editor and reviewers for their insightful comments.
8.
The properties of nonmetric multidimensional scaling (NMDS) are explored by specifying statistical models, proving statistical consistency, and developing hypothesis testing procedures. Statistical models with errors in the dependent and independent variables are described for quantitative and qualitative data. For these models, statistical consistency often depends crucially upon how error enters the model and how data are collected and summarized (e.g., by means, medians, or rank statistics). A maximum likelihood estimator for NMDS is developed, and its relationship to the standard Shepard-Kruskal estimation method is described. This maximum likelihood framework is used to develop a method for testing the overall fit of the model.
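As a toy, hedged analogue of the likelihood framework (not the paper's nonmetric estimator): if observed dissimilarities are modeled as true Euclidean distances plus i.i.d. normal errors, the configuration can be estimated by maximizing the profile likelihood, and differences in maximized log-likelihood between nested dimensionalities provide the raw material for fit tests. Everything below is simulated.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.distance import pdist

def neg_profile_loglik(flat_X, delta, n, r):
    """-log L for delta_ij = d_ij(X) + e_ij, e_ij ~ N(0, sigma^2), with sigma^2 profiled out."""
    X = flat_X.reshape(n, r)
    resid = pdist(X) - delta
    sigma2 = np.mean(resid ** 2) + 1e-12
    m = resid.size
    return 0.5 * m * (np.log(2 * np.pi * sigma2) + 1.0)

rng = np.random.default_rng(0)
n, r = 10, 2
X_true = rng.normal(size=(n, r))
delta = pdist(X_true) + rng.normal(scale=0.05, size=n * (n - 1) // 2)   # simulated dissimilarities

res = minimize(neg_profile_loglik, rng.normal(size=n * r), args=(delta, n, r), method="L-BFGS-B")
print("maximized log-likelihood:", -res.fun)
```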
9.
Three methods for estimating reliability are studied within the context of nonparametric item response theory. Two were proposed originally by Mokken (1971) and a third is developed in this paper. Using a Monte Carlo strategy, these three estimation methods are compared with four classical lower bounds to reliability. Finally, recommendations are given concerning the use of these estimation methods. The authors are grateful for constructive comments from the reviewers and from Charles Lewis.
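The abstract does not say which four classical lower bounds are used; as a hedged illustration of the kind of quantity involved, the sketch below computes two standard classical lower bounds to reliability, Cronbach's alpha (Guttman's λ3) and Guttman's λ2, from a simulated matrix of dichotomous item scores.

```python
import numpy as np

def classical_lower_bounds(scores):
    """scores: (persons, items). Returns (alpha, lambda2), two classical lower bounds to reliability."""
    C = np.cov(scores, rowvar=False)          # inter-item covariance matrix
    k = C.shape[0]
    total_var = C.sum()                       # variance of the total score
    item_var = np.trace(C)
    alpha = k / (k - 1) * (1 - item_var / total_var)
    off_sq = np.sum(C ** 2) - np.sum(np.diag(C) ** 2)      # sum of squared off-diagonal covariances
    lambda2 = 1 - item_var / total_var + np.sqrt(k / (k - 1) * off_sq) / total_var
    return alpha, lambda2

rng = np.random.default_rng(0)
theta = rng.normal(size=(500, 1))                                   # latent trait
scores = (theta + rng.normal(size=(500, 8)) > 0).astype(float)      # 8 dichotomous items
alpha, lambda2 = classical_lower_bounds(scores)
print(f"alpha = {alpha:.3f}, lambda2 = {lambda2:.3f}")
```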
10.
The choice of constraints in correspondence analysis
A discussion of alternative constraint systems has been lacking in the literature on correspondence analysis and related techniques. This paper reiterates earlier results that an explicit choice of constraints has to be made, a choice which can have important effects on the resulting scores. The paper also presents new results on dealing with missing data and probabilistic category assignment. I am most grateful to the following for their helpful comments: Arto Demirjian, Michael Greenacre, Michael Healy, Shizuhiko Nishisato, Roderick McDonald, and several anonymous referees.