1.
This paper discusses least squares methods for fitting a reformulation of the general Euclidean model for the external analysis of preference data. The reformulated subject weights refer to a common set of reference vectors for all subjects and hence are comparable across subjects. If the rotation of the stimulus space is fixed, the subject weight estimates in the model are uniquely determined. Weight estimates can be guaranteed nonnegative. While the reformulation is a metric model for single-stimulus data, the paper briefly discusses extensions to nonmetric, pairwise, and logistic models. The reformulated model is less general than Carroll's earlier formulation. The author is grateful to Christopher J. Nachtsheim for his helpful suggestions.
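The nonnegativity guarantee on subject weights can be illustrated with a generic nonnegative least squares fit: given a fixed stimulus configuration, a subject's weights are estimated under a w ≥ 0 constraint. This is a minimal projected-gradient sketch on made-up data, not the paper's actual estimator; all names and dimensions are illustrative.

```python
import numpy as np

def nnls_projected_gradient(A, b, iters=20000):
    """Minimize ||A w - b||^2 subject to w >= 0 by projected gradient descent."""
    # Step size 1 / L, where L is the largest eigenvalue of A^T A (Lipschitz constant)
    L = np.linalg.eigvalsh(A.T @ A).max()
    w = np.zeros(A.shape[1])
    for _ in range(iters):
        # Gradient step on the least squares objective, then clip to the nonnegative orthant
        w = np.maximum(0.0, w - (A.T @ (A @ w - b)) / L)
    return w

rng = np.random.default_rng(0)
A = rng.normal(size=(10, 3))          # hypothetical stimulus configuration: 10 stimuli, 3 dimensions
w_true = np.array([0.5, 0.0, 1.2])    # nonnegative subject weights to recover
b = A @ w_true                        # noiseless preference scores for one subject
w_hat = nnls_projected_gradient(A, b)
print(np.round(w_hat, 3))
```

With noiseless data and a full-rank configuration the constrained fit recovers the generating weights, including the zero weight that an unconstrained fit could have driven negative under noise.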
2.
This paper develops a method of optimal scaling for multivariate ordinal data, in the framework of a generalized principal component analysis. This method yields a multidimensional configuration of items, a unidimensional scale of category weights for each item and, optionally, a multidimensional configuration of subjects. The computation is performed by alternately solving an eigenvalue problem and executing a quasi-Newton projection method. The algorithm is extended for analysis of data with mixed measurement levels or for analysis with a combined weighting of items. Numerical examples and simulations are provided. The algorithm is discussed and compared with some related methods. Earlier results of this research appeared in Saito and Otsu (1983). The authors would like to acknowledge the helpful comments and encouragement of the editor.
3.
Hierarchical classes: Model and data analysis
A discrete, categorical model and a corresponding data-analysis method are presented for two-way two-mode (objects × attributes) data arrays with 0, 1 entries. The model contains the following two basic components: a set-theoretical formulation of the relations among objects and attributes, and a Boolean decomposition of the matrix. The set-theoretical formulation defines a subset of the possible decompositions as consistent with it. A general method for graphically representing the set-theoretical decomposition is described. The data-analysis algorithm, dubbed HICLAS, aims at recovering the underlying structure in a data matrix by minimizing the discrepancies between the data and the recovered structure. HICLAS is evaluated with a simulation study and two empirical applications. This research was supported in part by a grant from the Belgian NSF (NFWO) to Paul De Boeck and in part by NSF Grant BNS-83-01027 to Seymour Rosenberg. We thank Iven Van Mechelen for clarifying several aspects of the Boolean algebraic formulation of the model and Phipps Arabie for his comments on an earlier draft.
4.
This paper suggests a method to supplant missing categorical data by reasonable replacements. These replacements will maximize the consistency of the completed data as measured by Guttman's squared correlation ratio. The text outlines a solution of the optimization problem, describes relationships with the relevant psychometric theory, and studies some properties of the method in detail. The main result is that the average correlation should be at least 0.50 before the method becomes practical. At that point, the technique gives reasonable results with up to 10–15% missing data. We thank Anneke Bloemhoff of NIPG-TNO for compiling the Dutch Life Style Survey data and making them available to us, and Chantal Houée and Thérèse Bardaine, IUT, Vannes, France, exchange students under the COMETT program of the EC, for computational assistance. We also thank Donald Rubin, the Editors, and several anonymous reviewers for constructive suggestions.
5.
Millsap and Meredith (1988) have developed a generalization of principal components analysis for the simultaneous analysis of a number of variables observed in several populations or on several occasions. The algorithm they provide has some disadvantages. The present paper offers two alternating least squares algorithms for their method, suitable for small and large data sets, respectively. Lower and upper bounds are given for the loss function to be minimized in the Millsap and Meredith method. These can serve to indicate whether or not a global optimum for the simultaneous components analysis problem has been attained. Financial support by the Netherlands Organization for Scientific Research (NWO) is gratefully acknowledged.
6.
An algorithm is presented for the best least-squares fitting correlation matrix approximating a given missing-value or improper correlation matrix. The proposed algorithm is based upon a solution for Mosier's oblique Procrustes rotation problem offered by ten Berge and Nevels. A necessary and sufficient condition is given for a solution to yield the unique global minimum of the least-squares function. Empirical verification of the condition indicates that the occurrence of non-optimal solutions with the proposed algorithm is very unlikely. A possible drawback of the optimal solution is that it is of necessity a singular matrix. In cases where singularity is undesirable, one may impose the additional nonsingularity constraint that the smallest eigenvalue of the solution be at least δ, where δ is an arbitrarily small positive constant. Finally, it may be desirable to weight the squared errors of estimation differentially. A generalized solution is derived which satisfies the additional nonsingularity constraint and also allows for weighting. The generalized solution can readily be obtained from the standard unweighted singular solution by transforming the observed improper correlation matrix in a suitable way.
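For orientation, the generic "nearest correlation matrix" problem can be sketched with simple alternating projections: clip negative eigenvalues (projection onto the positive semidefinite cone), then restore the unit diagonal. This is not the ten Berge–Nevels-based algorithm of the paper, and it omits Dykstra's correction, so it is only an approximation; the input matrix below is made up.

```python
import numpy as np

def nearest_correlation(R, iters=200):
    """Approximate the nearest correlation matrix by alternating projections."""
    X = np.array(R, dtype=float)
    for _ in range(iters):
        w, V = np.linalg.eigh((X + X.T) / 2)
        X = (V * np.clip(w, 0.0, None)) @ V.T   # project onto the PSD cone
        np.fill_diagonal(X, 1.0)                # project onto unit-diagonal matrices
    # Finish with a PSD projection plus diagonal rescaling, which keeps the result PSD
    w, V = np.linalg.eigh((X + X.T) / 2)
    X = (V * np.clip(w, 0.0, None)) @ V.T
    d = np.sqrt(np.diag(X))
    return X / np.outer(d, d)

# A made-up "improper" (indefinite) pseudo-correlation matrix
R = np.array([[1.0,  0.9,  0.7],
              [0.9,  1.0, -0.9],
              [0.7, -0.9,  1.0]])
C = nearest_correlation(R)
print(np.round(np.linalg.eigvalsh(C), 4))
```

Note the connection to the singularity issue in the abstract: the projection clips negative eigenvalues to exactly zero, so the nearest fit to an indefinite input is typically singular, which is why a smallest-eigenvalue floor can be desirable.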
7.
Correspondence analysis used complementary to loglinear analysis
Loglinear analysis and correspondence analysis provide two different methods for the decomposition of contingency tables. In this paper we show that there are cases in which these two techniques can be used to complement each other. More specifically, we show that correspondence analysis can often be viewed as providing a decomposition of the difference between two matrices, each following a specific loglinear model. In these cases, therefore, the correspondence analysis solution can be interpreted in terms of the difference between these loglinear models. A generalization of correspondence analysis, recently proposed by Escofier, is also discussed. With this decomposition, which includes classical correspondence analysis as a special case, it is possible to use correspondence analysis complementary to loglinear analysis in more instances than those described for classical correspondence analysis. In this context correspondence analysis is used to decompose the residuals of specific restricted loglinear models.
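Classical correspondence analysis fits this description directly: it is an SVD of the standardized residuals from the independence model, which is itself the simplest loglinear model for a two-way table. A minimal sketch with a made-up contingency table:

```python
import numpy as np

N = np.array([[20., 10.,  5.],
              [10., 15., 10.],
              [ 5., 10., 25.]])       # made-up two-way contingency table
n = N.sum()
P = N / n                             # correspondence matrix
r, c = P.sum(axis=1), P.sum(axis=0)   # row and column margins
E = np.outer(r, c)                    # expected proportions under independence
S = (P - E) / np.sqrt(E)              # standardized (Pearson) residuals
U, sv, Vt = np.linalg.svd(S)

# Total inertia equals Pearson's chi-square statistic divided by n
inertia = (sv ** 2).sum()
chi2 = n * ((P - E) ** 2 / E).sum()
print(round(inertia, 6), round(chi2 / n, 6))

# Principal row coordinates on the first two dimensions
F = (U[:, :2] * sv[:2]) / np.sqrt(r)[:, None]
```

Replacing `E` with the fitted values of some other (restricted) loglinear model gives the generalized decomposition of residuals described in the abstract.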
8.
The choice of constraints in correspondence analysis
A discussion of alternative constraint systems has been lacking in the literature on correspondence analysis and related techniques. This paper reiterates earlier results that an explicit choice of constraints has to be made, which can have important effects on the resulting scores. The paper also presents new results on dealing with missing data and probabilistic category assignment. I am most grateful to the following for their helpful comments: Arto Demirjian, Michael Greenacre, Michael Healy, Shizuhiko Nishisato, Roderick McDonald, and several anonymous referees.
9.
This paper discusses the compatibility of the polychotomous Rasch model with dichotomization of the response continuum. It is argued that in the case of graded responses, the response categories presented to the subject are essentially an arbitrary polychotomization of the response continuum, ranging, for example, from total rejection or disagreement to total acceptance or agreement with an item or statement. Because of this arbitrariness, the measurement outcome should be independent of the specific polychotomization applied; for example, presenting a specific multicategory response format should not affect the measurement outcome. When such is the case, the original polychotomous model is called compatible with dichotomization. A distinction is made between polychotomization or dichotomization before the fact, that is, in constructing the response format, and polychotomization or dichotomization after the fact, for example in dichotomizing existing graded response data. It is shown that, at least in the case of dichotomization after the fact, the polychotomous Rasch model is not compatible with dichotomization unless a rather special condition on the model parameters is met. Insofar as it may be argued that dichotomization before the fact is not essentially different from dichotomization after the fact, the value of the unidimensional polychotomous Rasch model is consequently questionable. The impact of our conclusion on related models is also discussed.
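The incompatibility is easy to demonstrate numerically. Under a polychotomous Rasch (partial credit) item with three categories, collapsing categories 1 and 2 after the fact yields a dichotomous item whose log-odds is not linear in the latent trait, so the collapsed item does not follow a dichotomous Rasch model. A sketch with made-up step parameters:

```python
import numpy as np

def pcm_probs(theta, deltas):
    """Category probabilities for one partial-credit (polychotomous Rasch) item."""
    # Cumulative logits: 0, theta - d1, 2*theta - d1 - d2, ...
    cum = np.concatenate([[0.0], np.cumsum(theta - np.asarray(deltas))])
    e = np.exp(cum - cum.max())       # subtract max for numerical stability
    return e / e.sum()

deltas = [-1.0, 1.5]                  # made-up step parameters
thetas = np.linspace(-2.0, 2.0, 5)    # equally spaced trait values

# Dichotomize after the fact: merge categories 1 and 2 into a single positive response
logits = [np.log((p[1] + p[2]) / p[0])
          for p in (pcm_probs(th, deltas) for th in thetas)]

# Under a dichotomous Rasch model the logit would be theta - b, so successive
# differences over an equally spaced theta grid would all be equal.
diffs = np.diff(logits)
print(np.allclose(diffs, diffs[0]))   # the differences are not constant
```

The collapsed logit works out to theta - d1 + log(1 + exp(theta - d2)), which is linear in theta only in degenerate limits of the step parameters, matching the special condition alluded to in the abstract.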
10.
A direct method in handling incomplete data in general covariance structural models is investigated. Asymptotic statistical properties of the generalized least squares method are developed. It is shown that this approach has very close relationships with the maximum likelihood approach. Iterative procedures for obtaining the generalized least squares estimates, the maximum likelihood estimates, as well as their standard error estimates are derived. Computer programs for the confirmatory factor analysis model are implemented. A longitudinal type data set is used as an example to illustrate the results. This research was supported in part by Research Grant DAD1070 from the U.S. Public Health Service. The author is indebted to anonymous reviewers for some very valuable suggestions. Computer funding is provided by the Computer Services Centre, The Chinese University of Hong Kong.