Sort order: 556 query results (search time: 15 ms)
1.
Canonical analysis of two convex polyhedral cones and applications   (Total citations: 1; self-citations: 0; citations by others: 1)
Canonical analysis of two convex polyhedral cones consists in looking for two vectors (one in each cone) whose square cosine is a maximum. This paper presents new results about the properties of the optimal solution to this problem, and also discusses in detail the convergence of an alternating least squares algorithm. The set of scalings of an ordinal variable is a convex polyhedral cone, which thus plays an important role in optimal scaling methods for the analysis of ordinal data. Monotone analysis of variance, and correspondence analysis subject to an ordinal constraint on one of the factors are both canonical analyses of a convex polyhedral cone and a subspace. Optimal multiple regression of a dependent ordinal variable on a set of independent ordinal variables is a canonical analysis of two convex polyhedral cones as long as the signs of the regression coefficients are given. We discuss these three situations and illustrate them by examples.
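A minimal numerical sketch of such an alternating scheme, using two isotonic (monotone) cones as the convex polyhedral cones and pool-adjacent-violators projection; the cones, toy data, and function names here are illustrative choices of ours, not the paper's:

```python
import numpy as np

def pava(y):
    """Pool-adjacent-violators: least squares projection of y onto the
    convex cone of nondecreasing vectors."""
    out = []
    for v in y:
        out.append([float(v), 1])            # (block mean, block size)
        while len(out) > 1 and out[-2][0] > out[-1][0]:   # pool violators
            m2, s2 = out.pop()
            m1, s1 = out.pop()
            out.append([(m1 * s1 + m2 * s2) / (s1 + s2), s1 + s2])
    return np.concatenate([[m] * s for m, s in out])

def cos2(u, v):
    """Squared cosine between two vectors."""
    return (u @ v) ** 2 / ((u @ u) * (v @ v))

rng = np.random.default_rng(0)
z = rng.normal(size=12)
perm = rng.permutation(12)

def proj_c1(x):            # cone 1: nondecreasing vectors
    return pava(x)

def proj_c2(x):            # cone 2: nondecreasing along a fixed permutation
    out = np.empty_like(x)
    out[perm] = pava(x[perm])
    return out

# Alternate projections: each half-step maximizes the cosine over one cone
# with the other vector held fixed.
u, v = proj_c1(z), proj_c2(z)
history = [cos2(u, v)]
for _ in range(50):
    u = proj_c1(v)
    v = proj_c2(u)
    history.append(cos2(u, v))
```

Since projecting onto a convex cone maximizes the cosine with the projected vector, the squared cosine is nondecreasing across iterations, which is the monotonicity the alternating least squares convergence argument relies on.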
2.
This paper discusses least squares methods for fitting a reformulation of the general Euclidean model for the external analysis of preference data. The reformulated subject weights refer to a common set of reference vectors for all subjects and hence are comparable across subjects. If the rotation of the stimulus space is fixed, the subject weight estimates in the model are uniquely determined. Weight estimates can be guaranteed nonnegative. While the reformulation is a metric model for single stimulus data, the paper briefly discusses extensions to nonmetric, pairwise, and logistic models. The reformulated model is less general than Carroll's earlier formulation. The author is grateful to Christopher J. Nachtsheim for his helpful suggestions.
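The abstract notes that weight estimates can be guaranteed nonnegative, which suggests a nonnegative least squares step. A hedged sketch of one way to do that with projected gradient descent (the toy data, dimensions, and function names are our assumptions, not the paper's algorithm):

```python
import numpy as np

def nnls_pg(X, y, n_iter=5000):
    """Nonnegative least squares, min ||Xw - y||^2 subject to w >= 0,
    via projected gradient with a 1/L step size."""
    lr = 1.0 / np.linalg.norm(X, 2) ** 2   # L = largest squared singular value
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        w = np.maximum(0.0, w - lr * (X.T @ (X @ w - y)))
    return w

# Toy external analysis: one subject's preference scores regressed on a
# common set of reference-vector predictors, with nonnegative weights.
rng = np.random.default_rng(1)
refs = rng.normal(size=(40, 3))            # 40 stimuli x 3 reference vectors
w_true = np.array([0.7, 0.0, 1.4])         # true nonnegative subject weights
prefs = refs @ w_true
w_hat = nnls_pg(refs, prefs)
```

Note that the zero weight is recovered exactly at the boundary of the constraint set, which is what makes the estimates interpretable across subjects.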
3.
Six pigeons were trained to discriminate between two intensities of white light in a symbolic matching-to-sample procedure. These stimuli were then used to signal which schedule was available on the main key in a switching-key concurrent schedule. The concurrent schedules led to a symbolic matching-to-sample phase in which the subject identified the concurrent schedule to which it last responded before a reinforcer could be obtained. The concurrent schedules were varied across conditions. Discriminability, measured during the symbolic matching-to-sample performance, was high throughout and did not differ across the two procedures. Performance in the concurrent schedules was like that typically obtained using these schedules. Delays were then arranged between completion of the concurrent schedules and presentations of the symbolic matching-to-sample phase. A series of conditions with an intervening delay of 10 s showed that both concurrent-schedule performance and symbolic matching-to-sample performance were affected by the delay in a similar way; that is, choice responding was closer to indifference.
4.
This paper suggests a method to supplant missing categorical data by reasonable replacements. These replacements will maximize the consistency of the completed data as measured by Guttman's squared correlation ratio. The text outlines a solution of the optimization problem, describes relationships with the relevant psychometric theory, and studies some properties of the method in detail. The main result is that the average correlation should be at least 0.50 before the method becomes practical. At that point, the technique gives reasonable results up to 10–15% missing data. We thank Anneke Bloemhoff of NIPG-TNO for compiling and making the Dutch Life Style Survey data available to us, and Chantal Houée and Thérèse Bardaine, IUT, Vannes, France, exchange students under the COMETT program of the EC, for computational assistance. We also thank Donald Rubin, the Editors and several anonymous reviewers for constructive suggestions.
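As a rough illustration only (this is not the authors' algorithm), the idea of choosing replacements that maximize the internal consistency of the completed data can be mimicked with a homogeneity-analysis style alternation between object scores and category quantifications; the missing-data code -1, the function name, and the toy data are all our own conventions:

```python
import numpy as np

def impute_homals(data, n_iter=25, seed=0):
    """Toy reciprocal-averaging imputation for categorical data.

    Missing entries are coded -1. The loop alternates homogeneity-style
    updates (object scores <-> category quantifications) and assigns each
    missing cell the category whose quantification is nearest the object's
    score. Observed cells are never changed."""
    data = np.array(data, dtype=int)
    n, m = data.shape
    n_cat = [data[:, j].max() + 1 for j in range(m)]
    miss = data < 0
    rng = np.random.default_rng(seed)
    for j in range(m):                        # random initial fill
        data[miss[:, j], j] = rng.integers(0, n_cat[j], miss[:, j].sum())
    x = rng.normal(size=n)                    # initial object scores
    for _ in range(n_iter):
        x = (x - x.mean()) / x.std()          # remove the trivial constant solution
        quants = []
        for j in range(m):
            q = np.array([x[data[:, j] == c].mean() if (data[:, j] == c).any()
                          else 0.0 for c in range(n_cat[j])])
            quants.append(q)
        for j in range(m):                    # reassign only the missing cells
            for i in np.where(miss[:, j])[0]:
                data[i, j] = int(np.argmin(np.abs(quants[j] - x[i])))
        x = np.mean([quants[j][data[:, j]] for j in range(m)], axis=0)
    return data

# Three parallel variables; one missing cell in the last column.
raw = np.array([[0, 0, 0], [0, 0, 0], [1, 1, 1],
                [1, 1, 1], [2, 2, -1], [2, 2, 2]])
completed = impute_homals(raw)
```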
5.
The paper derives sufficient conditions for the consistency and asymptotic normality of the least squares estimator of a trilinear decomposition model for multiway data analysis.
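The trilinear decomposition in question is of the CANDECOMP/PARAFAC form. A compact alternating least squares sketch of the estimator (dimensions, seeds, and function names are illustrative assumptions of ours):

```python
import numpy as np

def khatri_rao(U, V):
    """Columnwise Kronecker product: row (u, v) equals U[u] * V[v]."""
    r = U.shape[1]
    return (U[:, None, :] * V[None, :, :]).reshape(-1, r)

def parafac_als(X, r, n_iter=300, seed=0):
    """Least squares trilinear decomposition
    X[i, j, k] ~ sum_f A[i, f] * B[j, f] * C[k, f],
    fitted by alternating least squares on the three unfoldings."""
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    A, B, C = (rng.normal(size=(d, r)) for d in (I, J, K))
    for _ in range(n_iter):
        A = np.linalg.lstsq(khatri_rao(B, C),
                            X.reshape(I, -1).T, rcond=None)[0].T
        B = np.linalg.lstsq(khatri_rao(A, C),
                            X.transpose(1, 0, 2).reshape(J, -1).T, rcond=None)[0].T
        C = np.linalg.lstsq(khatri_rao(A, B),
                            X.transpose(2, 0, 1).reshape(K, -1).T, rcond=None)[0].T
    return A, B, C

# Noiseless rank-2 data: the least squares fit should be essentially exact.
rng = np.random.default_rng(7)
A0, B0, C0 = rng.normal(size=(4, 2)), rng.normal(size=(3, 2)), rng.normal(size=(5, 2))
X = np.einsum('if,jf,kf->ijk', A0, B0, C0)
A, B, C = parafac_als(X, 2)
X_hat = np.einsum('if,jf,kf->ijk', A, B, C)
rel_err = np.linalg.norm(X - X_hat) / np.linalg.norm(X)
```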
6.
An Extended Two-Way Euclidean Multidimensional Scaling (MDS) model which assumes both common and specific dimensions is described and contrasted with the standard (Two-Way) MDS model. In this Extended Two-Way Euclidean model the n stimuli (or other objects) are assumed to be characterized by coordinates on R common dimensions. In addition each stimulus is assumed to have a dimension (or dimensions) specific to it alone. The overall distance between object i and object j then is defined as the square root of the ordinary squared Euclidean distance plus terms denoting the specificity of each object. The specificity, s_i, can be thought of as the sum of squares of coordinates on those dimensions specific to object i, all of which have nonzero coordinates only for object i. (In practice, we may think of there being just one such specific dimension for each object, as this situation is mathematically indistinguishable from the case in which there are more than one.) We further assume that δ_ij = F(d_ij) + e_ij, where δ_ij is the proximity value (e.g., similarity or dissimilarity) of objects i and j, d_ij is the extended Euclidean distance defined above, while e_ij is an error term assumed i.i.d. N(0, σ²). F is assumed either a linear function (in the metric case) or a monotone spline of specified form (in the quasi-nonmetric case). A numerical procedure alternating a modified Newton-Raphson algorithm with an algorithm for fitting an optimal monotone spline (or linear function) is used to secure maximum likelihood estimates of the parameters. Likelihood ratio statistics can be used to test hypotheses about the number of common dimensions, and/or the existence of specific (in addition to R common) dimensions. This approach is illustrated with applications to both artificial data and real data on judged similarity of nations.
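The extended distance itself is easy to state in code. A small sketch, with s[i] the specificity of object i (the function name and toy values are ours):

```python
import numpy as np

def extended_distance(X, s):
    """Extended two-way Euclidean distance: the squared distance between
    objects i and j is the ordinary squared distance on the R common
    dimensions plus the specificities s[i] + s[j]."""
    diff = X[:, None, :] - X[None, :, :]
    d2 = (diff ** 2).sum(-1) + s[:, None] + s[None, :]
    np.fill_diagonal(d2, 0.0)     # self-distance stays zero by convention
    return np.sqrt(d2)

X = np.array([[0.0, 0.0], [3.0, 4.0]])   # coordinates on R = 2 common dimensions
s = np.array([1.0, 2.0])                 # specificities
D = extended_distance(X, s)
```

Here the common-dimension distance between the two objects is 5, so the extended distance is sqrt(25 + 1 + 2).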
7.
In two experiments, rats were trained to deposit ball bearings down a hole in the floor, using an algorithmic version of shaping. The experimenter coded responses expected to be precursors of the target response, ball bearing deposit; a computer program reinforced these responses, or not, according to an algorithm that mimicked the processes thought to occur in conventional shaping. In the first experiment, 8 of 10 rats were successfully shaped; in the second, 5 of 5 were successfully shaped, and the median number of sessions required was the same as for a control group trained using conventional shaping. In both experiments, “misbehavior,” that is, excessive handling and chewing of the ball bearings, was observed, and when the algorithmic shaping procedure was used, misbehavior could be shown to occur in spite of reduced reinforcement for the responses involved.
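One standard formalization of algorithmic shaping is the percentile schedule, in which a response is reinforced when it beats a quantile of recent responses. The simulation below is a toy of that general idea, not the authors' coding scheme; the drift size and all parameter values are our assumptions:

```python
import random

def percentile_shaping(n_trials=300, k=10, pct=0.5, seed=1):
    """Toy percentile-schedule shaping: a response is reinforced when it
    exceeds the pct-quantile of the last k responses, and reinforcement
    nudges the simulated response distribution toward the target."""
    rng = random.Random(seed)
    mean = 0.0                       # current center of the response distribution
    recent = []
    for _ in range(n_trials):
        resp = rng.gauss(mean, 1.0)
        if len(recent) == k:
            criterion = sorted(recent)[int(pct * (k - 1))]
            if resp > criterion:     # reinforce only better-than-recent responses
                mean += 0.1
        recent = (recent + [resp])[-k:]
    return mean

final_mean = percentile_shaping()
```

Because the criterion tracks the animal's own recent behavior, the reinforcement rate stays roughly constant while the behavior drifts toward the target, mirroring the moving criterion of conventional shaping.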
8.
Correspondence analysis used complementary to loglinear analysis   (Total citations: 1; self-citations: 0; citations by others: 1)
Loglinear analysis and correspondence analysis provide us with two different methods for the decomposition of contingency tables. In this paper we will show that there are cases in which these two techniques can be used to complement each other. More specifically, we will show that often correspondence analysis can be viewed as providing a decomposition of the difference between two matrices, each following a specific loglinear model. Therefore, in these cases the correspondence analysis solution can be interpreted in terms of the difference between these loglinear models. A generalization of correspondence analysis, recently proposed by Escofier, will also be discussed. With this decomposition, which includes classical correspondence analysis as a special case, it is possible to use correspondence analysis to complement loglinear analysis in more instances than those described for classical correspondence analysis. In this context correspondence analysis is used for the decomposition of the residuals of specific restricted loglinear models.
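For the classical special case, the "difference between two matrices" view amounts to an SVD of the standardized residuals from the loglinear independence model. A small sketch with an invented 3×3 table:

```python
import numpy as np

# Correspondence analysis as a decomposition of the difference between the
# observed table and its expectation under the loglinear independence model.
N = np.array([[20.0, 10.0,  5.0],
              [ 8.0, 16.0,  9.0],
              [ 4.0,  6.0, 22.0]])
P = N / N.sum()                              # correspondence matrix
E = np.outer(P.sum(axis=1), P.sum(axis=0))   # independence (loglinear) fit
S = (P - E) / np.sqrt(E)                     # standardized residual matrix
sv = np.linalg.svd(S, compute_uv=False)      # CA singular values
total_inertia = (sv ** 2).sum()              # equals Pearson chi-square / n
```

The total inertia decomposed by the CA dimensions equals the Pearson chi-square statistic of the independence model divided by the sample size, which is exactly the sense in which CA decomposes a loglinear model's residuals.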
9.
Three-way metric unfolding via alternating weighted least squares   (Total citations: 6; self-citations: 3; citations by others: 3)
Three-way unfolding was developed by DeSarbo (1978) and reported in DeSarbo and Carroll (1980, 1981) as a new model to accommodate the analysis of two-mode three-way data (e.g., nonsymmetric proximities for stimulus objects collected over time) and three-mode, three-way data (e.g., subjects rendering preference judgments for various stimuli in different usage occasions or situations). This paper presents a revised objective function and new algorithm which attempt to prevent the common type of degenerate solutions encountered in typical unfolding analysis. We begin with an introduction of the problem and a review of three-way unfolding. The three-way unfolding model, weighted objective function, and new algorithm are presented. Monte Carlo work via a fractional factorial experimental design is described investigating the effect of several data and model factors on overall algorithm performance. Finally, three applications of the methodology are reported illustrating the flexibility and robustness of the procedure. We wish to thank the editor and reviewers for their insightful comments.
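One substep of an alternating least squares scheme for a weighted three-way unfolding model is linear: with stimulus coordinates and ideal points held fixed, each source's dimension weights can be recovered by ordinary least squares on squared distances. A sketch under that assumed model form (the function name, dimensions, and data are our own):

```python
import numpy as np

def fit_weights(sqd, X, Y):
    """One ALS substep of a weighted three-way unfolding model.

    Given stimulus coordinates X (n, r) and ideal points Y (m, r), recover
    each source's dimension weights from squared distances sqd (n, m, K),
    since d2[i, j, k] = sum_r w[k, r] * (X[i, r] - Y[j, r])**2 is linear
    in w[k]."""
    F = (X[:, None, :] - Y[None, :, :]) ** 2          # (n, m, r) features
    Fmat = F.reshape(-1, F.shape[-1])
    K = sqd.shape[2]
    return np.stack([np.linalg.lstsq(Fmat, sqd[:, :, k].ravel(), rcond=None)[0]
                     for k in range(K)])

# Exact recovery check on simulated weights.
rng = np.random.default_rng(3)
X = rng.normal(size=(5, 2))
Y = rng.normal(size=(4, 2))
W_true = rng.uniform(0.5, 2.0, size=(3, 2))           # 3 sources, 2 dimensions
F = (X[:, None, :] - Y[None, :, :]) ** 2
sqd = np.einsum('nmr,kr->nmk', F, W_true)
W_hat = fit_weights(sqd, X, Y)
```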
10.
The properties of nonmetric multidimensional scaling (NMDS) are explored by specifying statistical models, proving statistical consistency, and developing hypothesis testing procedures. Statistical models with errors in the dependent and independent variables are described for quantitative and qualitative data. For these models, statistical consistency often depends crucially upon how error enters the model and how data are collected and summarized (e.g., by means, medians, or rank statistics). A maximum likelihood estimator for NMDS is developed, and its relationship to the standard Shepard-Kruskal estimation method is described. This maximum likelihood framework is used to develop a method for testing the overall fit of the model.
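The maximum likelihood framing with additive normal error makes fit comparison across dimensionalities concrete. The sketch below uses classical (Torgerson) scaling as a stand-in estimator rather than the paper's ML estimator, with all data simulated and all names our own:

```python
import numpy as np

def classical_mds(D, r):
    """Torgerson double-centering, used here only as a stand-in estimator."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:r]
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

def gaussian_loglik(D_obs, X):
    """Log-likelihood of dissimilarities under delta_ij = d_ij(X) + e_ij,
    with e_ij i.i.d. N(0, sigma^2) and sigma^2 at its MLE."""
    d = np.sqrt(((X[:, None] - X[None, :]) ** 2).sum(-1))
    iu = np.triu_indices_from(D_obs, k=1)
    resid = D_obs[iu] - d[iu]
    s2 = (resid ** 2).mean()
    n = resid.size
    return -0.5 * n * (np.log(2 * np.pi * s2) + 1.0)

# Dissimilarities generated from a 2-D configuration plus noise: the 2-D
# model should attain a higher likelihood than the 1-D model.
rng = np.random.default_rng(5)
X0 = rng.normal(size=(12, 2))
D_true = np.sqrt(((X0[:, None] - X0[None, :]) ** 2).sum(-1))
noise = 0.05 * rng.normal(size=D_true.shape)
D_obs = np.abs(D_true + (noise + noise.T) / 2)
np.fill_diagonal(D_obs, 0.0)
ll1 = gaussian_loglik(D_obs, classical_mds(D_obs, 1))
ll2 = gaussian_loglik(D_obs, classical_mds(D_obs, 2))
```

With the error model explicit, likelihood comparisons of this kind are what support testing the overall fit and choosing the dimensionality.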
Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号