41.
We assume that a judge's task is to categorize each of N subjects into one of r known classes. The design of primary interest is employed if the judge is presented with s groups, each containing r subjects, such that each group of size r consists of exactly one subject of each of the r types. The probability distribution for the total number of correct choices is developed and used to test the null hypothesis that the judge is guessing against the alternative that he or she is operating at a better than chance level. The power of the procedure is shown to be superior to two other procedures which appear in the literature. The authors are grateful for the suggestions of the referees and for computer funding provided by the Northeast Regional Data Center at the University of Florida.
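Under the guessing null, the number of correct choices within one group of r is the number of fixed points of a random permutation (the classical matching distribution), and the total over s independent groups is its s-fold convolution. A minimal illustrative sketch of that computation follows; it is not the authors' code, and the function names and the upper-tail test are assumptions for illustration only.

```python
from math import comb, factorial
import numpy as np

def fixed_point_pmf(r):
    # P(k correct) in one group of r under guessing:
    # choose which k are correct, derange the remaining r - k
    derange = [1, 0]
    for m in range(2, r + 1):
        derange.append((m - 1) * (derange[m - 1] + derange[m - 2]))
    return np.array([comb(r, k) * derange[r - k] / factorial(r)
                     for k in range(r + 1)])

def total_correct_pmf(r, s):
    # distribution of the total number of correct choices over s independent groups
    pmf = np.array([1.0])
    group = fixed_point_pmf(r)
    for _ in range(s):
        pmf = np.convolve(pmf, group)
    return pmf  # index = total correct, 0 .. r * s

def p_value(total_correct, r, s):
    # upper-tail probability of doing at least this well by guessing alone
    return total_correct_pmf(r, s)[total_correct:].sum()
```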
42.
The vast majority of existing multidimensional scaling (MDS) procedures devised for the analysis of paired comparison preference/choice judgments are typically based on either scalar product (i.e., vector) or unfolding (i.e., ideal-point) models. Such methods tend to ignore many of the essential components of microeconomic theory including convex indifference curves, constrained utility maximization, demand functions, et cetera. This paper presents a new stochastic MDS procedure called MICROSCALE that attempts to operationalize many of these traditional microeconomic concepts. First, we briefly review several existing MDS models that operate on paired comparisons data, noting the particular nature of the utility functions implied by each class of models. These utility assumptions are then directly contrasted to those of microeconomic theory. The new maximum likelihood based procedure, MICROSCALE, is presented, as well as the technical details of the estimation procedure. The results of a Monte Carlo analysis investigating the performance of the algorithm as a number of model, data, and error factors are experimentally manipulated are provided. Finally, an illustration in consumer psychology concerning a convenience sample of thirty consumers providing paired comparisons judgments for some fourteen brands of over-the-counter analgesics is discussed.
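As context for the contrast drawn in this abstract, the following is a minimal sketch (not MICROSCALE itself) of the two utility forms most existing paired-comparison MDS models assume; the logistic choice rule and all names here are illustrative assumptions rather than anything taken from the paper.

```python
import numpy as np

def vector_utility(X, w):
    # scalar-product (vector) model: utility increases without bound along direction w
    return X @ w

def ideal_point_utility(X, z):
    # unfolding (ideal-point) model: utility peaks at the ideal point z and falls off with distance
    return -np.sum((X - z) ** 2, axis=1)

def choice_prob(u_a, u_b, scale=1.0):
    # hypothetical logistic rule for the probability that brand a is preferred to brand b
    return 1.0 / (1.0 + np.exp(-scale * (u_a - u_b)))
```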
43.
Restricted multidimensional scaling models for asymmetric proximities
Restricted multidimensional scaling models [Bentler & Weeks, 1978], allowing constraints on parameters, are extended to the case of asymmetric data. Separate functions are used to model the symmetric and antisymmetric parts of the data. The approach is also extended to the case in which data are presumed to be linearly related to squared distances. Examples of several models are provided, using journal citation data. Possible extensions of the models are considered. This research was supported in part by USPHS Grant 0A01070, P. M. Bentler, principal investigator, and NIMH Grant MH-24819, E. J. Anthony and J. Worland, principal investigators. The authors wish to thank E. W. Holman and several anonymous reviewers for their valuable suggestions concerning this research.
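The split into symmetric and antisymmetric parts referred to here is the standard matrix decomposition; a minimal sketch of that decomposition (not the authors' fitting code) is:

```python
import numpy as np

def decompose(P):
    # split an asymmetric proximity matrix into symmetric and skew-symmetric parts
    S = (P + P.T) / 2.0   # symmetric part
    A = (P - P.T) / 2.0   # antisymmetric (skew-symmetric) part
    return S, A           # P == S + A exactly
```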
44.
When measuring the same variables on different occasions, two procedures for canonical analysis with stationary compositing weights are developed. The first, SUMCOV, maximizes the sum of the covariances of the canonical variates subject to norming constraints. The second, COLLIN, maximizes the largest root of the covariances of the canonical variates subject to norming constraints. A characterization theorem establishes a model building approach. Both methods are extended to allow for Cohort Sequential Designs. Finally a numerical illustration utilizing Nesselroade and Baltes data is presented. The authors wish to thank John Nesselroade for permitting us to use the data whose analysis we present.
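One plausible reading of the SUMCOV criterion (my own assumption, not the paper's algorithm): with one stationary weight vector per variable set and unit-norm constraints, the summed covariance of the composites across occasions is a bilinear form in the weights, maximized by the leading singular vectors of the summed cross-occasion covariance matrix. A sketch under that reading:

```python
import numpy as np

def sumcov_weights(cross_covs):
    # cross_covs: list of occasion-specific cross-covariance matrices between the two sets.
    # sum_t cov(a'X_t, b'Y_t) = a' (sum_t C_t) b, so under unit-norm constraints the
    # maximizing weights are the leading singular vectors of the summed matrix.
    M = sum(cross_covs)
    U, s, Vt = np.linalg.svd(M)
    return U[:, 0], Vt[0, :]   # stationary weights for the first and second variable set
```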
45.
Percentage agreement measures of interobserver agreement or "reliability" have traditionally been used to summarize observer agreement from studies using interval recording, time-sampling, and trial-scoring data collection procedures. Recent articles disagree on whether to continue using these percentage agreement measures, on which ones to use, and on what to do about chance agreements if their use is continued. Much of the disagreement derives from the need to be reasonably certain we do not accept as evidence of true interobserver agreement those agreement levels which are substantially probable as a result of chance observer agreement. The various percentage agreement measures are shown to be adequate to this task, but easier ways are discussed. Tables are given to permit checking to see if obtained disagreements are unlikely due to chance. Particularly important is the discovery of a simple rule that, when met, makes the tables unnecessary. If reliability checks using 50 or more observation occasions produce 10% or fewer disagreements, for behavior rates from 10% through 90%, the agreement achieved is quite improbably the result of chance agreement.
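One way to probe the spirit of that rule numerically, under my own simplified chance model rather than the paper's tables: if two observers scored each interval independently at the same occurrence rate p, a chance disagreement would occur with probability 2p(1 - p), and the probability of seeing as few disagreements as observed is a binomial tail.

```python
from math import comb

def chance_disagreement_prob(rate):
    # two independent observers each scoring "occurrence" with probability `rate`
    # disagree on a given interval with probability 2 * rate * (1 - rate)
    return 2.0 * rate * (1.0 - rate)

def p_few_disagreements_by_chance(n, d, rate):
    # P(at most d disagreements in n intervals) if all agreement were chance agreement
    q = chance_disagreement_prob(rate)
    return sum(comb(n, k) * q**k * (1 - q)**(n - k) for k in range(d + 1))

# e.g., 50 intervals, 5 disagreements (10%), behavior rate 50%:
# p_few_disagreements_by_chance(50, 5, 0.5) is on the order of 1e-9
```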
46.
Synthetic data are used to examine how well axiomatic and numerical conjoint measurement methods, individually and comparatively, recover simple polynomial generators in three dimensions. The study illustrates extensions of numerical conjoint measurement (NCM) to identify and model distributive and dual-distributive, in addition to the usual additive, data structures. It was found that while minimum STRESS was the criterion of fit, another statistic, predictive capability, provided a better diagnosis of the known generating model. That NCM methods were able to better identify generating models conflicts with Krantz and Tversky's assertion that, in general, the direct axiom tests provide a more powerful diagnostic test between alternative composition rules than does evaluation of numerical correspondence. For all methods, dual-distributive models are most difficult to recover, while consistent with past studies, the additive model is the most robust of the fitted models. Douglas Emery is now at the Krannert Graduate School of Management, Purdue University, West Lafayette, IN, on leave from the University of Calgary.
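The three composition rules named here are commonly written as x + y + z (additive), (x + y)z (distributive), and xy + z (dual-distributive). A minimal sketch of generating synthetic data from them follows; the factor levels, noise model, and function names are my own illustrative assumptions, not the study's design.

```python
import numpy as np

def additive(x, y, z):
    return x + y + z

def distributive(x, y, z):
    return (x + y) * z

def dual_distributive(x, y, z):
    return x * y + z

def synthetic_data(rule, levels, noise_sd=0.0, seed=0):
    # full three-way factorial of the given levels, combined by the chosen rule plus noise
    rng = np.random.default_rng(seed)
    x, y, z = np.meshgrid(levels, levels, levels, indexing="ij")
    return rule(x, y, z) + rng.normal(0.0, noise_sd, size=x.shape)
```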
47.
A two-step weighted least squares estimator for multiple factor analysis of dichotomized variables is discussed. The estimator is based on the first and second order joint probabilities. Asymptotic standard errors and a model test are obtained by applying the Jackknife procedure.
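The first- and second-order joint probabilities the estimator builds on can be tabulated directly from the dichotomized data; a minimal sketch of that tabulation (not the estimator itself, which additionally requires the weight matrix and the fitting step) is:

```python
import numpy as np

def joint_probabilities(Y):
    # Y: n x p matrix of 0/1 responses
    n, p = Y.shape
    first = Y.mean(axis=0)        # first-order: P(y_j = 1) for each variable
    second = (Y.T @ Y) / n        # second-order: P(y_j = 1, y_k = 1) for each pair
    return first, second
```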
50.
This paper develops a novel psycholinguistic parser and tests it against experimental and corpus reading data. The parser builds on the recent research into memory structures, which argues that memory retrieval is content-addressable and cue-based. It is shown that the theory of cue-based memory systems can be combined with transition-based parsing to produce a parser that, when combined with the cognitive architecture ACT-R, can model reading and predict online behavioral measures (reading times and regressions). The parser's modeling capacities are tested against self-paced reading experimental data (Grodner & Gibson, 2005), eye-tracking experimental data (Staub, 2011), and a self-paced reading corpus (Futrell et al., 2018).
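The ACT-R retrieval component referred to here is usually summarized by the standard activation and latency equations, A_i = B_i + Σ_j W_j S_ji and T_i = F·e^(−A_i). A minimal sketch of those two equations follows; the parameter values are illustrative defaults, not values taken from the paper.

```python
import math

def activation(base_level, cue_weights, strengths):
    # ACT-R style total activation: base-level activation plus
    # cue-weighted associative strengths from the retrieval cues
    return base_level + sum(w * s for w, s in zip(cue_weights, strengths))

def retrieval_latency(act, latency_factor=0.14):
    # retrieval time scales exponentially with negative activation;
    # the latency factor here is an illustrative value
    return latency_factor * math.exp(-act)
```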