961.
The method of finding the maximum likelihood estimates of the parameters in a multivariate normal model with some of the component variables observable only in polytomous form is developed. The main stratagem used is a reparameterization which converts the corresponding log likelihood function to an easily handled one. The maximum likelihood estimates are found by a Fletcher-Powell algorithm, and their standard error estimates are obtained from the information matrix. When the dimension of the random vector observable only in polytomous form is large, obtaining the maximum likelihood estimates is computationally rather expensive. Therefore, a more efficient method, the partition maximum likelihood method, is proposed. These estimation methods are demonstrated with real and simulated data, and are compared by means of a simulation study.
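As a rough illustration of the estimation strategy this abstract describes (a quasi-Newton search on a reparameterized log likelihood, with standard errors from the information matrix), the sketch below fits a plain univariate normal model rather than the paper's polytomous multivariate model. The simulated data, starting values, and the use of SciPy's BFGS in place of the Fletcher-Powell algorithm are all assumptions for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Toy example (not the paper's model): ML estimation of a normal mean and
# standard deviation. The reparameterization sigma = exp(s) makes the
# search space unconstrained, echoing the paper's stratagem of transforming
# the log likelihood into an easily handled form.
rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=1.5, size=5000)

def neg_log_lik(theta):
    mu, s = theta                  # s = log(sigma), unconstrained
    sigma = np.exp(s)
    # negative log likelihood up to an additive constant
    return 0.5 * np.sum(((data - mu) / sigma) ** 2) + data.size * s

res = minimize(neg_log_lik, x0=np.array([0.0, 0.0]), method="BFGS")
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
# Standard errors from the inverse Hessian (observed information matrix).
se = np.sqrt(np.diag(res.hess_inv))
```

The same pattern (reparameterize, run a quasi-Newton optimizer, read standard errors off the inverse information matrix) scales to richer likelihoods, at the computational cost the abstract notes.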
962.
In this note, we describe the iterative procedure introduced earlier by Goodman to calculate the maximum likelihood estimates of the parameters in latent structure analysis, and we provide here a simple and direct proof of the fact that the parameter estimates obtained with the iterative procedure cannot lie outside the allowed interval. Formann recently stated that Goodman's algorithm can yield parameter estimates that lie outside the allowed interval, and we prove in the present note that Formann's contention is incorrect. This research was supported in part by Research Contract No. NSF SOC 76-80389 from the Division of the Social Sciences of the National Science Foundation. The author is indebted to C. C. Clogg for helpful comments and for the numerical results reported here (see, e.g., Table 1).
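Goodman's procedure is an EM-type iteration for latent class models. The hypothetical two-class, four-item sketch below (not Goodman's original formulation; the data and starting values are made up) illustrates why the estimates cannot leave the allowed interval: every M-step update is a weighted average of quantities lying in [0, 1].

```python
import numpy as np

# Minimal EM iteration for a two-class latent class model with binary items.
rng = np.random.default_rng(1)
n, J = 400, 4
true_class = rng.random(n) < 0.6
p_true = np.where(true_class[:, None], 0.8, 0.2)
X = (rng.random((n, J)) < p_true).astype(float)

pi = np.array([0.5, 0.5])               # latent class proportions
p = np.array([[0.6] * J, [0.4] * J])    # item-response probabilities

for _ in range(200):
    # E-step: posterior class-membership probabilities for each respondent.
    lik = pi * np.prod(p ** X[:, None, :] * (1 - p) ** (1 - X[:, None, :]), axis=2)
    z = lik / lik.sum(axis=1, keepdims=True)
    # M-step: weighted averages of 0/1 data, hence always inside [0, 1].
    pi = z.mean(axis=0)
    p = (z.T @ X) / z.sum(axis=0)[:, None]
```

Because each update averages values already in the unit interval, no iteration can produce an estimate outside it, which is the property the note proves formally.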
963.
When comparing examinees to a control, the examiner usually does not know the probability of correctly classifying the examinees based on the number of items used and the number of examinees tested. Using ranking and selection techniques, a general framework is described for deriving a lower bound on this probability. We illustrate how these techniques can be applied to the binomial error model. New exact results are given for normal populations having unknown and unequal variances.The work upon which this publication is based was performed pursuant to a grant [Grant No. NIE-G-76-0083] with the National Institute of Education, Department of Health, Education and Welfare. Points of view or opinions stated do not necessarily represent official NIE position or policy.
964.
Goodman contributed to the theory of scaling by including a category of intrinsically unscalable respondents in addition to the usual scale-type respondents. However, his formulation permits only error-free responses by respondents from the scale types. This paper presents new scaling models which have the properties that: (1) respondents in the scale types are subject to response errors; (2) a test of significance can be constructed to assist in deciding on the necessity for including an intrinsically unscalable class in the model; and (3) when an intrinsically unscalable class is not needed to explain the data, the model reduces to a probabilistic, rather than to a deterministic, form. Three data sets are analyzed with the new models and are used to illustrate stages of hypothesis testing.
965.
A major research direction for ability measurement has been to identify the information processes that are involved in solving test items through mathematical modeling of item difficulty. However, this research has had limited impact on ability measurement, since person parameters are not included in the process models. The current paper presents some multicomponent latent trait models for reproducing test performance from both item and person parameters on processing components. Components are identified from item subtasks, in which performance is a logistic function (i.e., Rasch model) of person and item parameters, and then are combined according to a mathematical model of processing on the composite item. The author would like to thank David Thissen for his invaluable insights concerning this model and an anonymous reviewer for his suggestion about the sample space for the model. This research was partially supported by National Institute of Education grant number NIE-6-7-0156 to Susan E. Whitely, principal investigator. However, the opinions expressed herein do not necessarily reflect the position or policy of the National Institute of Education, and no official endorsement by the National Institute of Education should be inferred. Part of this paper was presented at the annual meeting of the Psychometric Society, Monterey, California, June 1979.
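One common form of the multicomponent idea is to model the probability of solving the composite item as the product of Rasch-model probabilities of succeeding on each processing component. The sketch below is a hypothetical illustration of that form; the person abilities `thetas` and component difficulties `bs` are made-up values, not estimates from the paper.

```python
import math

def rasch(theta, b):
    """Rasch model: P(success) is a logistic function of theta - b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def composite_prob(thetas, bs):
    """Composite-item success requires success on every component."""
    p = 1.0
    for theta, b in zip(thetas, bs):
        p *= rasch(theta, b)
    return p

# A person with component abilities 1.0 and 0.5 facing component
# difficulties 0.0 and -0.5 (illustrative numbers only).
p = composite_prob(thetas=[1.0, 0.5], bs=[0.0, -0.5])
```

Because person parameters enter each component, the model ties processing analysis directly to ability measurement, which is the gap the abstract identifies.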
966.
This paper demonstrates the feasibility of using the penalty function method to estimate parameters that are subject to a set of functional constraints in covariance structure analysis. Both inequality and equality constraints are studied. The approaches of maximum likelihood and generalized least squares estimation are considered. A modified scoring algorithm and a modified Gauss-Newton algorithm are implemented to produce the appropriate constrained estimates. The methodology is illustrated by its applications to Heywood cases in confirmatory factor analysis, the quasi-Wiener simplex model, and multitrait-multimethod matrix analysis. The author is indebted to several anonymous reviewers for creative suggestions for improvement of this paper. Computer funding is provided by the Computer Services Centre, The Chinese University of Hong Kong.
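The penalty function method can be illustrated on a generic toy problem (not the paper's covariance structure model): an inequality constraint such as a non-negative unique variance, the remedy for a Heywood case, is enforced by adding a term that penalizes violations ever more heavily as the penalty weight grows. The objective, constraint, and penalty schedule below are all assumptions for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def objective(x):
    # Unconstrained minimum at (1, -2); the constraint x[1] >= 0 is active.
    return (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2

def penalized(x, mu):
    # Quadratic penalty on violation of x[1] >= 0 (e.g., a variance).
    violation = max(0.0, -x[1])
    return objective(x) + mu * violation ** 2

x = np.array([0.0, 0.0])
for mu in [1.0, 10.0, 100.0, 1000.0]:
    # Warm-start each solve from the previous solution as mu increases.
    x = minimize(lambda v: penalized(v, mu), x, method="BFGS").x
```

As `mu` increases, the solution is driven toward the constrained optimum (1, 0), mirroring how the paper's modified scoring and Gauss-Newton algorithms produce constrained estimates.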
967.
Hereditary nonpolyposis colorectal cancer (HNPCC) is characterized by a susceptibility to colorectal and extra-colonic cancers. Several guidelines exist for the identification of families suspected of having HNPCC; however, these guidelines lack adequate sensitivity and specificity. In an attempt to improve accuracy in detecting individuals with HNPCC, the Wijnen pre-test probability model (1998) and the Myriad Genetics Laboratory prevalence table (2004) were developed. Here we evaluate how well the Wijnen model and the Myriad table predict the presence of a mutation in individuals undergoing genetic testing for HNPCC. Forty-nine patients who had undergone genetic testing for germline mutations in hMLH1 and/or hMSH2 were included in our analysis. Our results revealed that the revised Bethesda guidelines had the highest sensitivity for germline mutations (94.4%), but low specificity (12.9%). Using a 10.0% mutation probability threshold, the Wijnen model and the Myriad table had sensitivities of 55.6% and 60.0%, respectively, and specificities of 54.8% and 23.8%, respectively. Both were poor predictors of mutation prevalence, as shown by the areas under their corresponding receiver operating characteristic curves (0.616 and 0.400, respectively). The results of this study demonstrate that neither the Wijnen model nor the Myriad table is sensitive or specific enough to serve as the sole indication for offering genetic testing for HNPCC.
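The quantities reported in this abstract can be computed from predicted probabilities and mutation-status labels as sketched below. The predictions and labels here are made up for illustration, not the study's data; the AUC uses the rank (Mann-Whitney) formulation.

```python
def sensitivity_specificity(probs, labels, threshold):
    """Sensitivity and specificity at a given probability threshold."""
    tp = sum(p >= threshold and y for p, y in zip(probs, labels))
    fn = sum(p < threshold and y for p, y in zip(probs, labels))
    tn = sum(p < threshold and not y for p, y in zip(probs, labels))
    fp = sum(p >= threshold and not y for p, y in zip(probs, labels))
    return tp / (tp + fn), tn / (tn + fp)

def auc(probs, labels):
    """Area under the ROC curve via the rank (Mann-Whitney) statistic."""
    pos = [p for p, y in zip(probs, labels) if y]
    neg = [p for p, y in zip(probs, labels) if not y]
    pairs = [(a, b) for a in pos for b in neg]
    wins = sum(1.0 if a > b else 0.5 if a == b else 0.0 for a, b in pairs)
    return wins / len(pairs)

# Hypothetical predicted mutation probabilities and true mutation status.
probs = [0.05, 0.20, 0.40, 0.70, 0.90, 0.15]
labels = [0, 1, 1, 0, 1, 0]
sens, spec = sensitivity_specificity(probs, labels, threshold=0.10)
```

An AUC near 0.5 (or below, like the Myriad table's 0.400) indicates a model no better than chance at ranking mutation carriers above non-carriers.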
968.
It is common practice to compare the computational power of different models of computation. For example, the recursive functions are strictly more powerful than the primitive recursive functions, because the latter are a proper subset of the former (which includes Ackermann's function). Side by side with this "containment" method of measuring power, it is also standard to base comparisons on "simulation". For example, one says that the (untyped) lambda calculus is as powerful, computationally speaking, as the partial recursive functions, because the lambda calculus can simulate all partial recursive functions by encoding the natural numbers as Church numerals. The problem is that unbridled use of these two distinct ways of comparing power allows one to show that some computational models (sets of partial functions) are strictly stronger than themselves! We argue that a better definition is that model A is strictly stronger than B if A can simulate B via some encoding, whereas B cannot simulate A under any encoding. We show that with this definition, too, the recursive functions are strictly stronger than the primitive recursive. We also prove that the recursive functions, partial recursive functions, and Turing machines are "complete", in the sense that no injective encoding can make them equivalent to any "hypercomputational" model.
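The Church-numeral encoding mentioned above can be sketched directly: the natural number n is represented as the function that applies its argument n times, so arithmetic on naturals is simulated by pure function application. Python lambdas stand in here for untyped lambda calculus terms.

```python
# Church numerals: n is the function that applies f to x exactly n times.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
add = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    """Decode a Church numeral by applying 'increment' to 0 n times."""
    return n(lambda k: k + 1)(0)

two = succ(succ(zero))
three = succ(two)
five = add(two)(three)
```

It is exactly this kind of encoding that the "simulation" comparison relies on, and whose injectivity the completeness result above constrains.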
969.
We describe a way of modeling high-dimensional data vectors by using an unsupervised, nonlinear, multilayer neural network in which the activity of each neuron-like unit makes an additive contribution to a global energy score that indicates how surprised the network is by the data vector. The connection weights that determine how the activity of each unit depends on the activities in earlier layers are learned by minimizing the energy assigned to data vectors that are actually observed and maximizing the energy assigned to "confabulations" that are generated by perturbing an observed data vector in a direction that decreases its energy under the current model.
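The learning signal described here can be caricatured with a deliberately tiny model: a single quadratic energy function in place of the paper's multilayer nonlinear network. The energy form, step sizes, and data vector below are all assumptions for illustration; only the contrastive structure (lower the energy of observed data, raise the energy of a "confabulation" made by perturbing the data downhill in energy) follows the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(3,)) * 0.1          # toy "connection weights"

def energy(x, W):
    # A simple quadratic energy; the paper uses additive contributions
    # from neuron-like units in a multilayer network.
    return float(np.dot(W, x) ** 2)

def energy_grad_x(x, W):
    return 2.0 * np.dot(W, x) * W

x = np.array([1.0, -0.5, 0.3])           # one "observed" data vector
e0 = energy(x, W)
for _ in range(100):
    # Confabulation: perturb the data in a direction that decreases energy.
    x_conf = x - 0.1 * energy_grad_x(x, W)
    # Contrastive update: descend on the data's energy, ascend on the
    # confabulation's energy.
    grad_data = 2.0 * np.dot(W, x) * x
    grad_conf = 2.0 * np.dot(W, x_conf) * x_conf
    W -= 0.05 * (grad_data - grad_conf)
```

In this toy the observed vector's energy falls over training, the qualitative behavior the learning rule is designed to produce.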
970.
An essential part of the human capacity for language is the ability to link conceptual or semantic representations with syntactic representations. On the basis of data from spontaneous production, it has been suggested that young children acquire such links on a verb-by-verb basis, with little in the way of a general understanding of linguistic argument structure. Here, we suggest that a receptive understanding of argument structure, including principles linking syntax and conceptual/semantic structure, appears earlier. In a forced-choice pointing task we have shown that toddlers in the third year of life can map a single scene (involving a novel causative action paired with a novel verb) onto two distinct syntactic frames (transitive and intransitive). This suggests that even before toddlers begin generalizing argument structure in their own speech, they have some representation of conceptual/semantic categories, syntactic categories, and a system that links the two.