991.
This paper suggests a method to supplant missing categorical data by reasonable replacements. These replacements will maximize the consistency of the completed data as measured by Guttman's squared correlation ratio. The text outlines a solution of the optimization problem, describes relationships with the relevant psychometric theory, and studies some properties of the method in detail. The main result is that the average correlation should be at least 0.50 before the method becomes practical. At that point, the technique gives reasonable results up to 10–15% missing data. We thank Anneke Bloemhoff of NIPG-TNO for compiling and making the Dutch Life Style Survey data available to us, and Chantal Houée and Thérèse Bardaine, IUT, Vannes, France, exchange students under the COMETT program of the EC, for computational assistance. We also thank Donald Rubin, the Editors, and several anonymous reviewers for constructive suggestions.
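To make the procedure concrete, here is a minimal sketch of one way such an imputation could be organized, assuming a single-dimensional homogeneity analysis: object scores and category quantifications are updated alternately, and each missing cell is filled with the category whose quantification lies closest to the object's score. The function name, stopping rule, and all other details are illustrative and not taken from the paper.

```python
import numpy as np

def impute_categorical(data, n_iter=50, seed=0):
    """Illustrative sketch: fill missing categorical cells (None) so that the
    completed data become internally consistent in a one-dimensional
    homogeneity-analysis sense. `data` is an (n_objects x n_variables) array
    of category labels with None for missing entries."""
    rng = np.random.default_rng(seed)
    data = np.array(data, dtype=object)
    n, m = data.shape
    # Observed categories per variable, used for random starts and for updates.
    cats = [sorted({v for v in data[:, j] if v is not None}) for j in range(m)]
    missing = [(i, j) for i in range(n) for j in range(m) if data[i, j] is None]
    for i, j in missing:                            # random initial completion
        data[i, j] = rng.choice(cats[j])
    x = rng.standard_normal(n)                      # object scores
    for _ in range(n_iter):
        # Category quantifications: mean object score within each category.
        quant = [{c: x[data[:, j] == c].mean() for c in cats[j]} for j in range(m)]
        # Object scores: mean quantification of the categories the object holds.
        x = np.array([np.mean([quant[j][data[i, j]] for j in range(m)]) for i in range(n)])
        x = (x - x.mean()) / (x.std() + 1e-12)      # normalize
        # Re-impute: pick the category whose quantification is closest to the score.
        for i, j in missing:
            data[i, j] = min(cats[j], key=lambda c: abs(quant[j][c] - x[i]))
    return data
```

On this reading, the paper's 0.50 threshold describes how strongly the variables must hang together before such replacements become trustworthy.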
994.
Deborah G. Mayo. Synthese, 1992, 90(2): 233–262
I document some of the main evidence showing that E. S. Pearson rejected the key features of the behavioral-decision philosophy that became associated with the Neyman-Pearson Theory of statistics (NPT). I argue that NPT principles arose not out of behavioral aims, where the concern is solely with behaving correctly sufficiently often in some long run, but out of the epistemological aim of learning about causes of experimental results (e.g., distinguishing genuine from spurious effects). The view Pearson did hold gives a deeper understanding of NPT tests than their typical formulation as accept-reject routines, against which criticisms of NPT are really directed. The Pearsonian view that emerges suggests how NPT tests may avoid these criticisms while still retaining what is central to these methods: the control of error probabilities. A portion of this research was carried out during tenure of a National Endowment for the Humanities Summer Stipend Fellowship; I gratefully acknowledge that support. A version of this paper was presented at the 1987 meeting of the Society for Exact Philosophy. This paper benefited from discussions and communications with George Barnard and Isaac Levi. I thank Harlan Miller for helpful comments on earlier drafts.
995.
John Pais. Studia Logica, 1992, 51(2): 279–316
The properties of belief revision operators are known to have an informal semantics which relates them to the axioms of conditional logic. The purpose of this paper is to make this connection precise via the model theory of conditional logic. A semantics for conditional logic is presented, which is expressed in terms of algebraic models constructed ultimately out of revision operators. In addition, it is shown that each algebraic model determines both a revision operator and a logic, which are related by virtue of the stable Ramsey test. The author is grateful for a correction and several other valuable suggestions from two anonymous referees. This work was supported by the McDonnell Douglas Independent Research and Development program.
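The Ramsey test itself can be illustrated without the paper's algebraic models by a standard possible-worlds construction (a generic sketch, not the author's semantics): a belief state is a plausibility ranking over valuations, revision by A keeps the most plausible A-worlds, and a conditional A > B is accepted exactly when B holds throughout the revised state.

```python
from itertools import product

ATOMS = ("p", "q")

def worlds():
    """All valuations over the atoms, as dicts."""
    return [dict(zip(ATOMS, bits)) for bits in product([True, False], repeat=len(ATOMS))]

def revise(ranking, prop):
    """AGM-style revision: keep the most plausible worlds satisfying `prop`.
    `ranking` maps each world index to a plausibility rank (lower = more plausible)."""
    ws = worlds()
    sat = [i for i, w in enumerate(ws) if prop(w)]
    if not sat:
        return set()                         # revising by a contradiction
    best = min(ranking[i] for i in sat)
    return {i for i in sat if ranking[i] == best}

def ramsey(ranking, antecedent, consequent):
    """Ramsey test: accept 'antecedent > consequent' iff the consequent holds
    in every world of the belief state revised by the antecedent."""
    ws = worlds()
    return all(consequent(ws[i]) for i in revise(ranking, antecedent))

# Example: the agent finds worlds where both p and q hold most plausible.
ranking = {i: (0 if (w["p"] and w["q"]) else 1 + (not w["p"]) + (not w["q"]))
           for i, w in enumerate(worlds())}
print(ramsey(ranking, lambda w: w["p"], lambda w: w["q"]))            # True: 'if p then q' accepted
print(ramsey(ranking, lambda w: not w["p"], lambda w: not w["q"]))    # False
```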
996.
We introduce two new belief revision axioms: partial monotonicity and consequence correctness. We show that partial monotonicity is consistent with but independent of the full set of axioms for a Gärdenfors belief revision system. In contrast to the Gärdenfors inconsistency results for certain monotonicity principles, we use partial monotonicity to inform a consistent formalization of the Ramsey test within a belief revision system extended by a conditional operator. We take this to be a technical dissolution of the well-known Gärdenfors dilemma. In addition, we present the consequential correctness axiom as a new measure of minimal revision in terms of the deductive core of a proposition whose support we wish to excise. We survey several syntactic and semantic belief revision systems and evaluate them according to both the Gärdenfors axioms and our new axioms. Furthermore, our algebraic characterization of semantic revision systems provides a useful technical device for analysis and comparison, which we illustrate with several new proofs. Finally, we have a new inconsistency result, which is dual to the Gärdenfors inconsistency results. Any elementary belief revision system that is consequentially correct must violate the Gärdenfors axiom of strong boundedness (K*8), which we characterize as yet another monotonicity condition. This work was supported by the McDonnell Douglas Independent Research and Development program.
997.
Binary programming models are presented to generate parallel tests from an itembank. The parallel tests are created to match, item for item, an existing seed test and to match user-supplied taxonomic specifications. The taxonomic specifications may be obtained either from the seed test or from some other user requirement. An algorithm is presented along with computational results to indicate the overall efficiency of the process. Empirical findings based on an itembank for the Arithmetic Reasoning section of the Armed Services Vocational Aptitude Battery are given. The Office of Naval Research, Program in Cognitive Science, N00014-87-C-0696, partially supported the work of Douglas H. Jones. The Rutgers Research Resource Committee of the Graduate School of Management partially supported the work of Douglas H. Jones and Ing-Long Wu. A Thomas and Betts research fellowship partially supported the work of Ing-Long Wu. The Human Resources Laboratory, United States Air Force, partially supported the work of Ronald Armstrong. The authors benefited from conversations with Dr. Wayne Shore, Operational Technologies, San Antonio, Texas. The order of authors' names is alphabetical and denotes equal authorship.
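A toy version of such a model, assuming the open-source PuLP solver (a stand-in for whatever software the authors used) and hypothetical item attributes: binary variables pair each seed-test item with one bank item from the same taxonomic category, and the objective minimizes the total difficulty mismatch, so the assembled test parallels the seed test item for item.

```python
import pulp

# Hypothetical seed test and item bank: (taxonomic category, difficulty).
seed = [("algebra", 0.40), ("algebra", 0.55), ("ratio", 0.60)]
bank = [("algebra", 0.38), ("algebra", 0.57), ("algebra", 0.70),
        ("ratio", 0.52), ("ratio", 0.64)]

# Only same-category pairings are allowed.
pairs = [(i, j) for i, (cat, _) in enumerate(seed)
         for j, (bcat, _) in enumerate(bank) if cat == bcat]

prob = pulp.LpProblem("parallel_test_assembly", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", pairs, cat="Binary")   # x[i,j]=1: bank item j replaces seed item i

# Objective: total absolute difficulty mismatch over the chosen pairs.
prob += pulp.lpSum(abs(seed[i][1] - bank[j][1]) * x[(i, j)] for (i, j) in pairs)

# Each seed item is matched by exactly one bank item ...
for i in range(len(seed)):
    prob += pulp.lpSum(x[p] for p in pairs if p[0] == i) == 1
# ... and each bank item is used at most once.
for j in range(len(bank)):
    prob += pulp.lpSum(x[p] for p in pairs if p[1] == j) <= 1

prob.solve(pulp.PULP_CBC_CMD(msg=False))
chosen = sorted(j for p in pairs if x[p].value() > 0.5 for j in [p[1]])
print("selected bank items:", chosen)
```

The paper's own algorithm and taxonomic targets are richer than this; the sketch only shows the general shape of the 0-1 formulation.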
998.
Thirty subjects participated in a discrimination experiment learning face-letter associations under four rotation conditions (45°, 90°, 135°, and 180°). Under each condition, two thirds of the faces were presented twice, upright and rotated away from the vertical; the remaining faces were presented once, upright or rotated. Learning is described by a joint Markov model: for faces presented twice it assumes separate association and encoding processes (two-stage model); for faces presented once it assumes an association process only (all-or-none model). The Markov model fits the data for all four rotation conditions. The angle of rotation does not affect learning for faces that are presented once. For faces that are presented twice, it influences both the association and the encoding process. For the angles employed, the effect of rotation can be approximated linearly. The results suggest that the encoding of a rotated face differs increasingly from that of an upright face as a function of the angle of rotation. This confirms analogous conclusions from mental rotation experiments.
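For the once-presented faces, the all-or-none component is the classical two-state Markov learning model: on each trial an unlearned face-letter pair becomes learned with some probability c, and unlearned pairs are answered correctly only by guessing, with probability g. The simulation below illustrates just that component with made-up parameter values; the paper's joint model for twice-presented faces (association plus encoding) is not reproduced.

```python
import random

def simulate_all_or_none(n_items=200, n_trials=10, c=0.25, g=0.125, seed=1):
    """Simulate the two-state (all-or-none) Markov learning model.
    c: probability that an unlearned item becomes learned on a trial.
    g: guessing probability while still unlearned.
    Returns the proportion correct on each trial."""
    rng = random.Random(seed)
    learned = [False] * n_items
    curve = []
    for _ in range(n_trials):
        correct = 0
        for i in range(n_items):
            if learned[i] or rng.random() < g:
                correct += 1
            if not learned[i] and rng.random() < c:
                learned[i] = True          # all-or-none transition after the trial
        curve.append(correct / n_items)
    return curve

# Expected proportion correct on trial t is g + (1 - g) * (1 - (1 - c) ** (t - 1)).
print(simulate_all_or_none())
```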
999.
The purpose of the two experiments reported here was to observe the effects of degree of learning, interpolated tests, and retention interval, primarily on the rate of forgetting of a list of words, and secondarily on hypermnesia for those words. In the first experiment, all the subjects had one study trial on a list of 20 common words, followed by two tests of recall. Half of the subjects had further study and test trials until they had learned the words to a criterion of three correct consecutive recalls. Two days later, half of the subjects under each learning condition returned for four retention tests, and 16 days later, all the subjects returned for four tests. Experiment 2 was similar, except that all the subjects had at least three study trials followed by four recall tests on Day 1, intermediate tests were given 2 or 7 days later, and they all had final tests 14 days later. The results showed that rate of forgetting was attenuated by an additional intermediate set of tests but not by criterion learning. Hypermnesia was generally found over the tests that were given after a retention interval of 2 or more days. The best predictor of the amount of hypermnesia over a set of tests was the difference between overall cumulative recall and net recall on the first test of the set.
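The quantities in the last sentence are simple counts over a block of repeated tests: net recall is the number of words produced on one test, cumulative recall is the number of distinct words produced on any test so far, and hypermnesia is the gain in net recall across the block. A small illustration with hypothetical recall protocols:

```python
def hypermnesia_stats(tests):
    """`tests` is a list of sets, one per successive recall test in a block.
    Returns (hypermnesia, predictor): hypermnesia is the net-recall gain from
    the first to the last test; the predictor is overall cumulative recall
    minus net recall on the first test."""
    net = [len(t) for t in tests]
    cumulative = len(set().union(*tests))
    return net[-1] - net[0], cumulative - net[0]

# Hypothetical protocols for four tests on the same 20-word list.
tests = [{"ox", "pen", "rug", "map", "fog", "jar"},
         {"ox", "pen", "rug", "map", "fog", "jar", "hat"},
         {"ox", "pen", "rug", "map", "jar", "hat", "cup"},
         {"ox", "pen", "rug", "map", "fog", "jar", "hat", "cup"}]
print(hypermnesia_stats(tests))   # (2, 2): net recall rose from 6 to 8; 8 distinct words vs 6 on test 1
```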
1000.
In the present study, we examined letter detection in very frequent function-word sequences. It has been claimed that such sequences are processed in a unitized manner, thus preempting access to their constituent letters. In contrast, we showed that letter detection in the words "for" and "the" (1) was no more difficult when the words appeared in adjacent locations in a sentence (familiar sequence) than when they appeared apart (less familiar sequence) and (2) was contingent upon the words' syntactic roles within the phrase. Thus, letter detection in "for" was easier when the sequence was separated by a clause boundary than when the words were part of the same clause. The advantage derived from clause separation was strongest when a comma divided the clauses. These results challenge the unitization account of the "missing-letter" effect in common phrases and support a position in which this phenomenon is seen to reflect the extraction of phrase structure during reading.