Full-text access type
Paid full text | 373 articles |
Free | 41 articles |
Free (domestic) | 38 articles |
Publication year
2023 (2) | 2022 (14) | 2021 (17) | 2020 (27) | 2019 (15) | 2018 (13) | 2017 (21) | 2016 (25) | 2015 (20) | 2014 (14) | 2013 (66) | 2012 (9) | 2011 (21) | 2010 (13) | 2009 (23) | 2008 (21) | 2007 (9) | 2006 (10) | 2005 (8) | 2004 (6) | 2003 (8) | 2002 (8) | 2001 (6) | 2000 (7) | 1999 (4) | 1998 (4) | 1997 (3) | 1996 (2) | 1995 (2) | 1994 (2) | 1993 (3) | 1992 (4) | 1991 (3) | 1990 (2) | 1989 (2) | 1988 (3) | 1987 (1) | 1986 (1) | 1985 (5) | 1984 (6) | 1983 (4) | 1982 (1) | 1981 (2) | 1980 (1) | 1979 (1) | 1978 (6) | 1977 (7)
Sort order: 452 results in total (search time: 15 ms)
151.
This paper considers a multivariate normal model in which one of the component variables is observable only in polytomous form. The maximum likelihood approach is used to estimate the parameters of the model. The Newton-Raphson algorithm is implemented to obtain the solution. Examples based on real and simulated data are reported. The research of the first author was supported in part by a research grant (DA01070) from the US Public Health Service. We are indebted to the referees and the editor for some very valuable comments and suggestions.
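The Newton-Raphson iteration the abstract mentions is easy to illustrate in general form. The sketch below is a minimal stand-in, not the paper's implementation: it maximizes the log-likelihood of a deliberately simple problem (the mean of a unit-variance normal) rather than the polytomous multivariate normal model, and the function names and toy data are invented for illustration.

```python
def newton_raphson_mle(score, hessian, theta0, tol=1e-8, max_iter=100):
    """Generic Newton-Raphson maximization of a log-likelihood.

    score(theta): first derivative of the log-likelihood.
    hessian(theta): second derivative (negative near the maximum).
    """
    theta = theta0
    for _ in range(max_iter):
        step = score(theta) / hessian(theta)
        theta = theta - step              # theta_new = theta - score / hessian
        if abs(step) < tol:
            break
    return theta

# Toy stand-in problem: MLE of the mean of a normal with known unit variance.
data = [1.2, 0.8, 1.5, 0.9, 1.1]
score = lambda mu: sum(x - mu for x in data)   # d/dmu of the log-likelihood
hessian = lambda mu: -float(len(data))         # second derivative (constant here)
mu_hat = newton_raphson_mle(score, hessian, theta0=0.0)
print(round(mu_hat, 6))  # converges to the sample mean
```

Because the toy log-likelihood is quadratic in the parameter, the iteration converges in a single step; in the paper's polytomous setting the score and Hessian would come from the observed-data likelihood and require several iterations.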
152.
Douglas B. Clarkson, Psychometrika, 1979, 44(3): 297-314
The jackknife by groups, and modifications of it, are used to estimate standard errors of rotated factor loadings for selected populations in maximum likelihood common factor analysis. Simulations are performed in which t-statistics based upon these jackknife estimates of the standard errors are computed. The validity of the t-statistics and their associated confidence intervals is assessed. Methods are given through which the computational efficiency of the jackknife may be greatly enhanced in the factor analysis model. Computing assistance was obtained from the Health Sciences Computing Facility, UCLA, sponsored by NIH Special Research Resources Grant RR-3. The author wishes to thank his doctoral committee co-chairmen, Drs. James W. Frane and Robert I. Jennrich, UCLA, for their contributions to this research.
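The grouped jackknife itself can be sketched on a scalar statistic. The sketch below estimates a jackknife-by-groups standard error for a sample mean as a stand-in for the rotated factor loadings studied in the paper; the function name, data, and group count are invented for illustration.

```python
import statistics

def grouped_jackknife_se(values, statistic, n_groups):
    """Jackknife-by-groups standard error of a scalar statistic.

    The sample is split into n_groups contiguous groups; the statistic is
    recomputed with each group deleted, and the leave-one-group-out
    estimates are combined with the usual jackknife variance formula.
    """
    n = len(values)
    size = n // n_groups
    groups = [list(values[i * size:(i + 1) * size]) for i in range(n_groups)]
    groups[-1].extend(values[n_groups * size:])   # remainder joins the last group
    leave_out = []
    for g in range(n_groups):
        kept = [x for i, grp in enumerate(groups) if i != g for x in grp]
        leave_out.append(statistic(kept))
    mean_lo = sum(leave_out) / n_groups
    var = (n_groups - 1) / n_groups * sum((t - mean_lo) ** 2 for t in leave_out)
    return var ** 0.5

data = [2.1, 1.9, 2.4, 2.0, 2.2, 1.8, 2.3, 2.1]
se = grouped_jackknife_se(data, statistics.mean, n_groups=4)
```

The computational point of the paper carries over even in this toy: deleting groups rather than single observations means the statistic is recomputed only n_groups times instead of n times.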
153.
154.
It is common practice to compare the computational power of different models of computation. For example, the recursive functions are strictly more powerful than the primitive recursive functions, because the latter are a proper subset of the former (which includes Ackermann's function). Side by side with this "containment" method of measuring power, it is also standard to base comparisons on "simulation". For example, one says that the (untyped) lambda calculus is as powerful, computationally speaking, as the partial recursive functions, because the lambda calculus can simulate all partial recursive functions by encoding the natural numbers as Church numerals. The problem is that unbridled use of these two distinct ways of comparing power allows one to show that some computational models (sets of partial functions) are strictly stronger than themselves! We argue that a better definition is that model A is strictly stronger than B if A can simulate B via some encoding, whereas B cannot simulate A under any encoding. We show that with this definition, too, the recursive functions are strictly stronger than the primitive recursive. We also prove that the recursive functions, partial recursive functions, and Turing machines are "complete", in the sense that no injective encoding can make them equivalent to any "hypercomputational" model.
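The Church-numeral encoding the abstract invokes can be demonstrated concretely. In the sketch below (helper names `encode` and `decode` are invented for illustration), a natural number n is represented as the function that applies its argument n times, and successor and addition on these encodings simulate ordinary arithmetic.

```python
# Church numerals: n is encoded as the function f -> (f composed n times).
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
add = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def encode(n):
    """Map a Python int to its Church numeral by iterating succ."""
    c = zero
    for _ in range(n):
        c = succ(c)
    return c

def decode(church):
    """Map a Church numeral back to a Python int by counting applications."""
    return church(lambda k: k + 1)(0)

three = encode(3)
four = succ(three)
seven = add(three)(four)
print(decode(seven))  # 7
```

This is exactly the "simulation via encoding" notion the paper scrutinizes: arithmetic is recovered only relative to the chosen encoding of the naturals.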
155.
Plausibility has been implicated as playing a critical role in many cognitive phenomena from comprehension to problem solving. Yet, across cognitive science, plausibility is usually treated as an operationalized variable or metric rather than being explained or studied in itself. This article describes a new cognitive model of plausibility, the Plausibility Analysis Model (PAM), which is aimed at modeling human plausibility judgment. This model uses commonsense knowledge of concept-coherence to determine the degree of plausibility of a target scenario. In essence, a highly plausible scenario is one that fits prior knowledge well: with many different sources of corroboration, without complexity of explanation, and with minimal conjecture. A detailed simulation of empirical plausibility findings is reported, which shows a close correspondence between the model and human judgments. In addition, a sensitivity analysis demonstrates that PAM is robust in its operations.
156.
Hale, J., Cognitive Science, 2006, 30(4): 643-672
A word-by-word human sentence processing complexity metric is presented. This metric formalizes the intuition that comprehenders have more trouble on words contributing larger amounts of information about the syntactic structure of the sentence as a whole. The formalization is in terms of the conditional entropy of grammatical continuations, given the words that have been heard so far. To calculate the predictions of this metric, Wilson and Carroll's (1954) original entropy reduction idea is extended to infinite languages. This is demonstrated with a mildly context-sensitive language that includes relative clauses formed on a variety of grammatical relations across the Accessibility Hierarchy of Keenan and Comrie (1977). Predictions are derived that correlate significantly with repetition accuracy results obtained in a sentence-memory experiment (Keenan & Hawkins, 1987).
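Hale's construction covers infinite, mildly context-sensitive languages; as a toy stand-in only, the sketch below computes per-word entropy reductions over a four-sentence finite language with uniform sentence probabilities. All sentences and function names here are invented for illustration.

```python
import math

# A toy finite language; every sentence is equally probable.
sentences = [
    ("the", "dog", "barked"),
    ("the", "dog", "slept"),
    ("the", "cat", "slept"),
    ("a", "cat", "slept"),
]

def entropy_of_continuations(prefix):
    """Entropy (bits) of the sentence distribution consistent with a prefix."""
    consistent = [s for s in sentences if s[:len(prefix)] == prefix]
    p = 1.0 / len(consistent)
    return -sum(p * math.log2(p) for _ in consistent)

def entropy_reductions(sentence):
    """Per-word complexity: H(before word) - H(after word), floored at zero."""
    out = []
    for i in range(len(sentence)):
        before = entropy_of_continuations(sentence[:i])
        after = entropy_of_continuations(sentence[:i + 1])
        out.append(max(0.0, before - after))
    return out

reds = entropy_reductions(("the", "dog", "barked"))
print([round(r, 3) for r in reds])  # [0.415, 0.585, 1.0]
```

The per-word reductions sum to the initial entropy of the language (2 bits here), so words that disambiguate the structure most sharply carry the largest predicted processing cost.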
157.
Recently, the regression extension of the latent class analysis (RLCA) model has received much attention in the field of medical research. The basic RLCA model summarizes shared features of measured multiple indicators as an underlying categorical variable and incorporates the covariate information in modeling both latent class membership and the multiple indicators themselves. To reduce complexity and enhance interpretability, one usually fixes the number of classes in a given RLCA. Often, goodness-of-fit methods comparing various estimated models are used as a criterion to select the number of classes. In this paper, we propose a new method that is based on an analogous method used in factor analysis and does not require repeated fitting. Two ideas with application to many settings other than ours are synthesized in deriving the method: a connection between latent class models and factor analysis, and techniques of covariate marginalization and elimination. A Monte Carlo simulation study is presented to evaluate the behavior of the selection procedure and compare it to alternative approaches. Data from a study of how measured visual impairments affect older persons' functioning are used for illustration. This work was supported by National Institute on Aging (NIA) Program Project P01-AG-10184-03. The author wishes to thank Dr. Karen Bandeen-Roche for her stimulating comments and helpful discussions, and Drs. Gary Rubin and Sheila West for kindly making the Salisbury Eye Evaluation data available.
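The paper's own selection procedure is not reproduced here; as a loose analogue only, the sketch below applies the classic factor-analysis eigenvalue-greater-than-one (Kaiser) rule to a correlation matrix, using a small pure-Python Jacobi eigenvalue routine. The function names and the example matrix are invented for illustration.

```python
import math

def jacobi_eigenvalues(A, tol=1e-12, max_sweeps=100):
    """Eigenvalues of a small symmetric matrix via classical Jacobi rotations."""
    n = len(A)
    A = [row[:] for row in A]
    for _ in range(max_sweeps):
        # Pivot on the largest off-diagonal element.
        p, q, big = 0, 1, 0.0
        for i in range(n):
            for j in range(i + 1, n):
                if abs(A[i][j]) > big:
                    big, p, q = abs(A[i][j]), i, j
        if big < tol:
            break
        # Rotation angle that zeroes A[p][q]; apply A <- G^T A G.
        phi = 0.5 * math.atan2(2 * A[p][q], A[q][q] - A[p][p])
        c, s = math.cos(phi), math.sin(phi)
        G = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
        G[p][p], G[p][q], G[q][p], G[q][q] = c, s, -s, c
        GT = [[G[j][i] for j in range(n)] for i in range(n)]
        AG = [[sum(A[i][k] * G[k][j] for k in range(n)) for j in range(n)]
              for i in range(n)]
        A = [[sum(GT[i][k] * AG[k][j] for k in range(n)) for j in range(n)]
             for i in range(n)]
    return sorted((A[i][i] for i in range(n)), reverse=True)

def n_classes_by_eigenvalues(corr, threshold=1.0):
    """Kaiser-style rule: count eigenvalues of the correlation matrix above 1."""
    return sum(1 for ev in jacobi_eigenvalues(corr) if ev > threshold)

# Equicorrelated 3x3 correlation matrix (r = 0.5): eigenvalues 2.0, 0.5, 0.5.
corr = [[1.0, 0.5, 0.5],
        [0.5, 1.0, 0.5],
        [0.5, 0.5, 1.0]]
print(n_classes_by_eigenvalues(corr))  # 1
```

Like the paper's method, a rule of this kind needs only one pass over a single summary matrix, avoiding the repeated model fitting that goodness-of-fit comparisons require.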
158.
159.
Computer Simulation of Chinese Character Recognition (total citations: 3; self-citations: 0; citations by others: 3)
This article summarizes the author's work in recent years on computer simulation of Chinese character recognition within the connectionist framework. The article has three parts. The first part explains the relationship between computer simulation and artificial intelligence. The second part introduces two models proposed by the author: a connectionist model of Chinese character recognition and naming, and a semantics-based computational model of lexical decision. The two models successfully simulated, respectively, the frequency effect in Chinese character recognition, the regularity effect and the phonetic-radical effect in the pronunciation of phono-semantic compound characters, the semantic priming effect, and the interaction between context and frequency. The third part discusses the significance of the simulation work, distributed representation, learning algorithms, and related issues. The research shows that computer simulation of cognition can verify the results of human cognition experiments, offer reasonable explanations of those results, and guide further experimental research.
160.
Thomas M. Reimers, Michael D. Vance, Rosemary J. Young, Journal of Applied Behavior Analysis, 1995, 28(2): 231-232
We examined the effectiveness of simulation training to teach an adolescent male with Crohn disease to self-administer nasogastric tube insertion. Nasogastric tube insertion was taught using simulation training, after which self-insertion skills were assessed. Results across skill components indicated that this subject was able to self-administer insertion of the nasogastric tube.