81.
When we try to identify a causal relationship, how strong do we expect that relationship to be? Bayesian models of causal induction rely on assumptions regarding people’s a priori beliefs about causal systems, with recent research focusing on people’s expectations about the strength of causes. These expectations are expressed in terms of prior probability distributions. While proposals about the form of such prior distributions have been made previously, many different distributions are possible, making it difficult to test such proposals exhaustively. In Experiment 1 we used iterated learning—a method in which participants make inferences about data generated based on their own responses in previous trials—to estimate participants’ prior beliefs about the strengths of causes. This method produced estimated prior distributions that were quite different from those previously proposed in the literature. Experiment 2 collected a large set of human judgments on the strength of causal relationships to be used as a benchmark for evaluating different models, using stimuli that cover a wider and more systematic set of contingencies than previous research. Using these judgments, we evaluated the predictions of various Bayesian models. The Bayesian model with priors estimated via iterated learning compared favorably against the others. Experiment 3 estimated participants’ prior beliefs concerning different causal systems, revealing key similarities in their expectations across diverse scenarios.
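To make the modeling setup concrete, the following is a minimal Python sketch (not the authors’ code) of Bayesian causal-strength estimation under a noisy-OR likelihood, where the prior over strengths is a pluggable function. The grid resolution, the sparse Beta prior, and the contingency counts are illustrative assumptions.

```python
# A minimal sketch (not the authors' code) of Bayesian causal-strength estimation
# under a noisy-OR likelihood with a configurable prior over strengths.
# Grid resolution, prior shapes, and contingency counts below are illustrative.
import numpy as np
from scipy import stats

def posterior_mean_strength(n_e_c, n_c, n_e_nc, n_nc, prior_w1, prior_w0, grid=None):
    """Posterior mean of the causal strength w1 given a 2x2 contingency table.

    n_e_c  / n_c  : effect-present count / total trials with the cause present
    n_e_nc / n_nc : effect-present count / total trials with the cause absent
    prior_w1, prior_w0 : functions returning prior density over (0, 1)
    """
    if grid is None:
        grid = np.linspace(0.005, 0.995, 199)
    w0, w1 = np.meshgrid(grid, grid)            # background and causal strengths
    p_c = w1 + w0 - w1 * w0                     # noisy-OR: P(effect | cause present)
    like = (stats.binom.pmf(n_e_c, n_c, p_c) *
            stats.binom.pmf(n_e_nc, n_nc, w0))  # P(effect | cause absent) = w0
    post = like * prior_w1(w1) * prior_w0(w0)   # unnormalised joint posterior
    post /= post.sum()
    return (post * w1).sum()                    # marginal posterior mean of w1

# Example: a uniform prior vs. a sparse prior that expects strong-or-absent causes.
uniform = lambda w: np.ones_like(w)
sparse = lambda w: stats.beta.pdf(w, 0.3, 0.3)  # illustrative, not the estimated prior
print(posterior_mean_strength(8, 10, 2, 10, uniform, uniform))
print(posterior_mean_strength(8, 10, 2, 10, sparse, sparse))
```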
82.
The Asymptotic Classification Theory of Cognitive Diagnosis (Chiu et al., 2009, Psychometrika, 74, 633–665) established the conditions that cognitive diagnosis models must satisfy so that the correct assignment of examinees to proficiency classes is guaranteed when non-parametric classification methods are used. These conditions have only been proven for the Deterministic Input Noisy Output AND gate (DINA) model. For other cognitive diagnosis models, no theoretical justification exists for using non-parametric classification techniques to assign examinees to proficiency classes. The specific statistical properties of different cognitive diagnosis models require tailored proofs of the conditions of the Asymptotic Classification Theory of Cognitive Diagnosis for each individual model, a tedious undertaking in light of the numerous models presented in the literature. In this paper, a different approach to this task is presented. The unified mathematical framework of general cognitive diagnosis models is used as a theoretical basis for a general proof that, under mild regularity conditions, any cognitive diagnosis model is covered by the Asymptotic Classification Theory of Cognitive Diagnosis.
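As an illustration of the classification step the theory legitimizes, here is a minimal Python sketch (not part of the paper) of non-parametric, Hamming-distance classification under the DINA model; the Q-matrix and response vectors are toy assumptions.

```python
# A minimal sketch (not tied to the paper's proofs) of non-parametric classification
# in cognitive diagnosis: assign each examinee to the proficiency class whose
# DINA ideal response pattern is closest in Hamming distance to the observed responses.
import itertools
import numpy as np

def dina_ideal_responses(Q):
    """All 2^K attribute profiles and their DINA ideal item responses (no slips/guesses)."""
    K = Q.shape[1]
    profiles = np.array(list(itertools.product([0, 1], repeat=K)))
    # An item is answered correctly iff the profile masters every attribute it requires.
    ideal = (profiles @ Q.T == Q.sum(axis=1)).astype(int)
    return profiles, ideal

def classify(responses, Q):
    """Hamming-distance classification of each response vector to a profile."""
    profiles, ideal = dina_ideal_responses(Q)
    dists = np.abs(responses[:, None, :] - ideal[None, :, :]).sum(axis=2)
    return profiles[dists.argmin(axis=1)]

# Toy example: 4 items measuring 2 attributes.
Q = np.array([[1, 0], [0, 1], [1, 1], [1, 0]])
X = np.array([[1, 0, 0, 1],    # consistent with mastery of attribute 1 only
              [1, 1, 1, 1]])   # consistent with mastery of both attributes
print(classify(X, Q))
```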
85.
Developments in smart-device interfaces have led to voice-interactive systems. A further step in this direction is to enable such devices to recognize the speaker, but this is a challenging task because the interaction involves short speech utterances. Traditional Gaussian mixture model (GMM) based systems achieve satisfactory speaker-recognition results only when the speech segments are sufficiently long. The current state-of-the-art method uses an i-vector approach built on a GMM-based universal background model (GMM-UBM): an i-vector speaker model is prepared from a speaker’s enrollment data and used to recognize any new test speech. In this work, we propose a multi-model i-vector system for short speech durations. We use the open database THUYG-20 for the analysis and development of a short-speech speaker verification and identification system. Using an optimized set of mel-frequency cepstral coefficient (MFCC) features, we achieve an equal error rate (EER) of 3.21%, compared with the previous benchmark of 4.01% EER on the THUYG-20 database. Experiments are conducted for speech lengths as short as 0.25 s, and the results are presented. The proposed method improves on the current i-vector approach for shorter speech lengths, achieving a gain of around 28% even for 0.25 s speech samples. We also prepared and tested the proposed approach on our own database of 2500 English-language speech recordings consisting of actual short speech commands used in voice-interactive systems.
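For orientation, the following Python sketch covers only the scoring stage of such a verification system: cosine scoring of length-normalized i-vectors and an EER computed from trial scores. The i-vector extractor itself (MFCC front end, GMM-UBM, total-variability model) is assumed to exist elsewhere, and the vectors and dimensions shown are stand-ins, not the authors’ system.

```python
# A minimal sketch of the verification/scoring stage only (not the paper's system):
# cosine scoring of length-normalised i-vectors and an EER computed from the scores.
import numpy as np
from sklearn.metrics import roc_curve

def length_norm(v):
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

def cosine_scores(enroll_ivecs, test_ivecs):
    """Cosine similarity between each enrolment model and each test utterance."""
    return length_norm(enroll_ivecs) @ length_norm(test_ivecs).T

def equal_error_rate(scores, labels):
    """EER from trial scores and 0/1 target labels."""
    fpr, tpr, _ = roc_curve(labels, scores)
    fnr = 1.0 - tpr
    idx = np.nanargmin(np.abs(fnr - fpr))
    return (fpr[idx] + fnr[idx]) / 2.0

# Toy trial list with random 400-dimensional i-vectors (dimension is illustrative).
rng = np.random.default_rng(0)
enroll = rng.normal(size=(5, 400))
test = enroll + 0.5 * rng.normal(size=(5, 400))     # same speakers, perturbed
scores = cosine_scores(enroll, test)
labels = np.eye(5).ravel()                          # diagonal entries = target trials
print(equal_error_rate(scores.ravel(), labels))
```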
86.
Using a real-time window reading technique, this study examined the role of protagonists’ goals in the non-cued updating of the spatial dimension of the situation model during text reading. Experiment 1 showed that when the protagonist in the narrative acts in pursuit of a goal, updating of objects within the corresponding spatial setting is facilitated. Experiment 2 examined how goals of different status affect the processing of spatial information; the results showed that, compared with goals that have already been attained, unattained goals have a stronger influence on the non-cued updating of the spatial dimension of the situation model.
87.
One of the main objectives of many empirical studies in the social and behavioral sciences is to assess the causal effect of a treatment or intervention on the occurrence of a certain event. The randomized controlled trial is generally considered the gold standard to evaluate such causal effects. However, for ethical or practical reasons, social scientists are often bound to the use of nonexperimental, observational designs. When the treatment and control group are different with regard to variables that are related to the outcome, this may induce the problem of confounding. A variety of statistical techniques, such as regression, matching, and subclassification, is now available and routinely used to adjust for confounding due to measured variables. However, these techniques are not appropriate for dealing with time-varying confounding, which arises in situations where the treatment or intervention can be received at multiple timepoints. In this article, we explain the use of marginal structural models and inverse probability weighting to control for time-varying confounding in observational studies. We illustrate the approach with an empirical example of grade retention effects on mathematics development throughout primary school.
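The weighting step can be sketched as follows. This is an illustrative Python example (not the article’s analysis), with assumed column names A1/A2 for two treatment occasions and L1/L2 for time-varying confounders, and with simulated data in place of the grade-retention example.

```python
# A minimal sketch of fitting a marginal structural model with stabilised inverse
# probability of treatment weights for a treatment received at several timepoints.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression, LinearRegression

def stabilised_weights(df, timepoints):
    """Cumulative stabilised IPT weights across timepoints.

    For each timepoint t, expects columns 'A{t}' (0/1 treatment) and 'L{t}'
    (time-varying confounder); baseline covariates could be added the same way.
    """
    w = np.ones(len(df))
    for t in timepoints:
        A = df[f"A{t}"].to_numpy()
        # Denominator model: treatment given the time-varying confounder.
        denom = LogisticRegression().fit(df[[f"L{t}"]], A).predict_proba(df[[f"L{t}"]])
        # Numerator model: marginal treatment probability (stabilisation).
        p_marg = A.mean()
        num = np.where(A == 1, p_marg, 1 - p_marg)
        den = np.where(A == 1, denom[:, 1], denom[:, 0])
        w *= num / den
    return w

# Toy data: two treatment occasions, confounder L influences both A and outcome Y.
rng = np.random.default_rng(1)
n = 2000
L1 = rng.normal(size=n)
A1 = rng.binomial(1, 1 / (1 + np.exp(-L1)))
L2 = L1 + A1 + rng.normal(size=n)
A2 = rng.binomial(1, 1 / (1 + np.exp(-L2)))
Y = 1.0 * (A1 + A2) + L1 + rng.normal(size=n)
df = pd.DataFrame({"A1": A1, "L1": L1, "A2": A2, "L2": L2, "Y": Y})

w = stabilised_weights(df, timepoints=[1, 2])
msm = LinearRegression().fit(df[["A1", "A2"]], df["Y"], sample_weight=w)
print(msm.coef_)   # weighted estimates of the per-occasion treatment effects
```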
90.
Traditionally, multinomial processing tree (MPT) models are applied to groups of homogeneous participants, where all participants within a group are assumed to have identical MPT model parameter values. This assumption is unreasonable when MPT models are used for clinical assessment, and it often may be suspect for applications to ordinary psychological experiments. One method for dealing with parameter variability is to incorporate random effects assumptions into a model. This is achieved by assuming that participants’ parameters are drawn independently from some specified multivariate hyperdistribution. In this paper we explore the assumption that the hyperdistribution consists of independent beta distributions, one for each MPT model parameter. These beta-MPT models are ‘hierarchical models’, and their statistical inference is different from the usual approaches based on data aggregated over participants. The paper provides both classical (frequentist) and hierarchical Bayesian approaches to statistical inference for beta-MPT models. In simple cases the likelihood function can be obtained analytically; however, for more complex cases, Markov Chain Monte Carlo algorithms are constructed to assist both approaches to inference. Examples based on clinical assessment studies are provided to demonstrate the advantages of hierarchical MPT models over aggregate analysis in the presence of individual differences.
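A minimal sketch of such a hierarchical beta-MPT, written in Python with PyMC (not the paper’s code), is shown below for a one-high-threshold recognition tree; the tree, the Gamma hyperpriors, and the simulated counts are illustrative assumptions.

```python
# A minimal sketch (not the paper's models) of a hierarchical "beta-MPT": each
# participant's MPT parameters are drawn from independent Beta hyperdistributions.
# Shown for a one-high-threshold recognition tree with detection d and guessing g.
import numpy as np
import pymc as pm

rng = np.random.default_rng(2)
n_subj, n_old, n_new = 20, 40, 40
hits = rng.binomial(n_old, 0.75, size=n_subj)          # simulated per-participant counts
false_alarms = rng.binomial(n_new, 0.30, size=n_subj)

with pm.Model() as beta_mpt:
    # Hyperparameters of the Beta hyperdistributions (one pair per MPT parameter).
    a_d, b_d = pm.Gamma("a_d", alpha=1, beta=0.1), pm.Gamma("b_d", alpha=1, beta=0.1)
    a_g, b_g = pm.Gamma("a_g", alpha=1, beta=0.1), pm.Gamma("b_g", alpha=1, beta=0.1)

    # Participant-level MPT parameters: detection d and guessing g.
    d = pm.Beta("d", alpha=a_d, beta=b_d, shape=n_subj)
    g = pm.Beta("g", alpha=a_g, beta=b_g, shape=n_subj)

    # Category probabilities implied by the processing tree.
    p_hit = d + (1 - d) * g          # old item: detected, or undetected and guessed "old"
    p_fa = g                         # new item: guessed "old"

    pm.Binomial("hits", n=n_old, p=p_hit, observed=hits)
    pm.Binomial("fas", n=n_new, p=p_fa, observed=false_alarms)

    trace = pm.sample(1000, tune=1000, target_accept=0.9)

print(trace.posterior["d"].mean(dim=("chain", "draw")))   # posterior mean detection per person
```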