61.
A series of five experiments investigated the extent of subliminal processing of negation. Participants were presented with a subliminal instruction to either pick or not pick an accompanying noun, followed by a choice of two nouns. Subjective measures were used to determine individual thresholds of subliminal priming, and the results of these studies indicated that participants were able to identify the correct noun of the pair – even when the correct noun was specified by negation. Furthermore, using a grey-scale contrast method of masking, Experiment 5 confirmed that these priming effects were evidenced in the absence of partial awareness, and without the effect being attributable to the retrieval of stimulus–response links established during conscious rehearsal.
62.
The pseudodiagnosticity task has been used as an example of participants' tendency to incorrectly assess Bayesian constraints when evaluating data, and of their failure to consider alternative hypotheses in a probabilistic inference task. In the task, participants are given one value, the anchor value, corresponding to P(D1|H), and may choose one other value: either P(D1|¬H), P(D2|H), or P(D2|¬H). Most participants select P(D2|H) or P(D2|¬H), choices that have been considered inappropriate (and called pseudodiagnostic) because only P(D1|¬H) allows use of Bayes' theorem. We present a new analysis based on probability intervals and show that selection of either P(D2|H) or P(D2|¬H) is in fact pseudodiagnostic, whereas choice of P(D1|¬H) is diagnostic. Our analysis shows that choice of the pseudodiagnostic values actually increases uncertainty regarding the posterior probability of H, supporting the original interpretation of the experimental findings on the pseudodiagnosticity task. The argument illuminates the general proposition that evolutionarily adaptive heuristics for Bayesian inference can be misled in some task situations.
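For reference, the Bayesian constraint at issue can be made explicit with a textbook rendering of Bayes' theorem (this is not the authors' probability-interval analysis, only the standard form it builds on). The posterior for H after observing D1 depends on both P(D1|H) and P(D1|¬H); the values P(D2|H) and P(D2|¬H) leave the denominator below unconstrained, which is why only P(D1|¬H) is diagnostic once D1 has been observed:

P(H \mid D_1) = \frac{P(D_1 \mid H)\,P(H)}{P(D_1 \mid H)\,P(H) + P(D_1 \mid \neg H)\,\bigl(1 - P(H)\bigr)}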
63.
Statistical learning refers to the extraction of probabilistic relationships between stimuli and is increasingly used as a method to understand learning processes. However, numerous cognitive processes are sensitive to the statistical relationships between stimuli, and any one measure of learning may conflate these processes; to date, little research has focused on differentiating them. To understand how multiple processes underlie statistical learning, here we compared, within the same study, operational measures of learning from different tasks that may be differentially sensitive to these processes. In Experiment 1, participants were visually exposed to temporal regularities embedded in a stream of shapes. Their task was to periodically detect whether a shape, whose contrast was staircased to a threshold level, was present or absent. Afterwards, they completed a search task, where statistically predictable shapes were found more quickly. We used the search task to label shape pairs as "learned" or "non-learned", and then used these labels to analyse the detection task. We found a dissociation between learning on the search task and the detection task, where only non-learned pairs showed learning effects in the detection task. This finding was replicated in further experiments with recognition memory (Experiment 2) and associative learning tasks (Experiment 3). Taken together, these findings are consistent with the view that statistical learning may comprise a family of processes that can produce dissociable effects on different aspects of behaviour.
64.
A Rasch model for continuous ratings
Hans Müller, Psychometrika, 1987, 52(2): 165–181
A unidimensional latent trait model for continuous ratings is developed. This model is an extension of Andrich's rating formulation, which assumes that the response process at latent thresholds is governed by the dichotomous Rasch model. Item characteristic functions and information functions are used to illustrate that the model takes ceiling and floor effects into account. Both the dichotomous Rasch model and a linear model with normally distributed error can be derived as limiting cases. The separability of the structural and incidental parameters is demonstrated and a procedure for estimating the parameters is outlined.
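For reference, the dichotomous Rasch model that the abstract takes as governing the response process at the latent thresholds has the standard form below, with person parameter θ_v and item parameter β_i (standard notation, not taken from Müller's paper):

P(X_{vi} = 1 \mid \theta_v, \beta_i) = \frac{\exp(\theta_v - \beta_i)}{1 + \exp(\theta_v - \beta_i)}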
65.
Four experiments are reported investigating recognition of emotional expressions in very briefly presented facial stimuli. The faces were backward masked by neutral facial displays, and recognition of facial expressions was analyzed as a function of the manipulation of different parameters in the masking procedure. The main conclusion was that the stimulus onset asynchrony between target and mask proved to be the principal factor influencing recognition of the masked expressions. In general, confident recognition of facial expressions required about 100–150 msec, with shorter times for happy than for angry expressions. The manipulation of the duration of the target and the mask, by itself, had only minimal effects.
66.
An elaboration of a psychometric model for rated data, which belongs to the class of Rasch models, is shown to provide a model with two parameters, one characterising location and one characterising dispersion. The latter parameter, derived from the idea of a unit of scale, is also shown to reflect the shape of rating distributions, ranging from unimodal, through uniform, to U-shaped distributions. A brief case is made that when a rating distribution is treated as a random error distribution, it should be unimodal.
67.
Learning the structure of event sequences is a ubiquitous problem in cognition and particularly in language. One possible solution is to learn a probabilistic generative model of sequences that allows making predictions about upcoming events. Though appealing from a neurobiological standpoint, this approach is typically not pursued in connectionist modeling. Here, we investigated a sequential version of the restricted Boltzmann machine (RBM), a stochastic recurrent neural network that extracts high-order structure from sensory data through unsupervised generative learning and can encode contextual information in the form of internal, distributed representations. We assessed whether this type of network can extract the orthographic structure of English monosyllables by learning a generative model of the letter sequences forming a word training corpus. We show that the network learned an accurate probabilistic model of English graphotactics, which can be used to make predictions about the letter following a given context as well as to autonomously generate high-quality pseudowords. The model was compared to an extended version of simple recurrent networks, augmented with a stochastic process that allows autonomous generation of sequences, and to non-connectionist probabilistic models (n-grams and hidden Markov models). We conclude that sequential RBMs and stochastic simple recurrent networks are promising candidates for modeling cognition in the temporal domain.
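As a rough point of reference for the non-connectionist baselines mentioned above, the sketch below implements a character-level bigram model (the simplest n-gram case): it estimates P(next letter | previous letter) from a word list and samples pseudowords from those estimates. The tiny corpus, function names, and parameters are illustrative assumptions, not the authors' monosyllable corpus or code.

```python
import random
from collections import defaultdict

# Toy corpus as a placeholder; the paper trained on a corpus of English monosyllables.
CORPUS = ["cat", "can", "cap", "bat", "ban", "bad", "mat", "man", "map"]
BOUNDARY = "#"  # marks word start and end

# Count letter bigrams, including transitions from/to the word boundary.
counts = defaultdict(lambda: defaultdict(int))
for word in CORPUS:
    padded = BOUNDARY + word + BOUNDARY
    for prev, nxt in zip(padded, padded[1:]):
        counts[prev][nxt] += 1

def next_letter_probs(context):
    """Estimate P(next letter | previous letter) by relative frequency."""
    total = sum(counts[context].values())
    return {letter: c / total for letter, c in counts[context].items()}

def generate_pseudoword(max_len=8):
    """Sample letters from the bigram model until the end marker is drawn.

    Every context reached during generation was seen in training, because
    sampling only follows observed transitions starting from the boundary.
    """
    context, letters_out = BOUNDARY, []
    for _ in range(max_len):
        probs = next_letter_probs(context)
        letters, weights = zip(*probs.items())
        nxt = random.choices(letters, weights=weights)[0]
        if nxt == BOUNDARY:
            break
        letters_out.append(nxt)
        context = nxt
    return "".join(letters_out)

if __name__ == "__main__":
    print(next_letter_probs("b"))                      # e.g. {'a': 1.0}
    print([generate_pseudoword() for _ in range(5)])   # e.g. ['bap', 'man', 'cad', ...]
```

The sequential RBM and the stochastic simple recurrent network described in the abstract play the same generative role, but replace the lookup table of counts with learned, distributed representations of the preceding context.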
68.
Inductive reasoning requires exploiting links between evidence and hypotheses. This can be done by focusing either on the posterior probability of the hypothesis when updated on the new evidence, or on the impact of the new evidence on the credibility of the hypothesis. But are these two cognitive representations equally reliable? This study investigates this question by comparing probability and impact judgments on the same experimental materials. The results indicate that impact judgments are more consistent over time and more accurate than probability judgments. Impact judgments also predict the direction of errors in probability judgments. These findings suggest that human inductive reasoning relies more on estimating evidential impact than on posterior probability.
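To make the contrast concrete, the two quantities can be written as follows; the difference measure is only one of several impact (confirmation) measures in the literature, and the abstract does not say which formalisation the authors adopted:

\text{posterior probability:}\quad P(H \mid E) \qquad\qquad \text{evidential impact (difference measure):}\quad d(H, E) = P(H \mid E) - P(H)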
69.
Staffan Angere, Synthese, 2007, 157(3): 321–335
The impossibility results of Bovens and Hartmann (2003, Bayesian epistemology. Oxford: Clarendon Press) and Olsson (2005, Against coherence: Truth, probability and justification. Oxford: Oxford University Press) show that the link between coherence and probability is not as strong as some have supposed. This paper is an attempt to bring out a way in which coherence reasoning can nevertheless be justified, based on the idea that, even if it does not provide an infallible guide to probability, it can give us an indication thereof. It is further shown that this actually is the case for several of the coherence measures discussed in the literature so far. We also discuss how this affects the possibility of using coherence as a means of epistemic justification.
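For illustration, one of the probabilistic coherence measures discussed in this literature is Shogenji's ratio measure, shown below; the paper's argument concerns several such measures, not this one alone:

C_S(A_1, \dots, A_n) = \frac{P(A_1 \wedge \dots \wedge A_n)}{P(A_1) \cdots P(A_n)}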
70.
T. L. Griffiths and J. B. Tenenbaum, Cognition, 2007, 103(2): 180–226
People's reactions to coincidences are often cited as an illustration of the irrationality of human reasoning about chance. We argue that coincidences may be better understood in terms of rational statistical inference, based on their functional role in processes of causal discovery and theory revision. We present a formal definition of coincidences in the context of a Bayesian framework for causal induction: a coincidence is an event that provides support for an alternative to a currently favored causal theory, but not necessarily enough support to accept that alternative in light of its low prior probability. We test the qualitative and quantitative predictions of this account through a series of experiments that examine the transition from coincidence to evidence, the correspondence between the strength of coincidences and the statistical support for causal structure, and the relationship between causes and coincidences. Our results indicate that people can accurately assess the strength of coincidences, suggesting that irrational conclusions drawn from coincidences are the consequence of overestimation of the plausibility of novel causal forces. We discuss the implications of our account for understanding the role of coincidences in theory change.
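The definition in the abstract can be rendered schematically in terms of posterior odds (the notation below is assumed for illustration, not quoted from the paper): data d constitute a coincidence when the likelihood ratio strongly favors an alternative theory h1 over the currently favored theory h0, yet the prior odds for h1 are low enough that the posterior odds still fall short of accepting h1.

\frac{P(h_1 \mid d)}{P(h_0 \mid d)} \;=\; \underbrace{\frac{P(d \mid h_1)}{P(d \mid h_0)}}_{\text{large for a coincidence}} \times \underbrace{\frac{P(h_1)}{P(h_0)}}_{\text{small prior odds}}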