3.
Probabilities and polarity biases in conditional inference   Total citations: 15 (self-citations: 0; citations by others: 15)
A probabilistic computational level model of conditional inference is proposed that can explain polarity biases in conditional inference (e.g., J. St. B. T. Evans, 1993). These biases are observed when J. St. B. T. Evans's (1972) negations paradigm is used in the conditional inference task. The model assumes that negations define higher probability categories than their affirmative counterparts (M. Oaksford & K. Stenning, 1992); for example, P(not-dog) > P(dog). This identification suggests that polarity biases are really a rational effect of high-probability categories. Three experiments revealed that, consistent with this probabilistic account, when high-probability categories are used instead of negations, a high-probability conclusion effect is observed. The relationships between the probabilistic model and other phenomena and other theories in conditional reasoning are discussed.
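The probabilistic account above can be sketched in a few lines of code. This is an illustrative reconstruction, not the published model: the function name `inference_probs` and the example marginals are invented, and only the core idea is shown, namely that conclusions built from high-probability (negated) categories come out with higher conditional probabilities.

```python
# Illustrative sketch (not the published model): endorsement of the four
# conditional inferences as conditional probabilities, given the marginals
# P(p), P(q) and an exception rate P(not-q | p) for the rule "if p then q".

def inference_probs(p_p, p_q, p_notq_given_p):
    """Return P(conclusion | categorical premise) for MP, MT, AC, DA."""
    p_q_given_p = 1.0 - p_notq_given_p
    p_pq = p_p * p_q_given_p                 # P(p & q)
    # P(not-p & not-q) = 1 - P(p) - P(q) + P(p & q)
    p_notp_notq = 1.0 - p_p - p_q + p_pq
    return {
        "MP": p_q_given_p,                   # P(q | p)
        "MT": p_notp_notq / (1.0 - p_q),     # P(not-p | not-q)
        "AC": p_pq / p_q,                    # P(p | q)
        "DA": p_notp_notq / (1.0 - p_p),     # P(not-q | not-p)
    }

# "If it is a dog, it barks": P(dog) is low, so P(not-dog) is high, and the
# inferences with negated conclusions (DA, MT) receive the highest values.
probs = inference_probs(p_p=0.1, p_q=0.2, p_notq_given_p=0.1)
```

With these (invented) marginals, MT comes out above MP and DA above AC, mirroring the high-probability conclusion effect described in the abstract.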
4.
Olivers, C. N., Chater, N., & Watson, D. G. (2004). Psychological Review, 111(1), 242–260; author reply 261–273.
P. A. van der Helm and E. L. J. Leeuwenberg (1996) outlined a holographic account of figural goodness of a perceptual stimulus. The theory is mathematically precise and can be applied to a broad spectrum of empirical data. The authors argue, however, that the account is inadequate on both theoretical and empirical grounds. The theoretical difficulties concern the internal consistency of the account and its reliance on unspecified auxiliary assumptions. The account also makes counterintuitive empirical predictions, which do not fit past data or the results of a series of new experimental studies.
6.
Probabilistic methods are providing new explanatory approaches to fundamental cognitive science questions of how humans structure, process and acquire language. This review examines probabilistic models defined over traditional symbolic structures. Language comprehension and production involve probabilistic inference in such models; and acquisition involves choosing the best model, given innate constraints and linguistic and other input. Probabilistic models can account for the learning and processing of language, while maintaining the sophistication of symbolic models. A recent burgeoning of theoretical developments and online corpus creation has enabled large models to be tested, revealing probabilistic constraints in processing, undermining acquisition arguments based on a perceived poverty of the stimulus, and suggesting fruitful links with probabilistic theories of categorization and ambiguity resolution in perception.
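The idea of probabilistic models "defined over traditional symbolic structures" can be illustrated with a toy probabilistic context-free grammar, in which a parse tree's probability is the product of its rule probabilities. The grammar, its rules, and its probabilities below are invented purely for illustration.

```python
# Toy PCFG: each key is a rule (lhs, rhs) and each value its probability.
# Probabilities of all rules sharing a left-hand side sum to 1.
PCFG = {
    ("S",  ("NP", "VP")): 1.0,
    ("NP", ("dogs",)):    0.4,
    ("NP", ("cats",)):    0.6,
    ("VP", ("bark",)):    0.7,
    ("VP", ("sleep",)):   0.3,
}

def tree_prob(tree):
    """tree = (lhs, children); leaves are plain strings with probability 1."""
    if isinstance(tree, str):
        return 1.0
    lhs, children = tree
    rhs = tuple(c if isinstance(c, str) else c[0] for c in children)
    p = PCFG[(lhs, rhs)]
    for child in children:
        p *= tree_prob(child)
    return p

parse = ("S", (("NP", ("dogs",)), ("VP", ("bark",))))
# P(parse) = 1.0 * 0.4 * 0.7 = 0.28
```

Comprehension then amounts to preferring the highest-probability parse of an ambiguous string, and acquisition to choosing rule probabilities (or rules) that best fit the input, which is the sense in which probabilistic inference rides on top of symbolic structure.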
7.
We examined the learning process with 3 sets of stimuli that have identical symbolic structure but differ in appearance (meaningless letter strings, arrangements of geometric shapes, and sequences of cities). One hypothesis is that the learning process aims to encode symbolic regularity in the same way, largely regardless of appearance. Another is that different types of stimuli bias the learning process to operate in different ways. Using the experimental paradigm of artificial grammar learning, we provided a preliminary test of these hypotheses. In Experiments 1 and 2 we measured performance in terms of grammaticality and found no difference across the 3 sets of stimuli. In Experiment 3 we analyzed performance in terms of both grammaticality and chunk strength. Again we found no differences in performance. Our tentative conclusion is that the learning process aims to encode symbolic regularity independent of stimulus appearance.
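For readers unfamiliar with the chunk-strength measure mentioned in Experiment 3, a common formulation in the artificial grammar learning literature (associative chunk strength: the mean training-set frequency of a test string's bigram and trigram fragments) can be sketched as follows. The training strings here are made up, and this may not be the exact variant the authors used.

```python
from collections import Counter

def chunks(s):
    """All bigram and trigram fragments of a string, in order."""
    return [s[i:i + n] for n in (2, 3) for i in range(len(s) - n + 1)]

def chunk_strength(test_string, training_strings):
    """Mean frequency, across the training set, of the test string's chunks."""
    freq = Counter(c for t in training_strings for c in chunks(t))
    cs = chunks(test_string)
    return sum(freq[c] for c in cs) / len(cs)

training = ["MXRV", "MXRM", "VXRR"]
# Every fragment of "MXRV" occurs in training, so its chunk strength is
# higher than that of a string built from unfamiliar fragments like "RVVM".
```

A high-chunk-strength test item can feel familiar even when it is ungrammatical, which is why chunk strength is analyzed alongside grammaticality.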
8.
Howard (1992) defines concepts as the information that a person has about a category, and argues for an eclectic theory of concepts on the basis of this definition. We argue that this definition is unacceptable and hence that eclecticism does not follow. First, the definition is circular as it stands. Secondly, when it is modified to avoid circularity, it implies conceptual holism, according to which concepts are not useful explanatory constructs in psychology. Thirdly, we argue that Howard's argument relies essentially on this unacceptable definition: alternative accounts of concepts, namely categorisational or representational views, do not support it. Having countered the argument for eclecticism, we then argue against it directly on methodological grounds.
9.
We address the problem of predicting how people will spontaneously divide a set of novel items into groups. This is a process akin to perceptual organization. We therefore employ the simplicity principle from perceptual organization to propose a simplicity model of unconstrained spontaneous grouping. The simplicity model predicts that people will prefer the categories for a set of novel items that provide the simplest encoding of these items. Classification predictions are derived from the model without information about either the number of categories sought or the distributional properties of the objects to be classified. These features of the simplicity model distinguish it from other models of unsupervised categorization (where, for example, the number of categories sought is determined via a free parameter), and we discuss how these computational differences are related to differences in modeling objectives. The predictions of the simplicity model are validated in four experiments. We also discuss the significance of simplicity in cognitive modeling more generally.
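The "simplest encoding" idea can be sketched in minimum-description-length terms: score each candidate grouping by total code length and prefer the shortest. The coding scheme below (label bits per item, plus bits for each item's position within its cluster's range) is invented for illustration and is much cruder than the authors' model; it only shows the shape of the comparison.

```python
import math

def code_length(items, grouping):
    """Toy MDL score for a partition `grouping` of numeric `items`."""
    n = len(items)
    k = len(grouping)
    bits = n * math.log2(k) if k > 1 else 0.0     # a cluster label per item
    for cluster in grouping:
        spread = max(cluster) - min(cluster)
        cost = math.log2(spread + 2)              # toy per-value cost
        bits += cost                              # cluster "location"
        bits += len(cluster) * cost               # members, relative to it
    return bits

items = [1, 2, 3, 101, 102, 103]
tight = [[1, 2, 3], [101, 102, 103]]
loose = [[1, 2, 3, 101, 102, 103]]
# Two tight clusters encode the items in fewer bits than one wide cluster,
# so this toy simplicity criterion prefers the two-group partition.
```

Note that nothing in the comparison fixes the number of clusters in advance: the winning partition, and hence the number of categories, falls out of the code-length comparison itself, which is the property the abstract emphasizes.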
10.
Oaksford and Chater (2014, Thinking and Reasoning, 20, 269–295, doi:10.1080/13546783.2013.877401) critiqued the logic programming (LP) approach to non-monotonicity and proposed that a Bayesian probabilistic approach to conditional reasoning provides a more empirically adequate theory. The current paper is a reply to Stenning and van Lambalgen's rejoinder to that earlier paper, entitled 'Logic programming, probability, and two-system accounts of reasoning: a rejoinder to Oaksford and Chater' (2016, Thinking and Reasoning). It is argued that causation is basic in human cognition and that explaining how abnormality lists are created in LP requires causal models. Each specific rejoinder to the original critique is then addressed. While many areas of agreement are identified, with respect to the key differences it is concluded that the current evidence favours the Bayesian approach, at least for the moment.

Copyright©北京勤云科技发展有限公司  京ICP备09084417号