Similar Literature
20 similar documents retrieved (search time: 20 ms)
1.
A widespread assumption in the contemporary discussion of probabilistic models of cognition, often attributed to the Bayesian program, is that inference is optimal when the observer's priors match the true priors in the world—the actual “statistics of the environment.” But in fact the idea of a “true” prior plays no role in traditional Bayesian philosophy, which regards probability as a quantification of belief, not an objective characteristic of the world. In this paper I discuss the significance of the traditional Bayesian epistemic view of probability and its mismatch with the more objectivist assumptions about probability that are widely held in contemporary cognitive science. I then introduce a novel mathematical framework, the observer lattice, that aims to clarify this issue while avoiding philosophically tendentious assumptions. The mathematical argument shows that even if we assume that “ground truth” probabilities actually do exist, there is no objective way to tell what they are. Different observers, conditioning on different information, will inevitably have different probability estimates, and there is no general procedure to determine which one is right. The argument sheds light on the use of probabilistic models in cognitive science, and in particular on what exactly it means for the mind to be “tuned” to its environment.

2.
In the present work, the most relevant evidence in the causal learning literature is reviewed and a general cognitive architecture based on the available corpus of experimental data is proposed. However, contrary to algorithms formulated in the Bayesian nets framework, this architecture is not assumed to optimise the usefulness of the available information in order to induce the underlying causal structure as a whole. Instead, human reasoners seem to rely heavily on local clues and previous knowledge to discriminate between spurious and truly causal covariations, and to piece those relations together only when required to do so. Bayesian networks and AI algorithms for causal inference are nonetheless considered valuable tools for identifying the main computational goals of causal induction processes and for defining the problems any intelligent causal inference system must solve.

3.
Algorithms for approximate Bayesian inference, such as those based on sampling (i.e., Monte Carlo methods), provide a natural source of models of how people may deal with uncertainty with limited cognitive resources. Here, we consider the idea that individual differences in working memory capacity (WMC) may be usefully modeled in terms of the number of samples, or “particles,” available to perform inference. To test this idea, we focus on two recent experiments that report positive associations between WMC and two distinct aspects of categorization performance: the ability to learn novel categories, and the ability to switch between different categorization strategies (“knowledge restructuring”). In favor of the idea of modeling WMC as a number of particles, we show that a single model can reproduce both experimental results by varying the number of particles—increasing the number of particles leads to both faster category learning and improved strategy‐switching. Furthermore, when we fit the model to individual participants, we found a positive association between WMC and best‐fit number of particles for strategy switching. However, no association between WMC and best‐fit number of particles was found for category learning. These results are discussed in the context of the general challenge of disentangling the contributions of different potential sources of behavioral variability.
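The abstract's central mapping, from working memory capacity to a number of samples, can be illustrated with a minimal Monte Carlo sketch. This is not the authors' particle-filter model; the target posterior value, particle counts, and repetition count are all illustrative assumptions:

```python
import numpy as np

def particle_posterior_estimate(p_true, n_particles, rng):
    """Approximate a posterior probability by Monte Carlo: draw
    n_particles samples and return the empirical frequency."""
    samples = rng.random(n_particles) < p_true
    return samples.mean()

rng = np.random.default_rng(0)
p_true = 0.7   # hypothetical posterior probability of category A
reps = 500
err = {}
for n in (5, 100):  # few vs. many particles (low vs. high WMC, on this account)
    estimates = [particle_posterior_estimate(p_true, n, rng) for _ in range(reps)]
    err[n] = float(np.mean([abs(e - p_true) for e in estimates]))
# More particles -> lower approximation error, i.e., behavior closer to
# exact Bayesian inference.
```

On this toy account, an observer with 100 particles tracks the posterior far more accurately than one with 5, which is the qualitative pattern the model uses to explain WMC differences.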

4.
Human vision supports social perception by efficiently detecting agents and extracting rich information about their actions, goals, and intentions. Here, we explore the cognitive architecture of perceived animacy by constructing Bayesian models that integrate domain‐specific hypotheses of social agency with domain‐general cognitive constraints on sensory, memory, and attentional processing. Our model posits that perceived animacy combines a bottom–up, feature‐based, parallel search for goal‐directed movements with a top–down selection process for intent inference. The interaction of these architecturally distinct processes makes perceived animacy fast, flexible, and yet cognitively efficient. In the context of chasing, in which a predator (the “wolf”) pursues a prey (the “sheep”), our model addresses the computational challenge of identifying target agents among varying numbers of distractor objects, despite a quadratic increase in the number of possible interactions as more objects appear in a scene. By comparing modeling results with human psychophysics in several studies, we show that the effectiveness and efficiency of human perceived animacy can be explained by a Bayesian ideal observer model with realistic cognitive constraints. These results provide an understanding of perceived animacy at the algorithmic level—how it is achieved by cognitive mechanisms such as attention and working memory, and how it can be integrated with higher‐level reasoning about social agency.

5.
We present an introduction to Bayesian inference as it is used in probabilistic models of cognitive development. Our goal is to provide an intuitive and accessible guide to the what, the how, and the why of the Bayesian approach: what sorts of problems and data the framework is most relevant for, and how and why it may be useful for developmentalists. We emphasize a qualitative understanding of Bayesian inference, but also include information about additional resources for those interested in the cognitive science applications, mathematical foundations, or machine learning details in more depth. In addition, we discuss some important interpretation issues that often arise when evaluating Bayesian models in cognitive science.

6.
For over 300 years, the humble triangle has served as the paradigmatic example of the problem of abstraction. How can we have the idea of a general triangle even though every experience with triangles is with specific ones? Classical cognitive science seemed to provide an answer in symbolic representation. With its easily enumerated necessary and sufficient conditions, the triangle would appear to be an ideal candidate for being represented in a symbolic form. I show that it is not. Across a variety of tasks—drawing, speeded recognition, unspeeded visual judgments, and inference—representations of triangles appear to be graded and context dependent. I show that using the category name “triangle” activates a more prototypical representation than using an arguably coextensive cue, “three-sided polygon”. For example, when asked to draw “triangles” people draw more typical triangles than when asked to draw “three-sided polygons”. Altogether, the results support the view that (even formal) concepts have a graded and flexible structure, which takes on a more prototypical and stable form when activated by category labels.

7.
Previous studies have shown that people often use heuristics in making inferences and that subjective memory experiences, such as recognition or familiarity of objects, can be valid cues for inferences. So far, many researchers have used the binary choice task in which two objects are presented as alternatives (e.g., “Which city has the larger population, city A or city B?”). However, objects can be presented not only as alternatives but also in a question (e.g., “Which country is city X in, country A or country B?”). In such a situation, people can make inferences based on the relationship between the object in the question and each object given as an alternative. In the present study, we call this type of task a “relationships-comparison task.” We modeled the three inference strategies that people could apply to solve it (familiarity-matching [FM; a new heuristic we propose in this study], familiarity heuristic [FH], and knowledge-based inference [KI]) to examine people's inference processes. Through Studies 1, 2, and 3, we found that (a) people tended to rely on heuristics, and that FM (inferences based on similarity in familiarity between objects) well explained participants' inference patterns; (b) FM could work as an ecologically rational strategy for the relationships-comparison task since it could effectively reflect environmental structures, and that the use of FM could be highly replicable and robust; and (c) people could sometimes use a decision strategy like FM, even in their daily lives (consumer behaviors). The nature of the relationships-comparison task and human cognitive processes is discussed.

8.
The purpose of this research was to determine the mechanisms underlying the graphical effect identified by Stone, Yates, and Parker (1997), in which graphical formats for conveying risk information are more effective than numerical formats for increasing risk-avoidant behavior. Two experiments tested whether this graphical effect occurred because the graphical formats used by Stone et al. highlighted the number of people harmed by the focal hazard, causing the decisions to be based mainly on the number of people harmed (which we label the “foreground”) at the expense of the total number of people at risk of harm (which we call the “background”). Specifically, two graphical formats were developed that displayed pictorially both the number of people harmed and the total number at risk, and use of these display formats eliminated the graphical effect. We thus propose that the previously discussed graphical effect was in fact a manifestation of a more general foreground:background salience effect, whereby displays that highlight the number of people harmed at the expense of the total number of people at risk of harm lead to greater risk avoidance. Theoretical and practical implications are discussed.

9.
Many philosophers have claimed that Bayesianism can provide a simple justification for hypothetico-deductive (H-D) inference, long regarded as a cornerstone of the scientific method. Following up a remark of van Fraassen (1985), we analyze a problem for the putative Bayesian justification of H-D inference in the case where what we learn from observation is logically stronger than what our theory implies. Firstly, we demonstrate that in such cases the simple Bayesian justification does not necessarily apply. Secondly, we identify a set of sufficient conditions for the mismatch in logical strength to be justifiably ignored as a “harmless idealization”. Thirdly, we argue, based upon scientific examples, that the pattern of H-D inference of which there is a ready Bayesian justification is only rarely the pattern that one actually finds at work in science. Whatever the other virtues of Bayesianism, the idea that it yields a simple justification of a pervasive pattern of scientific inference appears to have been oversold.
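The core difficulty can be made concrete in a toy uniform probability space (the sets below are illustrative, not taken from the paper): when H entails E but the observation Ep is logically stronger than E, conditioning on Ep need not confirm H.

```python
from fractions import Fraction

omega = {1, 2, 3, 4, 5, 6}  # uniform toy probability space

def pr(a):
    return Fraction(len(a), len(omega))

def pr_given(a, b):
    return Fraction(len(a & b), len(b))

H  = {1}        # hypothesis
E  = {1, 2, 3}  # consequence of H: H is a subset of E, so P(E | H) = 1
Ep = {2, 3}     # actual observation: logically stronger than E (Ep ⊆ E)

# Simple H-D confirmation: learning exactly E raises the probability of H...
assert pr_given(H, E) > pr(H)
# ...but learning the stronger Ep can lower it, here all the way to zero.
assert pr_given(H, Ep) < pr(H)
```

Here P(H) = 1/6 rises to 1/3 on learning E, yet falls to 0 on learning Ep, even though Ep entails E; this is the mismatch in logical strength the abstract describes.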

10.
11.
A large body of research on human attribution judgments shows that people frequently violate the axioms of rational probability. Using the Linda problem and similar scenarios, Tversky and Kahneman (1983) found that people systematically violate normative standards of rational inference by judging a conjunctive event to be more probable than one of its constituent events, a phenomenon termed the conjunction fallacy, which they attributed to people's use of the representativeness heuristic to judge probability. However, the heuristic explanation of the conjunction fallacy is too vague. This paper first introduces the conjunction fallacy and its explanatory models, and then applies the equate-to-differentiate model of decision making under uncertainty proposed by Li (1994, 2004) to offer a new explanation of how the conjunction fallacy arises in the Linda problem.
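Whatever the psychological explanation, the normative rule the fallacy violates is simple set inclusion: a conjunction can never be more probable than either of its conjuncts. A minimal simulation with arbitrary illustrative probabilities makes the point:

```python
import random

random.seed(1)
# Simulate an arbitrary joint distribution over two binary properties,
# e.g., "is a bank teller" (A) and "is a feminist" (B) in the Linda problem.
# The probabilities below are illustrative, not estimates from any study.
n = 100_000
count_a = count_b = count_ab = 0
for _ in range(n):
    a = random.random() < 0.3
    b = random.random() < (0.9 if a else 0.5)   # B correlated with A
    count_a += a
    count_b += b
    count_ab += a and b
# Conjunction rule: the count (and hence probability) of "A and B" can
# never exceed that of A alone or of B alone, whatever the correlation.
```

However the two properties are correlated, every case counted in `count_ab` is also counted in `count_a` and `count_b`, which is exactly why judging the conjunction more probable is a fallacy.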

12.
It is unclear how children learn labels for multiple overlapping categories such as “Labrador,” “dog,” and “animal.” Xu and Tenenbaum (2007a) suggested that learners infer correct meanings with the help of Bayesian inference. They instantiated these claims in a Bayesian model, which they tested with preschoolers and adults. Here, we report data testing a developmental prediction of the Bayesian model—that more knowledge should lead to narrower category inferences when presented with multiple subordinate exemplars. Two experiments did not support this prediction. Children with more category knowledge showed broader generalization when presented with multiple subordinate exemplars, compared to less knowledgeable children and adults. This implies a U‐shaped developmental trend. The Bayesian model was not able to account for these data, even with inputs that reflected the similarity judgments of children. We discuss implications for the Bayesian model, including a combined Bayesian/morphological knowledge account that could explain the demonstrated U‐shaped trend.

13.
Jones M, Love BC. The Behavioral and Brain Sciences, 2011, 34(4): 169-188; discussion 188-231
The prominence of Bayesian modeling of cognition has increased recently largely because of mathematical advances in specifying and deriving predictions from complex probabilistic models. Much of this research aims to demonstrate that cognitive behavior can be explained from rational principles alone, without recourse to psychological or neurological processes and representations. We note commonalities between this rational approach and other movements in psychology, namely Behaviorism and evolutionary psychology, that set aside mechanistic explanations or make use of optimality assumptions. Through these comparisons, we identify a number of challenges that limit the rational program's potential contribution to psychological theory. Specifically, rational Bayesian models are significantly unconstrained, both because they are uninformed by a wide range of process-level data and because their assumptions about the environment are generally not grounded in empirical measurement. The psychological implications of most Bayesian models are also unclear. Bayesian inference itself is conceptually trivial, but strong assumptions are often embedded in the hypothesis sets and the approximation algorithms used to derive model predictions, without a clear delineation between psychological commitments and implementational details. Comparing multiple Bayesian models of the same task is rare, as is the realization that many Bayesian models recapitulate existing (mechanistic level) theories. Despite the expressive power of current Bayesian models, we argue they must be developed in conjunction with mechanistic considerations to offer substantive explanations of cognition. We lay out several means for such an integration, which take into account the representations on which Bayesian inference operates, as well as the algorithms and heuristics that carry it out. We argue this unification will better facilitate lasting contributions to psychological theory, avoiding the pitfalls that have plagued previous theoretical movements.

14.
Neuroimaging studies have contributed to a major advance in understanding the neural and cognitive mechanisms underpinning deductive reasoning. However, the dynamics of cognitive events associated with inference making have been largely neglected. Using electroencephalography, the present study aims at describing the rapid sequence of processes involved in performing transitive inference (A B; B C, therefore “A C”, with “A B” meaning “A is to the left of B”). The results indicate that when the second premise can be integrated into the first one (e.g., A B; B C), its processing elicits a P3b component. In contrast, when the second premise cannot be integrated into the first premise (e.g., A B; D C), a P600-like component is elicited. These ERP components are discussed with respect to cognitive expectations.

15.
Statistical inference: learning in artificial neural networks
Artificial neural networks (ANNs) are widely used to model low-level neural activities and high-level cognitive functions. In this article, we review the applications of statistical inference for learning in ANNs. Statistical inference provides an objective way to derive learning algorithms both for training and for evaluation of the performance of trained ANNs. Solutions to the over-fitting problem by model-selection methods, based on either conventional statistical approaches or on a Bayesian approach, are discussed. The use of supervised and unsupervised learning algorithms for ANNs is reviewed. Training a multilayer ANN by supervised learning is equivalent to nonlinear regression. The ensemble methods bagging and arching, described here, can be applied to combine ANNs into a new predictor with improved performance. Unsupervised learning algorithms, derived either from the Hebbian law for bottom-up self-organization or from global objective functions for top-down self-organization, are also discussed.
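A minimal sketch of the bagging idea mentioned above, with a least-squares polynomial fit standing in for a trained network (the data, polynomial degree, and ensemble size are illustrative assumptions, not taken from the article):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression data: a noisy sine wave.
X = np.linspace(0, 1, 40).reshape(-1, 1)
y = np.sin(2 * np.pi * X[:, 0]) + rng.normal(0, 0.3, 40)

def fit_base(X, y, deg=5):
    """Base learner: a least-squares polynomial fit, standing in for one
    trained network (training an ANN is likewise nonlinear regression)."""
    coeffs = np.polyfit(X[:, 0], y, deg)
    return lambda Xq: np.polyval(coeffs, Xq[:, 0])

def bag(X, y, n_models=25):
    """Bagging: fit each base learner on a bootstrap resample of the
    training data, then average the learners' predictions."""
    models = []
    for _ in range(n_models):
        idx = rng.integers(0, len(X), len(X))
        models.append(fit_base(X[idx], y[idx]))
    return lambda Xq: np.mean([m(Xq) for m in models], axis=0)

ensemble = bag(X, y)
mse = float(((ensemble(X) - y) ** 2).mean())
```

Averaging over bootstrap resamples reduces the variance of the combined predictor, which is the mechanism behind the improved performance the review attributes to ensemble methods.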

16.
When evaluating cognitive models based on fits to observed data (or, really, any model that has free parameters), parameter estimation is critically important. Traditional techniques like hill climbing by minimizing or maximizing a fit statistic often result in point estimates. Bayesian approaches instead estimate parameters as posterior probability distributions, and thus naturally account for the uncertainty associated with parameter estimation; Bayesian approaches also offer powerful and principled methods for model comparison. Although software applications such as WinBUGS (Lunn, Thomas, Best, & Spiegelhalter, Statistics and Computing, 10, 325–337, 2000) and JAGS (Plummer, 2003) provide “turnkey”-style packages for Bayesian inference, they can be inefficient when dealing with models whose parameters are correlated, which is often the case for cognitive models, and they can impose significant technical barriers to adding custom distributions, which is often necessary when implementing cognitive models within a Bayesian framework. A recently developed software package called Stan (Stan Development Team, 2015) can solve both problems, as well as provide a turnkey solution to Bayesian inference. We present a tutorial on how to use Stan and how to add custom distributions to it, with an example using the linear ballistic accumulator model (Brown & Heathcote, Cognitive Psychology, 57, 153–178. doi: 10.1016/j.cogpsych.2007.12.002, 2008).

17.
In many learning or inference tasks human behavior approximates that of a Bayesian ideal observer, suggesting that, at some level, cognition can be described as Bayesian inference. However, a number of findings have highlighted an intriguing mismatch between human behavior and standard assumptions about optimality: People often appear to make decisions based on just one or a few samples from the appropriate posterior probability distribution, rather than using the full distribution. Although sampling‐based approximations are a common way to implement Bayesian inference, the very limited numbers of samples often used by humans seem insufficient to approximate the required probability distributions very accurately. Here, we consider this discrepancy in the broader framework of statistical decision theory, and ask: if people make decisions based on samples, and samples are costly, how many samples should they use to optimize their total expected or worst‐case reward over a large number of decisions? We find that under reasonable assumptions about the time costs of sampling, making many quick but locally suboptimal decisions based on very few samples may be the globally optimal strategy over long periods. These results help to reconcile a large body of work showing sampling‐based or probability matching behavior with the hypothesis that human cognition can be understood in Bayesian terms, and they suggest promising future directions for studies of resource‐constrained cognition.
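The trade-off the abstract describes can be sketched as a simple reward-rate calculation. The per-sample accuracy and time costs below are illustrative assumptions, not values from the paper:

```python
import math

def p_correct(p, k):
    """Probability that a majority vote over k posterior samples picks the
    better option, when each sample independently favors it with
    probability p (ties broken by a fair guess)."""
    total = 0.0
    for i in range(k + 1):
        w = math.comb(k, i) * p**i * (1 - p)**(k - i)
        if 2 * i > k:
            total += w
        elif 2 * i == k:
            total += 0.5 * w
    return total

p = 0.7            # assumed per-sample accuracy; illustrative
action_cost = 1.0  # fixed time per decision
sample_cost = 1.0  # assumed time cost per sample

def reward_rate(k):
    # Expected reward per unit time when every decision uses k samples.
    return p_correct(p, k) / (action_cost + sample_cost * k)

best_k = max(range(1, 101), key=reward_rate)
# Under these costs the rate is maximized by a single sample: many fast,
# individually suboptimal decisions beat fewer slow, careful ones.
```

Extra samples raise accuracy at most to 1, while the time denominator grows linearly, so once sampling is comparably costly to acting, the globally optimal number of samples is very small.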

18.
Dong D, Chen W. Psychological Science, 2022, (1): 235-241
The representational-computational view and the embodied-action view place almost diametrically opposed emphases on the nature of cognition. In recent years, the development of predictive processing theory has offered an opportunity to unify the two generations of cognitive science. Predictive processing is the joint name for two theoretical components, hierarchical predictive processing and active predictive processing: the former inherits the hierarchical computational approach of first-generation cognitive science, while the latter develops the action-related theories of second-generation cognitive science, and the two components are regarded as two aspects of a single unified theory. Predictive processing is now widely seen as a promising new paradigm for the future of cognitive science.

19.
The ability to understand the goals that drive another person’s actions is an important social and cognitive skill. This is no trivial task, because any given action may in principle be explained by different possible goals (e.g., one may wave one’s arm to hail a cab or to swat a mosquito). To select which goal best explains an observed action is a form of abduction. To explain how people perform such abductive inferences, Baker, Tenenbaum, and Saxe (2007) proposed a computational-level theory that formalizes goal inference as Bayesian inverse planning (BIP). It is known that general Bayesian inference, be it exact or approximate, is computationally intractable (NP-hard). As the time required for computationally intractable computations grows excessively fast when scaled from toy domains to the real world, it seems that such models cannot explain how humans can perform Bayesian inferences quickly in real world situations. In this paper we investigate how the BIP model can nevertheless explain how people are able to make goal inferences quickly. The approach that we propose builds on taking situational constraints explicitly into account in the computational-level model. We present a methodology for identifying situational constraints that render the model tractable. We discuss the implications of our findings and reflect on how the methodology can be applied to alternative models of goal inference and Bayesian models in general.
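The inversion at the heart of BIP, inferring P(goal | actions) from a forward model P(actions | goal), can be sketched in a two-goal toy problem. The goals, movement model, and probabilities below are illustrative assumptions, not part of the BIP model itself:

```python
from fractions import Fraction

# Two candidate goals; the agent is assumed to step toward its goal with
# probability 4/5 on each move. All numbers are illustrative.
goals = ["left", "right"]
prior = {g: Fraction(1, 2) for g in goals}

def step_likelihood(step, goal):
    return Fraction(4, 5) if step == goal else Fraction(1, 5)

def infer_goal(prior, steps):
    """Sequential Bayesian update: P(goal | steps) ∝ P(steps | goal) P(goal)."""
    post = dict(prior)
    for s in steps:
        post = {g: post[g] * step_likelihood(s, g) for g in goals}
        z = sum(post.values())
        post = {g: p / z for g, p in post.items()}
    return post

post = infer_goal(prior, ["right", "right", "left", "right"])
# Three of four observed steps go right, so "right" is the better
# explanation (posterior 16/17).
```

Even this tiny example shows why scaling is hard: with realistic state spaces the likelihood P(actions | goal) must itself come from planning, and the number of goal hypotheses grows quickly, which is where the paper's situational constraints come in.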

20.
Teaching Bayesian reasoning in less than two hours
The authors present and test a new method of teaching Bayesian reasoning, something about which previous teaching studies reported little success. Based on G. Gigerenzer and U. Hoffrage's (1995) ecological framework, the authors wrote a computerized tutorial program to train people to construct frequency representations (representation training) rather than to insert probabilities into Bayes's rule (rule training). Bayesian computations are simpler to perform with natural frequencies than with probabilities, and there are evolutionary reasons for assuming that cognitive algorithms have been developed to deal with natural frequencies. In 2 studies, the authors compared representation training with rule training; the criteria were an immediate learning effect, transfer to new problems, and long-term temporal stability. Rule training was as good in transfer as representation training, but representation training had a higher immediate learning effect and greater temporal stability.
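The claim that Bayesian computations are simpler with natural frequencies than with probabilities can be checked directly. A worked diagnostic-test example with illustrative numbers (not taken from the study):

```python
from fractions import Fraction

# Probability format: plug base rate, hit rate, and false-alarm rate into
# Bayes' rule.
base, hit, fa = Fraction(1, 100), Fraction(8, 10), Fraction(1, 10)
p_positive = base * hit + (1 - base) * fa
p_sick_given_positive = base * hit / p_positive

# Natural-frequency format: the same information as counts of people.
population = 1000
sick = population * base                      # 10 of 1,000 are sick
sick_positive = sick * hit                    # 8 of them test positive
healthy_positive = (population - sick) * fa   # 99 healthy people also do
freq_answer = sick_positive / (sick_positive + healthy_positive)

# Both formats yield 8/107; the frequency version reduces Bayes' rule to
# comparing two counts (8 positives among 107 positives overall).
```

The two computations are mathematically equivalent, but the frequency version replaces the full Bayes's-rule formula with a single ratio of counts, which is the simplification the representation training exploits.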


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号