Similar Documents
20 similar documents found (search time: 31 ms)
1.
When we try to identify causal relationships, how strong do we expect that relationship to be? Bayesian models of causal induction rely on assumptions regarding people’s a priori beliefs about causal systems, with recent research focusing on people’s expectations about the strength of causes. These expectations are expressed in terms of prior probability distributions. While proposals about the form of such prior distributions have been made previously, many different distributions are possible, making it difficult to test such proposals exhaustively. In Experiment 1 we used iterated learning—a method in which participants make inferences about data generated based on their own responses in previous trials—to estimate participants’ prior beliefs about the strengths of causes. This method produced estimated prior distributions that were quite different from those previously proposed in the literature. Experiment 2 collected a large set of human judgments on the strength of causal relationships to be used as a benchmark for evaluating different models, using stimuli that cover a wider and more systematic set of contingencies than previous research. Using these judgments, we evaluated the predictions of various Bayesian models. The Bayesian model with priors estimated via iterated learning compared favorably against the others. Experiment 3 estimated participants’ prior beliefs concerning different causal systems, revealing key similarities in their expectations across diverse scenarios.
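The iterated-learning procedure described in this abstract can be simulated directly. Below is a minimal sketch (not the authors' code; the beta-binomial setup, parameter values, and function names are illustrative) showing why the method recovers priors: when each simulated learner samples a causal strength from the posterior given data generated by the previous learner, the chain's stationary distribution is the prior itself.

```python
import random

# Iterated learning over a causal strength w in [0, 1]. The "true"
# prior is Beta(A, B). Each simulated learner sees N Bernoulli trials
# generated with the previous learner's strength, then samples a new
# strength from the posterior Beta(A + successes, B + failures).
# The chain's stationary distribution is the prior itself, so long-run
# chain samples estimate the prior distribution.
A, B, N = 2.0, 5.0, 10

def iterate_chain(generations, seed=0):
    rng = random.Random(seed)
    w = rng.betavariate(A, B)                        # initial strength
    samples = []
    for _ in range(generations):
        k = sum(rng.random() < w for _ in range(N))  # data from current w
        w = rng.betavariate(A + k, B + N - k)        # posterior sample
        samples.append(w)
    return samples

chain = iterate_chain(20000)
mean_est = sum(chain[1000:]) / len(chain[1000:])
print(mean_est)   # should approach the prior mean A / (A + B) ≈ 0.286
```

Because sampling data given the current strength and then a strength given the data is a Gibbs sweep over the joint distribution, the marginal over strengths converges to the prior regardless of the starting point.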

2.
Although we live in a complex and multi-causal world, learners often lack sufficient data and/or cognitive resources to acquire a fully veridical causal model. The general goal of making precise predictions with energy-efficient representations suggests a generic prior favoring causal models that include a relatively small number of strong causes. Such “sparse and strong” priors make it possible to quickly identify the most potent individual causes, relegating weaker causes to secondary status or eliminating them from consideration altogether. Sparse-and-strong priors predict that competition will be observed between candidate causes of the same polarity (i.e., generative or else preventive) even if they occur independently. For instance, the strength of a moderately strong cause should be underestimated when an uncorrelated strong cause also occurs in the general learning environment, relative to when a weaker cause also occurs. We report three experiments investigating whether independently-occurring causes (either generative or preventive) compete when people make judgments of causal strength. Cue competition was indeed observed for both generative and preventive causes. The data were used to assess alternative computational models of human learning in complex multi-causal situations.

3.
People learn quickly when reasoning about causal relationships, making inferences from limited data and avoiding spurious inferences. Efficient learning depends on abstract knowledge, which is often domain or context specific, and much of it must be learned. While such knowledge effects are well documented, little is known about exactly how we acquire knowledge that constrains learning. This work focuses on knowledge of the functional form of causal relationships; there are many kinds of relationships that can apply between causes and their effects, and knowledge of the form such a relationship takes is important in order to quickly identify the real causes of an observed effect. We developed a hierarchical Bayesian model of the acquisition of knowledge of the functional form of causal relationships and tested it in five experimental studies, considering disjunctive and conjunctive relationships, failure rates, and cross-domain effects. The Bayesian model accurately predicted human judgments and outperformed several alternative models.

4.
The study tests the hypothesis that conditional probability judgments can be influenced by causal links between the target event and the evidence even when the statistical relations among variables are held constant. Three experiments varied the causal structure relating three variables and found that (a) the target event was perceived as more probable when it was linked to evidence by a causal chain than when both variables shared a common cause; (b) predictive chains in which evidence is a cause of the hypothesis gave rise to higher judgments than diagnostic chains in which evidence is an effect of the hypothesis; and (c) direct chains gave rise to higher judgments than indirect chains. A Bayesian learning model was applied to our data but failed to explain them. An explanation-based hypothesis stating that statistical information will affect judgments only to the extent that it changes beliefs about causal structure is consistent with the results.

5.
In existing models of causal induction, 4 types of covariation information (i.e., presence/absence of an event followed by presence/absence of another event) always exert identical influences on causal strength judgments (e.g., joint presence of events always suggests a generative causal relationship). In contrast, we suggest that, due to expectations developed during causal learning, learners give varied interpretations to covariation information as it is encountered and that these interpretations influence the resulting causal beliefs. In Experiments 1A-1C, participants' interpretations of observations during a causal learning task were dynamic, expectation based, and, furthermore, strongly tied to subsequent causal judgments. Experiment 2 demonstrated that adding trials of joint absence or joint presence of events, whose roles have been traditionally interpreted as increasing causal strengths, could result in decreased overall causal judgments and that adding trials where one event occurs in the absence of another, whose roles have been traditionally interpreted as decreasing causal strengths, could result in increased overall causal judgments. We discuss implications for traditional models of causal learning and how a more top-down approach (e.g., Bayesian) would be more compatible with the current findings.

6.
Learning to understand a single causal system can be an achievement, but humans must learn about multiple causal systems over the course of a lifetime. We present a hierarchical Bayesian framework that helps to explain how learning about several causal systems can accelerate learning about systems that are subsequently encountered. Given experience with a set of objects, our framework learns a causal model for each object and a causal schema that captures commonalities among these causal models. The schema organizes the objects into categories and specifies the causal powers and characteristic features of these categories and the characteristic causal interactions between categories. A schema of this kind allows causal models for subsequent objects to be rapidly learned, and we explore this accelerated learning in four experiments. Our results confirm that humans learn rapidly about the causal powers of novel objects, and we show that our framework accounts better for our data than alternative models of causal learning.

7.
8.
The application of the formal framework of causal Bayesian Networks to children’s causal learning provides the motivation to examine the link between judgments about the causal structure of a system, and the ability to make inferences about interventions on components of the system. Three experiments examined whether children are able to make correct inferences about interventions on different causal structures. The first two experiments examined whether children’s causal structure and intervention judgments were consistent with one another. In Experiment 1, children aged between 4 and 8 years made causal structure judgments on a three-component causal system followed by counterfactual intervention judgments. In Experiment 2, children’s causal structure judgments were followed by intervention judgments phrased as future hypotheticals. In Experiment 3, we explicitly told children what the correct causal structure was and asked them to make intervention judgments. The results of the three experiments suggest that the representations that support causal structure judgments do not easily support simple judgments about interventions in children. We discuss our findings in light of strong interventionist claims that the two types of judgments should be closely linked.

9.
梁莘娅, 杨艳云. 《心理科学》 (Psychological Science), 2016, 39(5): 1256-1267
Structural equation modeling (SEM) is widely used for statistical analysis in psychology, education, and the social sciences. The most common estimators in SEM are based on the normal distribution, such as maximum likelihood (ML). These methods rest on two assumptions. First, the theoretical model must correctly capture the relations among the variables (the structural assumption). Second, the data must follow a multivariate normal distribution (the distributional assumption). When these assumptions are violated, normal-theory estimators can yield incorrect chi-square statistics, incorrect fit indices, and biased parameter estimates and standard errors. In practice, almost no theoretical model captures the relations among variables exactly, and data are frequently non-normal. Newer estimation methods have therefore been developed that either do not require multivariate normality in theory or correct the results distorted by non-normality. Two currently popular approaches are robust maximum likelihood (robust ML) and Bayesian estimation. Robust ML applies the Satorra and Bentler (1994) corrections to the chi-square statistic and the standard errors of parameter estimates, while the parameter estimates themselves are identical to those from ordinary ML. Bayesian estimation rests on Bayes' theorem: the posterior distribution of the parameters is proportional to the product of the prior distribution and the likelihood of the data, and the posterior is typically simulated with Markov chain Monte Carlo (MCMC) algorithms. Previous comparisons of robust ML and Bayesian estimation were restricted to settings in which the theoretical model is correct. The present study focuses on settings in which the theoretical model is misspecified, and also considers non-normal data. The model used is a confirmatory factor model, and all data were generated by computer simulation. Data generation varied three factors: 8 factor structures, 3 variable distributions, and 3 sample sizes, yielding 72 simulation conditions (72 = 8 × 3 × 3). Under each condition, 2,000 data sets were generated, and each data set was fit with two models, one correctly specified and one misspecified. Each model was estimated with both methods: robust ML and Bayesian estimation, the latter with noninformative priors. The analyses focused on model rejection rates, fit indices, parameter estimates, and the standard errors of parameter estimates. The results show that, with sufficient sample size, the two methods produce very similar parameter estimates. When the data are non-normal, Bayesian estimation rejects the misspecified model better than robust ML. However, when the sample size is small and the data are normally distributed, Bayesian estimation shows almost no advantage in rejecting misspecified models or in parameter estimation, and under some conditions performs worse than robust ML.
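The Bayesian machinery this abstract relies on (posterior ∝ prior × likelihood, simulated by MCMC) can be illustrated with a toy example. The sketch below is not from the study; the data and settings are invented. It fits the mean of normal data with a random-walk Metropolis sampler and checks the result against the conjugate closed-form posterior:

```python
import math
import random

# Toy illustration of "posterior proportional to prior times likelihood",
# simulated with a random-walk Metropolis sampler: infer the mean mu of
# normal data with known sd = 1 under a Normal(0, tau^2) prior. The
# conjugate closed form gives an exact answer to check the sampler.
data = [1.2, 0.8, 1.5, 0.9, 1.1, 1.4, 0.7, 1.3]
tau = 10.0                                     # weakly informative prior sd

def log_post(mu):
    log_prior = -0.5 * (mu / tau) ** 2
    log_lik = -0.5 * sum((x - mu) ** 2 for x in data)
    return log_prior + log_lik

def metropolis(steps, seed=1):
    rng = random.Random(seed)
    mu, lp = 0.0, log_post(0.0)
    draws = []
    for _ in range(steps):
        prop = mu + rng.gauss(0.0, 0.5)            # random-walk proposal
        lp_prop = log_post(prop)
        if math.log(rng.random()) < lp_prop - lp:  # accept or reject
            mu, lp = prop, lp_prop
        draws.append(mu)                           # keep current state
    return draws

draws = metropolis(30000)[5000:]                   # discard burn-in
mcmc_mean = sum(draws) / len(draws)
# Conjugate check: posterior mean = sum(data) / (n + 1 / tau^2).
exact_mean = sum(data) / (len(data) + 1.0 / tau ** 2)
print(mcmc_mean, exact_mean)
```

The two printed means should agree closely; real SEM software replaces this scalar example with high-dimensional samplers, but the accept/reject logic is the same.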

10.
Latent variable models with many categorical items and multiple latent constructs result in many dimensions of numerical integration, and the traditional frequentist estimation approach, such as maximum likelihood (ML), tends to fail due to model complexity. In such cases, Bayesian estimation with diffuse priors can be used as a viable alternative to ML estimation. This study compares the performance of Bayesian estimation with ML estimation in estimating single or multiple ability factors across 2 types of measurement models in the structural equation modeling framework: a multidimensional item response theory (MIRT) model and a multiple-indicator multiple-cause (MIMIC) model. A Monte Carlo simulation study demonstrates that Bayesian estimation with diffuse priors, under various conditions, produces results quite comparable with ML estimation in the single- and multilevel MIRT and MIMIC models. Additionally, an empirical example utilizing the Multistate Bar Examination is provided to compare the practical utility of the MIRT and MIMIC models. Structural relationships among the ability factors, covariates, and a binary outcome variable are investigated through the single- and multilevel measurement models. The article concludes with a summary of the relative advantages of Bayesian estimation over ML estimation in MIRT and MIMIC models and suggests strategies for implementing these methods.

11.
Research on human causal induction has shown that people have general prior assumptions about causal strength and about how causes interact with the background. We propose that these prior assumptions about the parameters of causal systems do not only manifest themselves in estimations of causal strength or the selection of causes but also when deciding between alternative causal structures. In three experiments, we requested subjects to choose which of two observable variables was the cause and which the effect. We found strong evidence that learners have interindividually variable but intraindividually stable priors about causal parameters that express a preference for causal determinism (sufficiency or necessity; Experiment 1). These priors predict which structure subjects preferentially select. The priors can be manipulated experimentally (Experiment 2) and appear to be domain-general (Experiment 3). Heuristic strategies of structure induction are suggested that can be viewed as simplified implementations of the priors.

12.
Causal learning enables humans and other animals not only to predict important events or outcomes, but also to control their occurrence in the service of needs and desires. Computational theories assume that causal judgments are based on an estimate of the contingency between a causal cue and an outcome. However, human causal learning exhibits many of the characteristics of the associative learning processes thought to underlie animal conditioning. One problem for associative theory arises from the finding that judgments of the causal power of a cue can be revalued retrospectively after learning episodes when that cue is not present. However, if retrieved representations of cues can support learning, retrospective revaluation is anticipated by modified versions of standard associative theories.

13.
Causal power estimation under exemplar-based learning conditions
With causal exemplars presented one at a time, this study examined the characteristics of causal power estimation for single cause-effect relationships, and tested the associative account, the probabilistic contrast model, the weighted ΔP model, the power PC theory, and the pCI rule. Sixty-five undergraduate participants estimated the capacity of different chemical drugs to produce genetic mutations in animals. The results showed that: (1) power estimates for generative causes conformed to the weighted ΔP model; (2) power estimates for preventive causes mostly conformed to the power PC theory; (3) causal power estimates are complex and varied, and are difficult to describe and summarize with a single unified model.
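Two of the models named in this abstract have simple closed forms over a 2×2 contingency table. A small sketch (illustrative trial counts; `delta_p` and `causal_power` are hypothetical helper names) computing the probabilistic contrast ΔP and the power PC estimate (Cheng, 1997):

```python
# Causal strength estimates from a 2x2 contingency table of trials:
#   a = cause present, effect present    b = cause present, effect absent
#   c = cause absent,  effect present    d = cause absent,  effect absent
def delta_p(a, b, c, d):
    """Probabilistic contrast: P(e | cause) - P(e | no cause)."""
    return a / (a + b) - c / (c + d)

def causal_power(a, b, c, d):
    """Power PC estimate: generative power for dP >= 0,
    preventive power for dP < 0."""
    p_base = c / (c + d)                 # P(e | cause absent)
    dp = delta_p(a, b, c, d)
    if dp >= 0:
        return dp / (1.0 - p_base)       # generative power
    return -dp / p_base                  # preventive power

# Example: effect on 16/20 trials with the cause, 8/20 without it.
print(delta_p(16, 4, 8, 12))       # ≈ 0.8 - 0.4 = 0.4
print(causal_power(16, 4, 8, 12))  # ≈ 0.4 / (1 - 0.4) ≈ 0.667
```

The difference matters when the base rate is high: ΔP shrinks as P(e | no cause) rises, while generative power rescales ΔP by the headroom left for the cause to act.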

14.
Determining the knowledge that guides human judgments is fundamental to understanding how people reason, make decisions, and form predictions. We use an experimental procedure called 'iterated learning,' in which the responses that people give on one trial are used to generate the data they see on the next, to pinpoint the knowledge that informs people's predictions about everyday events (e.g., predicting the total box office gross of a movie from its current take). In particular, we use this method to discriminate between two models of human judgments: a simple Bayesian model (Griffiths & Tenenbaum, 2006) and a recently proposed alternative model that assumes people store only a few instances of each type of event in memory (Min K; Mozer, Pashler, & Homaei, 2008). Although testing these models using standard experimental procedures is difficult due to differences in the number of free parameters and the need to make assumptions about the knowledge of individual learners, we show that the two models make very different predictions about the outcome of iterated learning. The results of an experiment using this methodology provide a rich picture of how much people know about the distributions of everyday quantities, and they are inconsistent with the predictions of the Min K model. The results suggest that accurate predictions about everyday events reflect relatively sophisticated knowledge on the part of individuals.
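The Bayesian model referenced here (Griffiths & Tenenbaum, 2006) predicts a total from a current value via the posterior median. A sketch under an assumed power-law prior, one of the prior forms considered in that literature (the discretization, cutoff, and function name are illustrative choices, not the authors' implementation):

```python
# Posterior-median prediction of a total T from a current value t, in
# the style of the Bayesian model of Griffiths & Tenenbaum (2006).
# Assumed prior over totals: power law p(T) ∝ T^(-gamma); likelihood
# of observing the current value t given total T: 1/T, for t <= T.
def predict_total(t, gamma=2.0, t_max=100000):
    totals = range(t, t_max + 1)
    post = [T ** (-gamma - 1.0) for T in totals]   # prior x likelihood
    z = sum(post)                                  # normalizing constant
    cum = 0.0
    for T, p in zip(totals, post):
        cum += p / z
        if cum >= 0.5:
            return T                               # posterior median

# With gamma = 2 the median prediction is roughly 1.41 * t.
print(predict_total(30))
print(predict_total(60))
```

This multiplicative rule (predict a fixed multiple of the observed value) is exactly the kind of regularity iterated learning can expose, because each generation's predictions feed the next generation's data.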

15.
16.
Temporal predictability refers to the regularity or consistency of the time interval separating events. When encountering repeated instances of causes and effects, we also experience multiple cause-effect temporal intervals. Where this interval is constant it becomes possible to predict when the effect will follow from the cause. In contrast, interval variability entails unpredictability. Three experiments investigated the extent to which temporal predictability contributes to the inductive processes of human causal learning. The authors demonstrated that (a) causal relations with fixed temporal intervals are consistently judged as stronger than those with variable temporal intervals, (b) that causal judgments decline as a function of temporal uncertainty, and (c) that this effect remains undiminished with increased learning time. The results therefore clearly indicate that temporal predictability facilitates causal discovery. The authors considered the implications of their findings for various theoretical perspectives, including associative learning theory, the attribution shift hypothesis, and causal structure models.

17.
Recent studies have shown that people have the capacity to derive interventional predictions for previously unseen actions from observational knowledge, a finding that challenges associative theories of causal learning and reasoning (e.g., Meder, Hagmayer, & Waldmann, 2008). Although some researchers have claimed that such inferences are based mainly on qualitative reasoning about the structure of a causal system (e.g., Sloman, 2005), we propose that people use both the causal structure and its parameters for their inferences. We here employ an observational trial-by-trial learning paradigm to test this prediction. In Experiment 1, the causal strength of the links within a given causal model was varied, whereas in Experiment 2, base rate information was manipulated while keeping the structure of the model constant. The results show that learners’ causal judgments were strongly affected by the observed learning data despite being presented with identical hypotheses about causal structure. The findings show furthermore that participants correctly distinguished between observations and hypothetical interventions. However, they did not adequately differentiate between hypothetical and counterfactual interventions.

18.
In the real world, causal variables do not come pre-identified or occur in isolation, but instead are embedded within a continuous temporal stream of events. A challenge faced by both human learners and machine learning algorithms is identifying subsequences that correspond to the appropriate variables for causal inference. A specific instance of this problem is action segmentation: dividing a sequence of observed behavior into meaningful actions, and determining which of those actions lead to effects in the world. Here we present a Bayesian analysis of how statistical and causal cues to segmentation should optimally be combined, as well as four experiments investigating human action segmentation and causal inference. We find that both people and our model are sensitive to statistical regularities and causal structure in continuous action, and are able to combine these sources of information in order to correctly infer both causal relationships and segmentation boundaries.

19.
Lee MD, Vanpaemel W. Cognitive Science, 2008, 32(8): 1403-1424
This article demonstrates the potential of using hierarchical Bayesian methods to relate models and data in the cognitive sciences. This is done using a worked example that considers an existing model of category representation, the Varying Abstraction Model (VAM), which attempts to infer the representations people use from their behavior in category learning tasks. The VAM allows for a wide variety of category representations to be inferred, but this article shows how a hierarchical Bayesian analysis can provide a unifying explanation of the representational possibilities using 2 parameters. One parameter controls the emphasis on abstraction in category representations, and the other controls the emphasis on similarity. Using 30 previously published data sets, this work shows how inferences about these parameters, and about the category representations they generate, can be used to evaluate data in terms of the ongoing exemplar versus prototype and similarity versus rules debates in the literature. Using this concrete example, this article emphasizes the advantages of hierarchical Bayesian models in converting model selection problems to parameter estimation problems, and providing one way of specifying theoretically based priors for competing models.

20.
This article considers Bayesian model averaging as a means of addressing uncertainty in the selection of variables in the propensity score equation. We investigate an approximate Bayesian model averaging approach based on the model-averaged propensity score estimates produced by the R package BMA but that ignores uncertainty in the propensity score. We also provide a fully Bayesian model averaging approach via Markov chain Monte Carlo sampling (MCMC) to account for uncertainty in both parameters and models. A detailed study of our approach examines the differences in the causal estimate when incorporating noninformative versus informative priors in the model averaging stage. We examine these approaches under common methods of propensity score implementation. In addition, we evaluate the impact of changing the size of Occam’s window used to narrow down the range of possible models. We also assess the predictive performance of both Bayesian model averaging propensity score approaches and compare it with the case without Bayesian model averaging. Overall, results show that both Bayesian model averaging propensity score approaches recover the treatment effect estimates well and generally provide larger uncertainty estimates, as expected. Both Bayesian model averaging approaches offer slightly better prediction of the propensity score compared with the Bayesian approach with a single propensity score equation. Covariate balance checks for the case study show that both Bayesian model averaging approaches offer good balance. The fully Bayesian model averaging approach also provides posterior probability intervals of the balance indices.
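The approximate (BIC-weighted) flavor of Bayesian model averaging discussed in this abstract can be sketched in a toy propensity-score setting. Everything below is invented for illustration and is not the article's method or the BMA package's code: two candidate covariate sets define two logistic propensity models, each model's posterior weight is approximated as exp(−ΔBIC/2) and renormalized, and the per-model inverse-probability-weighted (IPW) effect estimates are averaged under those weights.

```python
import math
import random

# Toy sketch of BIC-approximated Bayesian model averaging over candidate
# propensity-score equations. All data are simulated; x1 confounds both
# treatment assignment and the outcome, x2 is irrelevant.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

rng = random.Random(7)
n = 400
x1 = [rng.gauss(0.0, 1.0) for _ in range(n)]      # true confounder
x2 = [rng.gauss(0.0, 1.0) for _ in range(n)]      # irrelevant covariate
treat = [1 if rng.random() < sigmoid(v) else 0 for v in x1]
y = [2.0 * t + v + rng.gauss(0.0, 1.0)            # simulated effect = 2.0
     for t, v in zip(treat, x1)]

def fit_logistic(X, t, steps=1500, lr=0.1):
    """Plain gradient-ascent logistic regression; returns (coefs, log-lik)."""
    beta = [0.0] * len(X[0])
    for _ in range(steps):
        grad = [0.0] * len(beta)
        for xi, ti in zip(X, t):
            err = ti - sigmoid(sum(b * v for b, v in zip(beta, xi)))
            for j, v in enumerate(xi):
                grad[j] += err * v
        beta = [b + lr * g / len(X) for b, g in zip(beta, grad)]
    ll = 0.0
    for xi, ti in zip(X, t):
        p = sigmoid(sum(b * v for b, v in zip(beta, xi)))
        ll += math.log(p) if ti else math.log(1.0 - p)
    return beta, ll

def ipw_ate(ps):
    """IPW difference between weighted treated and control outcome means."""
    t_num = sum(yi * ti / p for yi, ti, p in zip(y, treat, ps))
    t_den = sum(ti / p for ti, p in zip(treat, ps))
    c_num = sum(yi * (1 - ti) / (1 - p) for yi, ti, p in zip(y, treat, ps))
    c_den = sum((1 - ti) / (1 - p) for ti, p in zip(treat, ps))
    return t_num / t_den - c_num / c_den

models = {"x1": [[1.0, a] for a in x1],            # intercept + x1
          "x1+x2": [[1.0, a, b] for a, b in zip(x1, x2)]}
bics, estimates = {}, {}
for name, X in models.items():
    beta, ll = fit_logistic(X, treat)
    bics[name] = -2.0 * ll + len(beta) * math.log(n)
    ps = [sigmoid(sum(b * v for b, v in zip(beta, xi))) for xi in X]
    estimates[name] = ipw_ate(ps)

best = min(bics.values())                          # subtract min BIC to
weights = {m: math.exp(-(b - best) / 2.0)          # avoid underflow
           for m, b in bics.items()}
z = sum(weights.values())
bma_ate = sum(weights[m] / z * estimates[m] for m in weights)
print(bma_ate)   # should land near the simulated effect of 2.0
```

The fully Bayesian version in the article replaces the BIC weights with MCMC over models and parameters jointly; the averaging step at the end is conceptually the same.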


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.). 京ICP备09084417号