Similar Articles
20 similar articles found
1.
We present a framework for the rational analysis of elemental causal induction (learning about the existence of a relationship between a single cause and effect) based upon causal graphical models. This framework makes precise the distinction between causal structure and causal strength: the difference between asking whether a causal relationship exists and asking how strong that causal relationship might be. We show that two leading rational models of elemental causal induction, ΔP and causal power, both estimate causal strength, and we introduce a new rational model, causal support, that assesses causal structure. Causal support predicts several key phenomena of causal induction that cannot be accounted for by other rational models, which we explore through a series of experiments. These phenomena include the complex interaction between ΔP and the base-rate probability of the effect in the absence of the cause, sample size effects, inferences from incomplete contingency tables, and causal learning from rates. Causal support also provides a better account of a number of existing datasets than either ΔP or causal power.
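The two strength estimates named in this abstract can be computed directly from a standard 2×2 contingency table. The sketch below is a minimal illustration, not the authors' implementation: function names are ours, and causal support proper is a Bayes factor that integrates over strength parameters, which is omitted here.

```python
def delta_p(n_e_c, n_noe_c, n_e_noc, n_noe_noc):
    """DeltaP = P(e|c) - P(e|~c), estimated from cell counts:
    effect present/absent given cause present/absent."""
    p_e_c = n_e_c / (n_e_c + n_noe_c)
    p_e_noc = n_e_noc / (n_e_noc + n_noe_noc)
    return p_e_c - p_e_noc

def causal_power(n_e_c, n_noe_c, n_e_noc, n_noe_noc):
    """Cheng's causal power for a generative cause:
    DeltaP / (1 - P(e|~c))."""
    p_e_noc = n_e_noc / (n_e_noc + n_noe_noc)
    return delta_p(n_e_c, n_noe_c, n_e_noc, n_noe_noc) / (1.0 - p_e_noc)
```

For example, with 8 of 10 effects when the cause is present and 2 of 10 when absent, ΔP = 0.6 while causal power = 0.75; causal support would instead ask how strongly these counts favor a graph containing the cause-effect link over one without it.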

2.
People learn quickly when reasoning about causal relationships, making inferences from limited data and avoiding spurious inferences. Efficient learning depends on abstract knowledge, which is often domain or context specific, and much of it must be learned. While such knowledge effects are well documented, little is known about exactly how we acquire knowledge that constrains learning. This work focuses on knowledge of the functional form of causal relationships; there are many kinds of relationships that can apply between causes and their effects, and knowledge of the form such a relationship takes is important for quickly identifying the real causes of an observed effect. We developed a hierarchical Bayesian model of the acquisition of knowledge of the functional form of causal relationships and tested it in five experimental studies, considering disjunctive and conjunctive relationships, failure rates, and cross-domain effects. The Bayesian model accurately predicted human judgments and outperformed several alternative models.

3.
When two possible causes of an outcome are under consideration, contingency information concerns each possible combination of presence and absence of the two causes with occurrences and nonoccurrences of the outcome. White (2008) proposed that causal judgements could be predicted by a weighted averaging model integrating these kinds of contingency information. The weights in the model are derived from the hypothesis that causal judgements seek to meet two main aims: accounting for occurrences of the outcome and estimating the strengths of the causes. Here it is shown that the model can explain many but not all relevant published findings. The remainder can be explained by reasoning about interactions between the two causes, by scenario-specific effects, and by variations in cell weight depending on quantity of available information. An experiment is reported that supports this argument. The review and experimental results support the case for a cognitive model of causal judgement in which different kinds of contingency information are utilised to satisfy particular aims of the judgement process.
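A generic weighted-averaging judgment over the four contingency cells (a: cause and effect, b: cause without effect, c: effect without cause, d: neither) can be sketched as below. This follows the usual sign convention that a and d support, and b and c oppose, a causal relation; the weights here are placeholders, not White's fitted weights.

```python
def weighted_average_judgment(cells, weights):
    """Weighted average of signed contingency-cell evidence.
    cells: proportions for keys 'a', 'b', 'c', 'd' (see lead-in).
    weights: nonnegative weight per cell (illustrative values only)."""
    signs = {'a': +1, 'b': -1, 'c': -1, 'd': +1}
    total_weight = sum(weights.values())
    return sum(signs[k] * weights[k] * cells[k] for k in cells) / total_weight
```

With equal weights and cell proportions (0.4, 0.1, 0.1, 0.4), the judgment comes out positive, as the confirming cells dominate; varying the weights by cell is exactly the degree of freedom the model uses to fit different datasets.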

4.
In diagnostic causal reasoning, the goal is to infer the probability of causes from one or multiple observed effects. Typically, studies investigating such tasks provide subjects with precise quantitative information regarding the strength of the relations between causes and effects or sample data from which the relevant quantities can be learned. By contrast, we sought to examine people’s inferences when causal information is communicated through qualitative, rather vague verbal expressions (e.g., “X occasionally causes A”). We conducted three experiments using a sequential diagnostic inference task, where multiple pieces of evidence were obtained one after the other. Quantitative predictions of different probabilistic models were derived using the numerical equivalents of the verbal terms, taken from an unrelated study with different subjects. We present a novel Bayesian model that allows for incorporating the temporal weighting of information in sequential diagnostic reasoning, which can be used to model both primacy and recency effects. On the basis of 19,848 judgments from 292 subjects, we found a remarkably close correspondence between the diagnostic inferences made by subjects who received only verbal information and those of a matched control group to whom information was presented numerically. Whether information was conveyed through verbal terms or numerical estimates, diagnostic judgments closely resembled the posterior probabilities entailed by the causes’ prior probabilities and the effects’ likelihoods. We observed interindividual differences regarding the temporal weighting of evidence in sequential diagnostic reasoning. Our work provides pathways for investigating judgment and decision making with verbal information within a computational modeling framework.
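The normative benchmark in such a task is sequential Bayesian updating: after each piece of evidence, the posterior over the cause hypothesis is recomputed from the prior and the evidence likelihood. The sketch below handles a single binary hypothesis with made-up numbers; it is not the authors' model, and their temporal-weighting extension (e.g., exponentiating likelihoods by a recency weight) is omitted.

```python
def sequential_posterior(prior, lik_h, lik_alt, evidence):
    """Update P(hypothesis) after each observed effect via Bayes' rule.
    prior: initial P(hypothesis).
    lik_h[e]:   P(effect e | hypothesis true).
    lik_alt[e]: P(effect e | hypothesis false).
    Returns the posterior trajectory, one value per piece of evidence."""
    p = prior
    trajectory = []
    for e in evidence:
        numerator = lik_h[e] * p
        p = numerator / (numerator + lik_alt[e] * (1 - p))
        trajectory.append(p)
    return trajectory
```

Each posterior becomes the prior for the next update, so the order of evidence does not matter for the normative model; primacy and recency effects arise only once the likelihoods are temporally weighted.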

5.
A theory of categorization is presented in which knowledge of causal relationships between category features is represented in terms of asymmetric and probabilistic causal mechanisms. According to causal-model theory, objects are classified as category members to the extent they are likely to have been generated or produced by those mechanisms. The empirical results confirmed that participants rated exemplars good category members to the extent their features manifested the expectations that causal knowledge induces, such as correlations between feature pairs that are directly connected by causal relationships. These expectations also included sensitivity to higher-order feature interactions that emerge from the asymmetries inherent in causal relationships. Quantitative fits of causal-model theory were superior to those obtained with extensions to traditional similarity-based models that represent causal knowledge either as higher-order relational features or “prior exemplars” stored in memory.

6.
Information about the structure of a causal system can come in the form of observational data—random samples of the system's autonomous behavior—or interventional data—samples conditioned on the particular values of one or more variables that have been experimentally manipulated. Here we study people's ability to infer causal structure from both observation and intervention, and to choose informative interventions on the basis of observational data. In three causal inference tasks, participants were to some degree capable of distinguishing between competing causal hypotheses on the basis of purely observational data. Performance improved substantially when participants were allowed to observe the effects of interventions that they performed on the systems. We develop computational models of how people infer causal structure from data and how they plan intervention experiments, based on the representational framework of causal graphical models and the inferential principles of optimal Bayesian decision-making and maximizing expected information gain. These analyses suggest that people can make rational causal inferences, subject to psychologically reasonable representational assumptions and computationally reasonable processing constraints.
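The "maximizing expected information gain" criterion mentioned here scores a candidate intervention by how much, on average, its outcome would reduce uncertainty over the causal hypotheses. A minimal sketch, with a hypothetical two-hypothesis example rather than the authors' tasks:

```python
import math

def entropy(probs):
    """Shannon entropy in bits, skipping zero-probability entries."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def expected_information_gain(prior, outcome_likelihoods):
    """prior: list, P(hypothesis h) for each h.
    outcome_likelihoods[h][outcome]: P(outcome of intervention | h).
    Returns prior entropy minus expected posterior entropy."""
    hs = range(len(prior))
    outcomes = {o for lk in outcome_likelihoods for o in lk}
    eig = entropy(prior)
    for o in outcomes:
        p_o = sum(prior[h] * outcome_likelihoods[h][o] for h in hs)
        if p_o == 0:
            continue
        posterior = [prior[h] * outcome_likelihoods[h][o] / p_o for h in hs]
        eig -= p_o * entropy(posterior)
    return eig
```

An intervention whose outcome perfectly discriminates two equiprobable hypotheses yields exactly 1 bit of expected gain; an intervention whose outcome distribution is the same under every hypothesis yields 0 and is never worth choosing.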

7.
Ali, N., Chater, N., & Oaksford, M. (2011). Cognition, 119(3), 403-418.
In this paper, two experiments are reported investigating the nature of the cognitive representations underlying causal conditional reasoning performance. The predictions of causal and logical interpretations of the conditional diverge sharply when inferences involving pairs of conditionals—such as if P1 then Q and if P2 then Q—are considered. From a causal perspective, the causal direction of these conditionals is critical: are the Pi causes of Q, or symptoms caused by Q? The rich variety of inference patterns can naturally be modelled by Bayesian networks. A pair of causal conditionals where Q is an effect corresponds to a “collider” structure where the two causes (Pi) converge on a common effect. In contrast, a pair of causal conditionals where Q is a cause corresponds to a network where two effects (Pi) diverge from a common cause. Very different predictions are made by fully explicit or initial mental models interpretations. These predictions were tested in two experiments, each of which yielded data most consistent with causal model theory, rather than with mental models.

8.
In the real world, causal variables do not come pre-identified or occur in isolation, but instead are embedded within a continuous temporal stream of events. A challenge faced by both human learners and machine learning algorithms is identifying subsequences that correspond to the appropriate variables for causal inference. A specific instance of this problem is action segmentation: dividing a sequence of observed behavior into meaningful actions, and determining which of those actions lead to effects in the world. Here we present a Bayesian analysis of how statistical and causal cues to segmentation should optimally be combined, as well as four experiments investigating human action segmentation and causal inference. We find that both people and our model are sensitive to statistical regularities and causal structure in continuous action, and are able to combine these sources of information in order to correctly infer both causal relationships and segmentation boundaries.

9.
In two experiments, we studied the strategies that people use to discover causal relationships. According to inferential approaches to causal discovery, if people attempt to discover the power of a cause, then they should naturally select the most informative and unambiguous context. For generative causes this would be a context with a low base rate of effects generated by other causes, and for preventive causes a context with a high base rate. In the following experiments, we used probabilistic and/or deterministic target causes and contexts. In each experiment, participants observed several contexts in which the effect occurred with different probabilities. After this training, the participants were presented with different target causes whose causal status was unknown. In order to discover the influence of each cause, participants were allowed, on each trial, to choose the context in which the cause would be tested. As expected by inferential theories, the participants preferred to test generative causes in low base rate contexts and preventive causes in high base rate contexts. The participants, however, persisted in choosing the less informative contexts on a substantial minority of trials long after they had discovered the power of the cause. We discuss the matching law from operant conditioning as an alternative explanation of the findings.

10.
In two experiments we tested the prediction derived from Tversky and Kahneman's (1983) work on the causal conjunction fallacy that the strength of the causal connection between constituent events directly affects the magnitude of the causal conjunction fallacy. We also explored whether any effects of perceived causal strength were due to graded output from heuristic Type 1 reasoning processes or the result of analytic Type 2 reasoning processes. As predicted, Experiment 1 demonstrated that fallacy rates were higher for strongly than for weakly related conjunctions. Weakly related conjunctions in turn attracted higher rates of fallacious responding than did unrelated conjunctions. Experiment 2 showed that a concurrent memory load increased rates of fallacious responding for strongly related but not for weakly related conjunctions. We interpret these results as showing that manipulations of the strength of the perceived causal relationship between the conjuncts result in graded output from heuristic reasoning processes and that additional mental resources are required to suppress strong heuristic output.

11.
Hayes, B. K., & Rehder, B. (2012). Cognitive Science, 36(6), 1102-1128.
Two experiments examined the impact of causal relations between features on categorization in 5- to 6-year-old children and adults. Participants learned artificial categories containing instances with causally related features and noncausal features. They then selected the most likely category member from a series of novel test pairs. Classification patterns and logistic regression were used to diagnose the presence of independent effects of causal coherence, causal status, and relational centrality. Adult classification was driven primarily by coherence when causal links were deterministic (Experiment 1) but showed additional influences of causal status when links were probabilistic (Experiment 2). Children's classification was based primarily on causal coherence in both cases. There was no effect of relational centrality in either age group. These results suggest that the generative model (Rehder, 2003a) provides a good account of causal categorization in children as well as adults.

12.
Two experiments investigated 3–4-year-olds’ ability to infer the causal mechanisms for a pair of lights. In both experiments the exterior of the two lights appeared identical. In Experiment 1, one light displayed a stable activation pattern of a single color while the other light displayed a variable pattern of activation by cycling through a series of different colors (i.e., a more varied effect). Children were asked to judge which light had a more complex internal structure. Four-year-olds were more likely to match the light with the more variable effect with a more complex internal mechanism and the light with the more stable effect with a less complex mechanism. Three-year-olds’ responses were at chance. Experiment 2 replicated this finding when the activation patterns of the two lights were described verbally but never demonstrated. Taken together, these results suggest that 4-year-olds appreciate that the variability of an object’s causal efficacy is related to the complexity of its internal mechanistic structure.

13.
Causal queries about singular cases, which inquire whether specific events were causally connected, are prevalent in daily life and important in professional disciplines such as the law, medicine, or engineering. Because causal links cannot be directly observed, singular causation judgments require an assessment of whether a co-occurrence of two events c and e was causal or simply coincidental. How can this decision be made? Building on previous work by Cheng and Novick (2005) and Stephan and Waldmann (2018), we propose a computational model that combines information about the causal strengths of the potential causes with information about their temporal relations to derive answers to singular causation queries. The relative causal strengths of the potential cause factors are relevant because weak causes are more likely to fail to generate effects than strong causes. But even a strong cause factor does not necessarily need to be causal in a singular case because it could have been preempted by an alternative cause. We here show how information about causal strength and about two different temporal parameters, the potential causes' onset times and their causal latencies, can be formalized and integrated into a computational account of singular causation. Four experiments are presented in which we tested the validity of the model. The results showed that people integrate the different types of information as predicted by the new model.
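The strength-only core of this kind of account, in the style of Cheng and Novick (2005), can be written down compactly: under a noisy-OR assumption with independent causes, the probability that the observed effect was actually produced by c (rather than by the alternative cause) falls out of the two causal strengths. The sketch below deliberately omits the temporal parameters (onset times, causal latencies) that the full model adds.

```python
def p_singular_cause(w_c, w_a):
    """Probability that an observed co-occurrence of cause c and effect e
    was causal, given that both c and an alternative cause a were present.
    Assumes independent noisy-OR generation:
        P(e | c, a) = w_c + w_a - w_c * w_a.
    Temporal terms of the full model are omitted (strength-only sketch)."""
    p_e = w_c + w_a - w_c * w_a
    return w_c / p_e
```

Note the preemption intuition in miniature: even with a strong target cause (w_c = 0.8), a moderately strong alternative (w_a = 0.5) leaves a nontrivial chance that c was not the singular cause on this occasion.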

14.
The current research investigated how lay representations of the causes of an environmental problem may underlie individuals' reasoning about the issue. Naïve participants completed an experiment that involved two main tasks. The causal diagram task required participants to depict the causal relations between a set of factors related to overfishing and to estimate the strength of these relations. The counterfactual task required participants to judge the effect of counterfactual suppositions based on the diagrammed factors. We explored two major questions: (1) what is the relation between individual causal models and counterfactual judgments? Consistent with previous findings (e.g., Green et al., 1998, Br. J. Soc. Psychology, 37, 415), these judgments were best explained by a combination of the strength of both direct and indirect causal paths. (2) To what extent do people use two-way causal thinking when reasoning about an environmental problem? In contrast to previous research (e.g., White, 2008, Appl. Cogn. Psychology, 22, 559), analyses based on individual causal networks revealed the presence of numerous feedback loops. The studies support the value of analysing individual causal models in contrast to consensual representations. Theoretical and practical implications are discussed in relation to causal reasoning as well as environmental psychology.
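A common way to score "direct and indirect causal paths" in a diagrammed network is to multiply link strengths along each path and then combine the paths. The sketch below uses simple summation as the combination rule purely for illustration; the abstract's analyses regress judgments on both kinds of path rather than committing to this rule, and the factor names are made up.

```python
def path_strength(strengths, path):
    """Multiply link strengths along one causal path.
    strengths: dict mapping (source, target) links to strength in [0, 1].
    path: ordered list of links, e.g. [('A', 'B'), ('B', 'C')]."""
    product = 1.0
    for link in path:
        product *= strengths[link]
    return product

def total_influence(strengths, paths):
    """Combine direct and indirect paths from one factor to another.
    Summation is one simple, illustrative combination rule."""
    return sum(path_strength(strengths, p) for p in paths)
```

For a direct link A→C of strength 0.3 plus an indirect route A→B→C of strengths 0.5 and 0.4, this yields 0.3 + 0.2 = 0.5; feedback loops, as found in the individual networks here, would require cutting cycles before enumerating paths.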

15.
Determining the knowledge that guides human judgments is fundamental to understanding how people reason, make decisions, and form predictions. We use an experimental procedure called 'iterated learning,' in which the responses that people give on one trial are used to generate the data they see on the next, to pinpoint the knowledge that informs people's predictions about everyday events (e.g., predicting the total box office gross of a movie from its current take). In particular, we use this method to discriminate between two models of human judgments: a simple Bayesian model (Griffiths & Tenenbaum, 2006) and a recently proposed alternative model that assumes people store only a few instances of each type of event in memory, the Min K model (Mozer, Pashler, & Homaei, 2008). Although testing these models using standard experimental procedures is difficult due to differences in the number of free parameters and the need to make assumptions about the knowledge of individual learners, we show that the two models make very different predictions about the outcome of iterated learning. The results of an experiment using this methodology provide a rich picture of how much people know about the distributions of everyday quantities, and they are inconsistent with the predictions of the Min K model. The results suggest that accurate predictions about everyday events reflect relatively sophisticated knowledge on the part of individuals.

16.
Rips, L. J. (2010). Cognitive Science, 34(2), 175-221.
Bayes nets are formal representations of causal systems that many psychologists have claimed as plausible mental representations. One purported advantage of Bayes nets is that they may provide a theory of counterfactual conditionals, such as If Calvin had been at the party, Miriam would have left early. This article compares two proposed Bayes net theories as models of people's understanding of counterfactuals. Experiments 1-3 show that neither theory makes correct predictions about backtracking counterfactuals (in which the event of the if-clause occurs after the event of the then-clause), and Experiment 4 shows the same is true of forward counterfactuals. An amended version of one of the approaches, however, can provide a more accurate account of these data.

17.
Young children spend a large portion of their time pretending about non-real situations. Why? We answer this question by using the framework of Bayesian causal models to argue that pretending and counterfactual reasoning engage the same component cognitive abilities: disengaging with current reality, making inferences about an alternative representation of reality, and keeping this representation separate from reality. In turn, according to causal models accounts, counterfactual reasoning is a crucial tool that children need to plan for the future and learn about the world. Both planning with causal models and learning about them require the ability to create false premises and generate conclusions from these premises. We argue that pretending allows children to practice these important cognitive skills. We also consider the prevalence of unrealistic scenarios in children's play and explain how they can be useful in learning, despite appearances to the contrary.

18.
19.
Languages are transmitted from person to person and generation to generation via a process of iterated learning: people learn a language from other people who once learned that language themselves. We analyze the consequences of iterated learning for learning algorithms based on the principles of Bayesian inference, assuming that learners compute a posterior distribution over languages by combining a prior (representing their inductive biases) with the evidence provided by linguistic data. We show that when learners sample languages from this posterior distribution, iterated learning converges to a distribution over languages that is determined entirely by the prior. Under these conditions, iterated learning is a form of Gibbs sampling, a widely-used Markov chain Monte Carlo algorithm. The consequences of iterated learning are more complicated when learners choose the language with maximum posterior probability, being affected by both the prior of the learners and the amount of information transmitted between generations. We show that in this case, iterated learning corresponds to another statistical inference algorithm, a variant of the expectation-maximization (EM) algorithm. These results clarify the role of iterated learning in explanations of linguistic universals and provide a formal connection between constraints on language acquisition and the languages that come to be spoken, suggesting that information transmitted via iterated learning will ultimately come to mirror the minds of the learners.
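The convergence-to-the-prior result for sampling learners can be demonstrated with a toy chain over two "languages." Each generation observes one datum produced by the previous learner, samples a language from the posterior, and produces data for the next learner; this alternation is exactly Gibbs sampling on the joint distribution over (language, datum), so the long-run frequency of each language matches the prior. All numbers below are made up for the demonstration.

```python
import random

def iterated_learning(prior, p_data_given_lang, n_generations, seed=0):
    """Simulate a chain of Bayesian sampler-learners over languages 0 and 1.
    prior: [P(lang 0), P(lang 1)] -- each learner's inductive bias.
    p_data_given_lang: [P(datum=1 | lang 0), P(datum=1 | lang 1)].
    Returns the visit frequency of each language over the chain."""
    rng = random.Random(seed)
    lang = 0
    visits = [0, 0]
    for _ in range(n_generations):
        # Current learner produces one datum from its language.
        datum = 1 if rng.random() < p_data_given_lang[lang] else 0
        # Next learner samples a language from the posterior given the datum.
        def likelihood(l):
            p = p_data_given_lang[l]
            return p if datum == 1 else 1 - p
        post0 = prior[0] * likelihood(0)
        post1 = prior[1] * likelihood(1)
        lang = 0 if rng.random() < post0 / (post0 + post1) else 1
        visits[lang] += 1
    return [v / n_generations for v in visits]
```

Even though the data channel strongly favors detecting the true language (0.9 vs. 0.1 here), running the chain long enough drives the language frequencies toward the prior, not toward the starting language, which is the paper's central result for sampler-learners.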

20.
This research examines the relationship between the concept of CAUSE as it is characterized in psychological models of causation and the meaning of causal verbs, such as the verb cause itself. According to focal set models of causation, the concept of CAUSE should be more similar to the concepts of ENABLE and PREVENT than ENABLE and PREVENT are to each other. According to a model based on the theory of force dynamics, the force dynamic model, the concepts of CAUSE, ENABLE, and PREVENT should be roughly equally similar to one another. The relationship between these predictions and the meaning of causal verbs was examined by having participants sort causal verbs and rate them with respect to the dimensions specified by the two models. The results from five experiments indicated that the force dynamic model provides a better account of the meaning of causal verbs than do focal set models of causation. Implications for causal inference and induction are discussed.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号