Similar Literature
20 similar records were found.
1.
Making judgments by relying on beliefs about the causal relationships between events is a fundamental capacity of everyday cognition. In the last decade, Causal Bayesian Networks have been proposed as a framework for modeling causal reasoning. Two experiments were conducted to provide comprehensive data sets with which to evaluate a variety of different types of judgments in comparison to the standard Bayesian network calculations. Participants were introduced to a fictional system of three events and observed a set of learning trials that instantiated the multivariate distribution relating the three variables. We tested inferences on chains X1 → Y → X2, common cause structures X1 ← Y → X2, and common effect structures X1 → Y ← X2, on binary and numerical variables, and with high and intermediate causal strengths. We tested transitive inferences, inferences when one variable is irrelevant because it is blocked by an intervening variable (Markov Assumption), inferences from two variables to a middle variable, and inferences about the presence of one cause when the alternative cause was known to have occurred (the normative “explaining away” pattern). Compared to the normative account, in general, when the judgments should change, they change in the normative direction. However, we also discuss a few persistent violations of the standard normative model. In addition, we evaluate the relative success of 12 theoretical explanations for these deviations.
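The normative benchmark referred to here can be made concrete with a small computation. The sketch below, with illustrative probabilities of our own choosing (the base rates, causal strengths, and noisy-OR parameterization are assumptions, not values from the study), computes the standard Bayesian-network posteriors for a common effect structure X1 → Y ← X2 and reproduces the "explaining away" pattern:

```python
from itertools import product

# Illustrative common-effect network X1 -> Y <- X2 with a noisy-OR
# parameterization; base rates and strengths are assumed, not from the study.
p_x1, p_x2 = 0.5, 0.5      # base rates of the two causes
w1, w2 = 0.8, 0.8          # causal strengths (a "high strength" setting)

def p_y_given(x1, x2):
    """Noisy-OR: Y occurs unless every present cause independently fails."""
    return 1 - (1 - w1 * x1) * (1 - w2 * x2)

def joint(x1, y, x2):
    py = p_y_given(x1, x2)
    return ((p_x1 if x1 else 1 - p_x1)
            * (p_x2 if x2 else 1 - p_x2)
            * (py if y else 1 - py))

def cond(query, given):
    """P(query | given) by brute-force enumeration over X1, Y, X2."""
    num = den = 0.0
    for x1, y, x2 in product([0, 1], repeat=3):
        state = {"X1": x1, "Y": y, "X2": x2}
        if all(state[k] == v for k, v in given.items()):
            p = joint(x1, y, x2)
            den += p
            if all(state[k] == v for k, v in query.items()):
                num += p
    return num / den

# Explaining away: learning that the alternative cause X2 occurred
# lowers the probability of X1.
print(cond({"X1": 1}, {"Y": 1}))             # ~0.69
print(cond({"X1": 1}, {"Y": 1, "X2": 1}))    # ~0.55
```

With these numbers, observing the effect raises P(X1) to about 0.69, and then learning that the alternative cause X2 also occurred lowers it to about 0.55, which is the normative explaining-away direction against which judgments are compared.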

2.
In diagnostic causal reasoning, the goal is to infer the probability of causes from one or multiple observed effects. Typically, studies investigating such tasks provide subjects with precise quantitative information regarding the strength of the relations between causes and effects or sample data from which the relevant quantities can be learned. By contrast, we sought to examine people’s inferences when causal information is communicated through qualitative, rather vague verbal expressions (e.g., “X occasionally causes A”). We conducted three experiments using a sequential diagnostic inference task, where multiple pieces of evidence were obtained one after the other. Quantitative predictions of different probabilistic models were derived using the numerical equivalents of the verbal terms, taken from an unrelated study with different subjects. We present a novel Bayesian model that allows for incorporating the temporal weighting of information in sequential diagnostic reasoning, which can be used to model both primacy and recency effects. On the basis of 19,848 judgments from 292 subjects, we found a remarkably close correspondence between the diagnostic inferences made by subjects who received only verbal information and those of a matched control group to whom information was presented numerically. Whether information was conveyed through verbal terms or numerical estimates, diagnostic judgments closely resembled the posterior probabilities entailed by the causes’ prior probabilities and the effects’ likelihoods. We observed interindividual differences regarding the temporal weighting of evidence in sequential diagnostic reasoning. Our work provides pathways for investigating judgment and decision making with verbal information within a computational modeling framework.
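As a rough illustration of this kind of model, the sketch below (not the authors' model; the priors, likelihoods, and the particular weighting scheme are assumptions made here for illustration) applies sequential Bayesian updating in which each piece of evidence contributes a log-likelihood scaled by a temporal weight, so a single parameter can mimic primacy or recency:

```python
import math

# Sequential diagnostic inference over two candidate causes C1 and C2.
# Each observed effect contributes its log-likelihood scaled by gamma**t:
# gamma = 1 is standard Bayesian updating, gamma > 1 up-weights later
# evidence (recency), gamma < 1 down-weights it (primacy).
# All numbers are illustrative assumptions, not values from the study.
priors = {"C1": 0.5, "C2": 0.5}
likelihoods = {                      # P(effect | cause)
    "E1": {"C1": 0.80, "C2": 0.30},  # e.g., "C1 frequently causes E1"
    "E2": {"C1": 0.20, "C2": 0.60},  # e.g., "C2 occasionally causes E2"
}

def posterior(observed_effects, gamma=1.0):
    """Return P(cause | evidence) with temporally weighted log-likelihoods."""
    scores = {}
    for cause, prior in priors.items():
        logp = math.log(prior)
        for t, effect in enumerate(observed_effects):
            logp += (gamma ** t) * math.log(likelihoods[effect][cause])
        scores[cause] = math.exp(logp)
    total = sum(scores.values())
    return {cause: score / total for cause, score in scores.items()}

print(posterior(["E1", "E2"], gamma=1.0))  # ordinary Bayesian posterior
print(posterior(["E1", "E2"], gamma=1.5))  # recency-weighted judgment
```

With gamma = 1 this reduces to the posterior entailed by the priors and likelihoods; values above or below 1 tilt the judgment toward later or earlier evidence, respectively.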

3.
Different intuitive theories constrain and guide inferences in different contexts. Formalizing simple intuitive theories as probabilistic processes operating over structured representations, we present a new computational model of category-based induction about causally transmitted properties. A first experiment demonstrates undergraduates’ context-sensitive use of taxonomic and food web knowledge to guide reasoning about causal transmission and shows good qualitative agreement between model predictions and human inferences. A second experiment demonstrates strong quantitative and qualitative fits to inferences about a more complex artificial food web. A third experiment investigates human reasoning about complex novel food webs where species have known taxonomic relations. Results demonstrate a double-dissociation between the predictions of our causal model and a related taxonomic model [Kemp, C., & Tenenbaum, J. B. (2003). Learning domain structures. In Proceedings of the 25th annual conference of the cognitive science society]: the causal model predicts human inferences about diseases but not genes, while the taxonomic model predicts human inferences about genes but not diseases. We contrast our framework with previous models of category-based induction and previous formal instantiations of intuitive theories, and outline challenges in developing a complete model of context-sensitive reasoning.

4.
Currently, two frameworks of causal reasoning compete: Whereas dependency theories focus on dependencies between causes and effects, dispositional theories model causation as an interaction between agents and patients endowed with intrinsic dispositions. One important finding providing a bridge between these two frameworks is that failures of causes to generate their effects tend to be differentially attributed to agents and patients regardless of their location on either the cause or the effect side. To model different types of error attribution, we augmented a causal Bayes net model with separate error sources for causes and effects. In several experiments, we tested this new model using the size of Markov violations as the empirical indicator of differential assumptions about the sources of error. As predicted by the model, the size of Markov violations was influenced by the location of the agents and was moderated by the causal structure and the type of causal variables.

5.
6.
How do we make causal judgments? Many studies have demonstrated that people are capable causal reasoners, achieving success on tasks from reasoning to categorization to interventions. However, less is known about the mental processes used to achieve such sophisticated judgments. We propose a new process model—the mutation sampler—that models causal judgments as based on a sample of possible states of the causal system generated using the Metropolis–Hastings sampling algorithm. Across a diverse array of tasks and conditions encompassing over 1,700 participants, we found that our model provided a consistently closer fit to participant judgments than standard causal graphical models. In particular, we found that the biases introduced by mutation sampling accounted for people's consistent, predictable errors that the normative model by definition could not. Moreover, using a novel experimental methodology, we found that those biases appeared in the samples that participants explicitly judged to be representative of a causal system. We conclude by advocating sampling methods as plausible process-level accounts of the computations specified by the causal graphical model framework and highlight opportunities for future research to identify not just what reasoners compute when drawing causal inferences, but also how they compute it.
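To make the sampling idea concrete, here is a minimal Metropolis–Hastings sketch over the full states of a small causal system (a three-variable chain with made-up probabilities). It illustrates the general technique the mutation sampler builds on, not the authors' implementation, which adds further assumptions about starting states and sample sizes:

```python
import random

# Illustrative chain A -> B -> C over binary variables; probabilities are
# assumptions chosen for the example, not the authors' stimuli.
p_a = 0.5
p_b_given_a = {1: 0.9, 0: 0.1}
p_c_given_b = {1: 0.9, 0: 0.1}

def prob(state):
    """Probability of a full state (a, b, c) under the chain."""
    a, b, c = state
    pa = p_a if a else 1 - p_a
    pb = p_b_given_a[a] if b else 1 - p_b_given_a[a]
    pc = p_c_given_b[b] if c else 1 - p_c_given_b[b]
    return pa * pb * pc

def metropolis_hastings(n_samples, start=(0, 0, 0), seed=0):
    """Propose single-variable flips ("mutations") and accept with the MH rule."""
    rng = random.Random(seed)
    state, samples = start, []
    for _ in range(n_samples):
        i = rng.randrange(3)                 # pick one variable to mutate
        proposal = list(state)
        proposal[i] = 1 - proposal[i]
        proposal = tuple(proposal)
        if rng.random() < min(1.0, prob(proposal) / prob(state)):
            state = proposal                 # accept the proposed state
        samples.append(state)
    return samples

samples = metropolis_hastings(20000)
# Estimate P(C = 1 | A = 1) from the sampled states (exact value: 0.82).
kept = [s for s in samples if s[0] == 1]
print(sum(s[2] for s in kept) / len(kept))
```

Estimates read off a finite, autocorrelated chain of samples deviate systematically from the exact probabilities, which is the kind of bias a sampling-based process account can use to explain reasoners' predictable errors.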

7.
Knowledge of mechanisms is critical for causal reasoning. We contrasted two possible organizations of causal knowledge—an interconnected causal network, where events are causally connected without any boundaries delineating discrete mechanisms; or a set of disparate mechanisms—causal islands—such that events in different mechanisms are not thought to be related even when they belong to the same causal chain. To distinguish these possibilities, we tested whether people make transitive judgments about causal chains by inferring, given A causes B and B causes C, that A causes C. Specifically, causal chains schematized as one chunk or mechanism in semantic memory (e.g., exercising, becoming thirsty, drinking water) led to transitive causal judgments. On the other hand, chains schematized as multiple chunks (e.g., having sex, becoming pregnant, becoming nauseous) led to intransitive judgments despite strong intermediate links (Experiments 1–3). Normative accounts of causal intransitivity could not explain these intransitive judgments (Experiments 4 and 5).

8.
Young children spend a large portion of their time pretending about non-real situations. Why? We answer this question by using the framework of Bayesian causal models to argue that pretending and counterfactual reasoning engage the same component cognitive abilities: disengaging with current reality, making inferences about an alternative representation of reality, and keeping this representation separate from reality. In turn, according to causal models accounts, counterfactual reasoning is a crucial tool that children need to plan for the future and learn about the world. Both planning with causal models and learning about them require the ability to create false premises and generate conclusions from these premises. We argue that pretending allows children to practice these important cognitive skills. We also consider the prevalence of unrealistic scenarios in children's play and explain how they can be useful in learning, despite appearances to the contrary.

9.
Four experiments with preschool-aged children test the hypothesis that engaging in explanation promotes inductive reasoning on the basis of shared causal properties as opposed to salient (but superficial) perceptual properties. In Experiments 1a and 1b, 3- to 5-year-old children prompted to explain during a causal learning task were more likely to override a tendency to generalize according to perceptual similarity and instead extend an internal feature to an object that shared a causal property. Experiment 2 replicated this effect of explanation in a case of label extension (i.e., categorization). Experiment 3 demonstrated that explanation improves memory for clusters of causally relevant (non-perceptual) features, but impairs memory for superficial (perceptual) features, providing evidence that effects of explanation are selective in scope and apply to memory as well as inference. In sum, our data support the proposal that engaging in explanation influences children’s reasoning by privileging inductively rich, causal properties.

10.
Cognitive Development, 2005, 20(1), 87–101
Causal reasoning is the core and basis of cognition about the objective world. This experiment studied the development of causal reasoning in 86 3.5–4.5-year-olds using a ramp apparatus with two input holes and two output holes [Frye, D., Zelazo, P. D., & Palfai, T. (1995). Theory of mind and rule-based reasoning. Cognitive Development, 10, 483–527]. Results revealed that: (1) children performed better on cause–effect inferences than on effect–cause inferences; (2) there was an effect of rule complexity such that uni-dimensional causal inferences were easier than bi-dimensional inferences which, in turn, were easier than tri-dimensional causal inferences; and (3) children’s causal reasoning develops rapidly between the ages of 3.5 and 4 years.

11.
The application of the formal framework of causal Bayesian Networks to children’s causal learning provides the motivation to examine the link between judgments about the causal structure of a system, and the ability to make inferences about interventions on components of the system. Three experiments examined whether children are able to make correct inferences about interventions on different causal structures. The first two experiments examined whether children’s causal structure and intervention judgments were consistent with one another. In Experiment 1, children aged between 4 and 8 years made causal structure judgments on a three-component causal system followed by counterfactual intervention judgments. In Experiment 2, children’s causal structure judgments were followed by intervention judgments phrased as future hypotheticals. In Experiment 3, we explicitly told children what the correct causal structure was and asked them to make intervention judgments. The results of the three experiments suggest that the representations that support causal structure judgments do not easily support simple judgments about interventions in children. We discuss our findings in light of strong interventionist claims that the two types of judgments should be closely linked.

12.
Research on human causal induction has shown that people have general prior assumptions about causal strength and about how causes interact with the background. We propose that these prior assumptions about the parameters of causal systems do not only manifest themselves in estimations of causal strength or the selection of causes but also when deciding between alternative causal structures. In three experiments, we requested subjects to choose which of two observable variables was the cause and which the effect. We found strong evidence that learners have interindividually variable but intraindividually stable priors about causal parameters that express a preference for causal determinism (sufficiency or necessity; Experiment 1). These priors predict which structure subjects preferentially select. The priors can be manipulated experimentally (Experiment 2) and appear to be domain-general (Experiment 3). Heuristic strategies of structure induction are suggested that can be viewed as simplified implementations of the priors.

13.
Experiences of having caused a certain outcome may arise from motor predictions based on action–outcome probabilities and causal inferences based on pre-activated outcome representations. However, when and how both indicators combine to affect such self-agency experiences is still unclear. Based on previous research on prediction and inference effects on self-agency, we propose that their (combined) contribution crucially depends on whether people have knowledge about the causal relation between actions and outcomes that is relevant to subsequent self-agency experiences. Therefore, we manipulated causal knowledge that was either relevant or irrelevant by varying the probability of co-occurrence (50% or 80%) of specific actions and outcomes. Afterwards, we measured self-agency experiences in an action–outcome task where outcomes were primed or not. Results showed that motor prediction only affected self-agency when relevant actions and outcomes were learned to be causally related. Interestingly, however, inference effects also occurred when no relevant causal knowledge was acquired.

14.
Do We “do”?     
A normative framework for modeling causal and counterfactual reasoning has been proposed by Spirtes, Glymour, and Scheines (1993; cf. Pearl, 2000). The framework takes as fundamental that reasoning from observation and intervention differ. Intervention includes actual manipulation as well as counterfactual manipulation of a model via thought. To represent intervention, Pearl employed the do operator that simplifies the structure of a causal model by disconnecting an intervened-on variable from its normal causes. Construing the do operator as a psychological function affords predictions about how people reason when asked counterfactual questions about causal relations that we refer to as undoing, a family of effects that derive from the claim that intervened-on variables become independent of their normal causes. Six studies support the prediction for causal (A causes B) arguments but not consistently for parallel conditional (if A then B) ones. Two of the studies show that effects are treated as diagnostic when their values are observed but nondiagnostic when they are intervened on. These results cannot be explained by theories that do not distinguish interventions from other sorts of events.
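A small sketch of the graph-surgery idea behind the do operator, using a chain A → B → C with probabilities assumed here for illustration: conditioning on an observed value of B is diagnostic of its cause A, whereas do(B = b) disconnects B from A and leaves A at its prior, which is the "undoing" prediction.

```python
from itertools import product

# Chain A -> B -> C with assumed (illustrative) probabilities.
p_a = 0.5
p_b_given_a = {1: 0.9, 0: 0.1}
p_c_given_b = {1: 0.9, 0: 0.1}

def joint(a, b, c, do_b=None):
    """Joint probability; under do(B=b) the A -> B link is cut (graph surgery)."""
    pa = p_a if a else 1 - p_a
    if do_b is None:
        pb = p_b_given_a[a] if b else 1 - p_b_given_a[a]
    else:
        pb = 1.0 if b == do_b else 0.0       # B is fixed by the intervention
    pc = p_c_given_b[b] if c else 1 - p_c_given_b[b]
    return pa * pb * pc

def p_a1_given_b(b_val, do_b=None):
    """P(A=1 | B=b) when observing, or P(A=1 | do(B=b)) when intervening."""
    num = den = 0.0
    for a, b, c in product([0, 1], repeat=3):
        if b == b_val:
            p = joint(a, b, c, do_b=do_b)
            den += p
            if a == 1:
                num += p
    return num / den

print(p_a1_given_b(0))           # observed B=0 is diagnostic: P(A=1|B=0) = 0.1
print(p_a1_given_b(0, do_b=0))   # intervened: P(A=1|do(B=0)) = 0.5 (undoing)
```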

15.
This research examined the conditions under which people who have more chronic doubt about their ability to make sense of social behavior (i.e., are causally uncertain; Weary & Edwards, 1994, 1996) are more likely to adjust their dispositional inferences for a target’s behaviors. Using a cognitive busyness manipulation within the attitude attribution paradigm, we found in Study 1 that higher causal uncertainty predicted increased correction of dispositional inferences, but only when participants had sufficient attentional resources to devote to the task. In Study 2, we found that higher causal uncertainty predicted greater inferential correction, but only when the additional information provided a more compelling alternative explanation for the observed behavior. Results of this research are discussed in terms of their relevance to the Causal Uncertainty (Weary & Edwards, 1994) and dispositional inference models.

16.
In two experiments, we investigated the relative impact of causal beliefs and empirical evidence on both decision making and causal judgments, and whether this relative impact could be altered by previous experience. Participants had to decide which of two alternatives would attain a higher outcome on the basis of four cues. After completing the decision task, they were asked to estimate to what extent each cue was a reliable cause of the outcome. Participants were provided with instructions that causally related two of the cues to the outcome, whereas they received neutral information about the other two cues. Two of the four cues—a causal and a neutral cue—had high validity and were both generative. The remaining two cues had low validity, and were generative in Experiment 1, but almost not related to the outcome in Experiment 2. Selected groups of participants in both experiments received pre-training with either causal or neutral cues, or no pre-training was provided. Results revealed that the impact of causal beliefs and empirical evidence depends on both the experienced pre-training and cue validity. When all cues were generative and participants received pre-training with causal cues, they mostly relied on their causal beliefs, whereas they relied on empirical evidence when they received pre-training with neutral cues. In contrast, when some of the cues were almost not related to the outcome, participants’ responses were primarily influenced by validity and—to a lesser extent—by causal beliefs. In either case, however, the influence of causal beliefs was higher in causal judgments than in decision making. While current theoretical approaches in causal learning focus either on the effect of causal beliefs or empirical evidence, the present research shows that both factors are required to explain the flexibility involved in human inferences.

17.
Previous research suggests that children can infer causal relations from patterns of events. However, what appear to be cases of causal inference may simply reduce to children recognizing relevant associations among events, and responding based on those associations. To examine this claim, in Experiments 1 and 2, children were introduced to a “blicket detector,” a machine that lit up and played music when certain objects were placed upon it. Children observed patterns of contingency between objects and the machine’s activation that required them to use indirect evidence to make causal inferences. Critically, associative models either made no predictions, or made incorrect predictions about these inferences. In general, children were able to make these inferences, but some developmental differences between 3- and 4-year-olds were found. We suggest that children’s causal inferences are not based on recognizing associations, but rather that children develop a mechanism for Bayesian structure learning. Experiment 3 explicitly tests a prediction of this account. Children were asked to make an inference about ambiguous data based on the base rate of certain events occurring. Four-year-olds, but not 3-year-olds, were able to make this inference.
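The base-rate prediction tested in Experiment 3 can be illustrated with a toy Bayesian computation. The sketch below rests on assumptions made here, not taken from the study: a deterministic detector that activates whenever at least one blicket is placed on it, and independent priors over which objects are blickets.

```python
from itertools import product

# Blicket-detector sketch (not the authors' code). Assumptions: the detector
# activates whenever at least one blicket is on it, and objects are blickets
# independently with probability base_rate.
def posterior_b_is_blicket(base_rate):
    """P(B is a blicket | {A, B} activated the machine, then A alone did)."""
    num = den = 0.0
    for a_blicket, b_blicket in product([0, 1], repeat=2):
        prior = ((base_rate if a_blicket else 1 - base_rate)
                 * (base_rate if b_blicket else 1 - base_rate))
        # Likelihood of both observations under a deterministic detector.
        likelihood = float(a_blicket or b_blicket) * float(a_blicket)
        p = prior * likelihood
        den += p
        if b_blicket:
            num += p
    return num / den

print(posterior_b_is_blicket(0.1))   # rare blickets: B is probably not one
print(posterior_b_is_blicket(0.9))   # common blickets: B probably is one
```

Given this ambiguous evidence the activations identify A as a blicket but leave B unresolved, so the posterior for B falls back on the base rate, which is why rare versus common blickets should yield different answers.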

18.
In making causal inferences, children must both identify a causal problem and selectively attend to meaningful evidence. Four experiments demonstrate that verbally framing an event (“Which animals make Lion laugh?”) helps 4-year-olds extract evidence from a complex scene to make accurate causal inferences. Whereas framing was unnecessary when evidence was isolated, children required it to extract and reason about evidence embedded in a more complex scene. Subtler framing stating the causal problem, but not highlighting the relevant variables, was equally effective. Simply making the causal relationship more perceptually obvious did facilitate children's inferences, but not to the level of verbal framing. These results illustrate how children's causal reasoning relies on scaffolding from adults.

19.
Causal reasoning is crucial to people’s decision making in probabilistic environments. It may rely directly on data about covariation between variables (correspondence) or on inferences based on reasonable constraints if larger causal models are constructed based on local relations (coherence). For causal chains an often assumed constraint is transitivity. For probabilistic causal relations, mismatches between such transitive inferences and direct empirical evidence may lead to distortions of empirical evidence. Previous work has shown that people may use the generative local causal relations A → B and B → C to infer a positive indirect relation between events A and C, despite data showing that these events are actually independent (von Sydow et al., 2009, 2010, 2016). Here we used a sequential learning scenario to investigate how transitive reasoning in intransitive situations with negatively related distal events may relate to betting behavior. In three experiments participants bet as if they were influenced by a transitivity assumption, even when the data strongly contradicted transitivity.
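The kind of mismatch at issue can be reproduced with a small assumed distribution (the numbers below are ours, chosen for illustration only): both local relations A → B and B → C are generative, yet A and C are negatively related, so chaining the local relations transitively points in the wrong direction.

```python
from itertools import product

# Assumed distribution (ours, for illustration): A raises B and B raises C,
# yet A and C are negatively related, so transitive chaining misleads.
p_a = 0.5
p_b_given_a = {1: 0.8, 0: 0.2}
p_c_given_ab = {(1, 1): 0.5, (1, 0): 0.1, (0, 1): 0.9, (0, 0): 0.5}

def joint(a, b, c):
    pa = p_a if a else 1 - p_a
    pb = p_b_given_a[a] if b else 1 - p_b_given_a[a]
    pc = p_c_given_ab[(a, b)] if c else 1 - p_c_given_ab[(a, b)]
    return pa * pb * pc

def p_c1(**given):
    """P(C = 1 | given), e.g. p_c1(a=1) or p_c1(b=0), by enumeration."""
    num = den = 0.0
    for a, b, c in product([0, 1], repeat=3):
        state = {"a": a, "b": b, "c": c}
        if all(state[k] == v for k, v in given.items()):
            p = joint(a, b, c)
            den += p
            if c == 1:
                num += p
    return num / den

# Local relations are generative: P(B|A) = 0.8 > 0.2 by construction, and:
print(p_c1(b=1), p_c1(b=0))               # P(C|B) = 0.58 > P(C|~B) = 0.42
# A transitive chain prediction therefore says A should raise C:
print(p_c1(b=1) * 0.8 + p_c1(b=0) * 0.2)  # 0.548
# ...but in the actual distribution A and C are negatively related:
print(p_c1(a=1), p_c1(a=0))               # P(C|A) = 0.42 < P(C|~A) = 0.58
```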

20.
The psychology of reasoning is increasingly considering agents' values and preferences, achieving greater integration with judgment and decision making, social cognition, and moral reasoning. Some of this research investigates utility conditionals, “if p then q” statements where the realization of p or q or both is valued by some agents. Various approaches to utility conditionals share the assumption that reasoners make inferences from utility conditionals based on the comparison between the utility of p and the expected utility of q. This article introduces a new parameter in this analysis, the underlying causal structure of the conditional. Four experiments showed that causal structure moderated utility-informed conditional reasoning. These inferences were strongly invited when the underlying structure of the conditional was causal, and significantly less so when the underlying structure of the conditional was diagnostic. This asymmetry was only observed for conditionals in which the utility of q was clear, and disappeared when the utility of q was unclear. Thus, an adequate account of utility-informed inferences in conditional reasoning requires three components: utility, probability, and causal structure.
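A minimal sketch of the shared utility comparison (the decision rule and numbers are illustrative assumptions, not the article's model): from "if p then q", an agent is predicted to bring about p when the utility of p plus the probability-weighted utility of q is positive.

```python
# Illustrative decision rule for utility conditionals; values are assumptions.
def agent_will_do_p(u_p, u_q, prob_q_given_p):
    """Predict p whenever u(p) plus the expected utility of q is positive."""
    return u_p + prob_q_given_p * u_q > 0

# "If you park here (mildly convenient), you will be fined (costly)."
print(agent_will_do_p(u_p=1.0, u_q=-5.0, prob_q_given_p=0.8))   # False
# "If you take the course (costly effort), you will get the job (valuable)."
print(agent_will_do_p(u_p=-1.0, u_q=5.0, prob_q_given_p=0.8))   # True
```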
