1.
We present a framework for the rational analysis of elemental causal induction (learning about the existence of a relationship between a single cause and effect) based upon causal graphical models. This framework makes precise the distinction between causal structure and causal strength: the difference between asking whether a causal relationship exists and asking how strong that causal relationship might be. We show that two leading rational models of elemental causal induction, DeltaP and causal power, both estimate causal strength, and we introduce a new rational model, causal support, that assesses causal structure. Causal support predicts several key phenomena of causal induction that cannot be accounted for by other rational models, which we explore through a series of experiments. These phenomena include the complex interaction between DeltaP and the base-rate probability of the effect in the absence of the cause, sample size effects, inferences from incomplete contingency tables, and causal learning from rates. Causal support also provides a better account of a number of existing datasets than either DeltaP or causal power.
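The two strength models named in this abstract have standard closed forms (Cheng's 1997 causal power for a generative cause); a minimal sketch computing both from a 2x2 contingency table (the function names and count layout are our own):

```python
def delta_p(a, b, c, d):
    """DeltaP = P(e|c) - P(e|~c) from a 2x2 contingency table:
    a = cause present, effect present;  b = cause present, effect absent
    c = cause absent,  effect present;  d = cause absent,  effect absent
    """
    return a / (a + b) - c / (c + d)

def causal_power(a, b, c, d):
    """Generative causal power: DeltaP / (1 - P(e|~c)),
    i.e., DeltaP corrected for the base rate of the effect."""
    base_rate = c / (c + d)
    return delta_p(a, b, c, d) / (1.0 - base_rate)

# Example: effect on 6/8 trials with the cause, 2/8 without
print(delta_p(6, 2, 2, 6))      # 0.5
print(causal_power(6, 2, 2, 6)) # 0.666...
```

Causal support, by contrast, is not a point estimate of strength but a comparison between two graph structures (with and without the c-to-e link), so it has no comparably short closed form.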
2.
Causal queries about singular cases, which inquire whether specific events were causally connected, are prevalent in daily life and important in professional disciplines such as the law, medicine, or engineering. Because causal links cannot be directly observed, singular causation judgments require an assessment of whether a co-occurrence of two events c and e was causal or simply coincidental. How can this decision be made? Building on previous work by Cheng and Novick (2005) and Stephan and Waldmann (2018), we propose a computational model that combines information about the causal strengths of the potential causes with information about their temporal relations to derive answers to singular causation queries. The relative causal strengths of the potential cause factors are relevant because weak causes are more likely to fail to generate effects than strong causes. But even a strong cause factor does not necessarily need to be causal in a singular case because it could have been preempted by an alternative cause. We here show how information about causal strength and about two different temporal parameters, the potential causes' onset times and their causal latencies, can be formalized and integrated into a computational account of singular causation. Four experiments are presented in which we tested the validity of the model. The results showed that people integrate the different types of information as predicted by the new model.
3.
People learn quickly when reasoning about causal relationships, making inferences from limited data and avoiding spurious inferences. Efficient learning depends on abstract knowledge, which is often domain or context specific, and much of it must be learned. While such knowledge effects are well documented, little is known about exactly how we acquire knowledge that constrains learning. This work focuses on knowledge of the functional form of causal relationships; there are many kinds of relationships that can apply between causes and their effects, and knowledge of the form such a relationship takes is important in order to quickly identify the real causes of an observed effect. We developed a hierarchical Bayesian model of the acquisition of knowledge of the functional form of causal relationships and tested it in five experimental studies, considering disjunctive and conjunctive relationships, failure rates, and cross-domain effects. The Bayesian model accurately predicted human judgments and outperformed several alternative models.
4.
Young children spend a large portion of their time pretending about non‐real situations. Why? We answer this question by using the framework of Bayesian causal models to argue that pretending and counterfactual reasoning engage the same component cognitive abilities: disengaging with current reality, making inferences about an alternative representation of reality, and keeping this representation separate from reality. In turn, according to causal models accounts, counterfactual reasoning is a crucial tool that children need to plan for the future and learn about the world. Both planning with causal models and learning about them require the ability to create false premises and generate conclusions from these premises. We argue that pretending allows children to practice these important cognitive skills. We also consider the prevalence of unrealistic scenarios in children's play and explain how they can be useful in learning, despite appearances to the contrary.
5.
Determining the knowledge that guides human judgments is fundamental to understanding how people reason, make decisions, and form predictions. We use an experimental procedure called 'iterated learning,' in which the responses that people give on one trial are used to generate the data they see on the next, to pinpoint the knowledge that informs people's predictions about everyday events (e.g., predicting the total box office gross of a movie from its current take). In particular, we use this method to discriminate between two models of human judgments: a simple Bayesian model (Griffiths & Tenenbaum, 2006) and a recently proposed alternative model that assumes people store only a few instances of each type of event in memory (Min K; Mozer, Pashler, & Homaei, 2008). Although testing these models using standard experimental procedures is difficult due to differences in the number of free parameters and the need to make assumptions about the knowledge of individual learners, we show that the two models make very different predictions about the outcome of iterated learning. The results of an experiment using this methodology provide a rich picture of how much people know about the distributions of everyday quantities, and they are inconsistent with the predictions of the Min K model. The results suggest that accurate predictions about everyday events reflect relatively sophisticated knowledge on the part of individuals.
6.
In diagnostic causal reasoning, the goal is to infer the probability of causes from one or multiple observed effects. Typically, studies investigating such tasks provide subjects with precise quantitative information regarding the strength of the relations between causes and effects or sample data from which the relevant quantities can be learned. By contrast, we sought to examine people’s inferences when causal information is communicated through qualitative, rather vague verbal expressions (e.g., “X occasionally causes A”). We conducted three experiments using a sequential diagnostic inference task, where multiple pieces of evidence were obtained one after the other. Quantitative predictions of different probabilistic models were derived using the numerical equivalents of the verbal terms, taken from an unrelated study with different subjects. We present a novel Bayesian model that allows for incorporating the temporal weighting of information in sequential diagnostic reasoning, which can be used to model both primacy and recency effects. On the basis of 19,848 judgments from 292 subjects, we found a remarkably close correspondence between the diagnostic inferences made by subjects who received only verbal information and those of a matched control group to whom information was presented numerically. Whether information was conveyed through verbal terms or numerical estimates, diagnostic judgments closely resembled the posterior probabilities entailed by the causes’ prior probabilities and the effects’ likelihoods. We observed interindividual differences regarding the temporal weighting of evidence in sequential diagnostic reasoning. Our work provides pathways for investigating judgment and decision making with verbal information within a computational modeling framework.
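The baseline computation in such tasks, updating a posterior over candidate causes one observed effect at a time, can be sketched as follows (the two causes, the effect likelihoods, and the numeric readings of verbal terms like "occasionally" are our illustrative assumptions, and the causes are treated as mutually exclusive hypotheses for simplicity):

```python
def sequential_posterior(prior, likelihoods, evidence):
    """Update P(cause | effects seen so far), folding in one effect at a time.
    prior: {cause: P(cause)}; likelihoods: {cause: {effect: P(effect|cause)}}."""
    posterior = dict(prior)
    for effect in evidence:
        unnorm = {c: posterior[c] * likelihoods[c][effect] for c in posterior}
        z = sum(unnorm.values())
        posterior = {c: p / z for c, p in unnorm.items()}
    return posterior

# Reading "X occasionally causes A" as P(A|X) = 0.2
# and "Y frequently causes A" as P(A|Y) = 0.7:
prior = {"X": 0.5, "Y": 0.5}
lik = {"X": {"A": 0.2, "B": 0.6}, "Y": {"A": 0.7, "B": 0.1}}
post = sequential_posterior(prior, lik, ["A", "A"])
print(post["Y"])  # 0.9245...: two occurrences of A strongly implicate Y
```

The temporal-weighting model in the abstract extends this baseline by letting earlier or later pieces of evidence carry different weight, which this equal-weight sketch omits.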
7.
Languages are transmitted from person to person and generation to generation via a process of iterated learning: people learn a language from other people who once learned that language themselves. We analyze the consequences of iterated learning for learning algorithms based on the principles of Bayesian inference, assuming that learners compute a posterior distribution over languages by combining a prior (representing their inductive biases) with the evidence provided by linguistic data. We show that when learners sample languages from this posterior distribution, iterated learning converges to a distribution over languages that is determined entirely by the prior. Under these conditions, iterated learning is a form of Gibbs sampling, a widely-used Markov chain Monte Carlo algorithm. The consequences of iterated learning are more complicated when learners choose the language with maximum posterior probability, being affected by both the prior of the learners and the amount of information transmitted between generations. We show that in this case, iterated learning corresponds to another statistical inference algorithm, a variant of the expectation-maximization (EM) algorithm. These results clarify the role of iterated learning in explanations of linguistic universals and provide a formal connection between constraints on language acquisition and the languages that come to be spoken, suggesting that information transmitted via iterated learning will ultimately come to mirror the minds of the learners.
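The convergence-to-the-prior result for sampling learners can be illustrated with a tiny simulation (the two-language hypothesis space, the probabilities, and the seed are our illustrative choices): each generation samples a hypothesis from the posterior given one datum produced by the previous generation, and the chain's long-run distribution over hypotheses matches the prior, not the starting language.

```python
import random

random.seed(0)

PRIOR = [0.7, 0.3]                 # inductive bias over two "languages"
LIK = [[0.8, 0.2], [0.2, 0.8]]     # P(datum d | hypothesis h), d in {0, 1}

def next_learner(h):
    # The previous learner produces one datum from its language...
    d = 0 if random.random() < LIK[h][0] else 1
    # ...and the next learner samples a hypothesis from the posterior.
    p0 = PRIOR[0] * LIK[0][d]
    p1 = PRIOR[1] * LIK[1][d]
    return 0 if random.random() < p0 / (p0 + p1) else 1

h, count0, steps = 0, 0, 20000
for _ in range(steps):
    h = next_learner(h)
    count0 += (h == 0)

print(count0 / steps)  # close to PRIOR[0] = 0.7
```

One can verify the stationarity claim by hand: if hypotheses are distributed according to the prior, marginalizing over the transmitted datum returns exactly the prior again, which is the paper's Gibbs-sampling observation in miniature.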
8.
People's reactions to coincidences are often cited as an illustration of the irrationality of human reasoning about chance. We argue that coincidences may be better understood in terms of rational statistical inference, based on their functional role in processes of causal discovery and theory revision. We present a formal definition of coincidences in the context of a Bayesian framework for causal induction: a coincidence is an event that provides support for an alternative to a currently favored causal theory, but not necessarily enough support to accept that alternative in light of its low prior probability. We test the qualitative and quantitative predictions of this account through a series of experiments that examine the transition from coincidence to evidence, the correspondence between the strength of coincidences and the statistical support for causal structure, and the relationship between causes and coincidences. Our results indicate that people can accurately assess the strength of coincidences, suggesting that irrational conclusions drawn from coincidences are the consequence of overestimation of the plausibility of novel causal forces. We discuss the implications of our account for understanding the role of coincidences in theory change.
9.
Florencia Reali 《Cognition》2009,111(3):317-328
The regularization of linguistic structures by learners has played a key role in arguments for strong innate constraints on language acquisition, and has important implications for language evolution. However, relating the inductive biases of learners to regularization behavior in laboratory tasks can be challenging without a formal model. In this paper we explore how regular linguistic structures can emerge from language evolution by iterated learning, in which one person’s linguistic output is used to generate the linguistic input provided to the next person. We use a model of iterated learning with Bayesian agents to show that this process can result in regularization when learners have the appropriate inductive biases. We then present three experiments demonstrating that simulating the process of language evolution in the laboratory can reveal biases towards regularization that might not otherwise be obvious, allowing weak biases to have strong effects. The results of these experiments suggest that people tend to regularize inconsistent word-meaning mappings, and that even a weak bias towards regularization can allow regular languages to be produced via language evolution by iterated learning.
10.
Kate Nussenbaum Alexandra O. Cohen Zachary J. Davis David J. Halpern Todd M. Gureckis Catherine A. Hartley 《Cognitive Science》2020,44(9):e12888
Intervening on causal systems can illuminate their underlying structures. Past work has shown that, relative to adults, young children often make intervention decisions that appear to confirm a single hypothesis rather than those that optimally discriminate alternative hypotheses. Here, we investigated how the ability to make informative causal interventions changes across development. Ninety participants between the ages of 7 and 25 completed 40 different puzzles in which they had to intervene on various causal systems to determine their underlying structures. Each puzzle comprised a three- or four-node computer chip with hidden wires. On each trial, participants viewed two possible arrangements of the chip's hidden wires and had to select a single node to activate. After observing the outcome of their intervention, participants selected a wire configuration and rated their confidence in their selection. We characterized participant choices with a Bayesian measurement model that indexed the extent to which participants selected nodes that would best disambiguate the two possible causal structures versus those that had high causal centrality in one of the two causal hypotheses but did not necessarily discriminate between them. Our model estimates revealed that the use of a discriminatory strategy increased through early adolescence. Further, developmental improvements in intervention strategy were related to changes in the ability to accurately judge the strength of evidence that interventions revealed, as indexed by participants' confidence in their selections. Our results suggest that improvements in causal information-seeking extend into adolescence and may be driven by metacognitive sensitivity to the efficacy of previous interventions in discriminating competing ideas.
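The discriminatory strategy described above can be sketched as picking the node whose activation is predicted to produce the most divergent outcomes under the two candidate wirings. The three-node chip, deterministic wire propagation, and symmetric-difference scoring rule below are our simplifying assumptions, not the authors' measurement model:

```python
def descendants(graph, node):
    """Nodes activated (directly or transitively) by intervening on `node`,
    assuming deterministic wires; graph maps node -> list of children."""
    seen, stack = set(), [node]
    while stack:
        n = stack.pop()
        for child in graph.get(n, []):
            if child not in seen:
                seen.add(child)
                stack.append(child)
    return seen

def best_intervention(h1, h2, nodes):
    """Choose the node whose predicted activation pattern differs most
    between the two hypothesized structures."""
    def score(n):
        return len(descendants(h1, n) ^ descendants(h2, n))
    return max(nodes, key=score)

# Chain A->B->C vs. common cause A->{B, C}: activating A lights B and C
# under both wirings, so only activating B discriminates them.
chain = {"A": ["B"], "B": ["C"]}
fork = {"A": ["B", "C"]}
print(best_intervention(chain, fork, ["A", "B", "C"]))  # B
```

Note how the confirmatory choice (A, the causally central node in both hypotheses) is exactly the uninformative one here, which mirrors the contrast the measurement model indexes.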
11.
Many of the problems studied in cognitive science are inductive problems, requiring people to evaluate hypotheses in the light of data. The key to solving these problems successfully is having the right inductive biases—assumptions about the world that make it possible to choose between hypotheses that are equally consistent with the observed data. This article explores a novel experimental method for identifying the biases that guide human inductive inferences. The idea behind this method is simple: This article uses the responses produced by a participant on one trial to generate the stimuli that either they or another participant will see on the next. A formal analysis of this \"iterated learning\" procedure, based on the assumption that the learners are Bayesian agents, predicts that it should reveal the inductive biases of these learners, as expressed in a prior probability distribution over hypotheses. This article presents a series of experiments using stimuli based on a well-studied set of category structures, demonstrating that iterated learning can be used to reveal the inductive biases of human learners. 相似文献
12.
How do we make causal judgments? Many studies have demonstrated that people are capable causal reasoners, achieving success on tasks from reasoning to categorization to interventions. However, less is known about the mental processes used to achieve such sophisticated judgments. We propose a new process model—the mutation sampler—that models causal judgments as based on a sample of possible states of the causal system generated using the Metropolis–Hastings sampling algorithm. Across a diverse array of tasks and conditions encompassing over 1,700 participants, we found that our model provided a consistently closer fit to participant judgments than standard causal graphical models. In particular, we found that the biases introduced by mutation sampling accounted for people's consistent, predictable errors that the normative model by definition could not. Moreover, using a novel experimental methodology, we found that those biases appeared in the samples that participants explicitly judged to be representative of a causal system. We conclude by advocating sampling methods as plausible process-level accounts of the computations specified by the causal graphical model framework and highlight opportunities for future research to identify not just what reasoners compute when drawing causal inferences, but also how they compute it.
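The core mechanism, sampling system states by mutating one variable at a time and accepting or rejecting each mutation Metropolis–Hastings style, can be sketched as follows (the two-variable network C -> E, its parameters, and the seed are our illustrative choices, not the authors' stimuli):

```python
import random

random.seed(1)

# Joint distribution of a two-variable causal network C -> E
P_C = 0.5
P_E_GIVEN_C = {1: 0.9, 0: 0.1}

def joint(c, e):
    pc = P_C if c else 1 - P_C
    pe = P_E_GIVEN_C[c] if e else 1 - P_E_GIVEN_C[c]
    return pc * pe

def mutation_step(state):
    """Propose flipping one randomly chosen variable; accept with
    the Metropolis ratio p(proposal) / p(current)."""
    c, e = state
    proposal = (1 - c, e) if random.random() < 0.5 else (c, 1 - e)
    if random.random() < joint(*proposal) / joint(*state):
        return proposal
    return state

state, hits, steps = (0, 0), 0, 20000
for _ in range(steps):
    state = mutation_step(state)
    hits += state[1]  # count visits with E = 1

print(hits / steps)  # close to the true marginal P(E=1) = 0.5
```

Because the flip proposal is symmetric, this is a valid Metropolis sampler for the network's joint distribution; the model's predicted biases arise when only a short, autocorrelated prefix of such a chain is used to form judgments.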
13.
Bob Rehder 《Cognitive Science》2003,27(5):709-748
A theory of categorization is presented in which knowledge of causal relationships between category features is represented in terms of asymmetric and probabilistic causal mechanisms. According to causal‐model theory, objects are classified as category members to the extent they are likely to have been generated or produced by those mechanisms. The empirical results confirmed that participants rated exemplars good category members to the extent their features manifested the expectations that causal knowledge induces, such as correlations between feature pairs that are directly connected by causal relationships. These expectations also included sensitivity to higher‐order feature interactions that emerge from the asymmetries inherent in causal relationships. Quantitative fits of causal‐model theory were superior to those obtained with extensions to traditional similarity‐based models that represent causal knowledge either as higher‐order relational features or “prior exemplars” stored in memory.
14.
A normative framework for modeling causal and counterfactual reasoning has been proposed by Spirtes, Glymour, and Scheines (1993; cf. Pearl, 2000). The framework takes as fundamental that reasoning from observation and intervention differ. Intervention includes actual manipulation as well as counterfactual manipulation of a model via thought. To represent intervention, Pearl employed the do operator that simplifies the structure of a causal model by disconnecting an intervened-on variable from its normal causes. Construing the do operator as a psychological function affords predictions about how people reason when asked counterfactual questions about causal relations that we refer to as undoing, a family of effects that derive from the claim that intervened-on variables become independent of their normal causes. Six studies support the prediction for causal (A causes B) arguments but not consistently for parallel conditional (if A then B) ones. Two of the studies show that effects are treated as diagnostic when their values are observed but nondiagnostic when they are intervened on. These results cannot be explained by theories that do not distinguish interventions from other sorts of events.
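The diagnostic asymmetry at the heart of the undoing effect can be made concrete by exact enumeration on a two-variable model A -> B (the numeric parameters are our illustrative choices): observing B supports inferences about A via Bayes' rule, whereas do(B) severs B from A, leaving A at its prior.

```python
# A causes B: P(A=1) = 0.5, P(B=1|A=1) = 0.9, P(B=1|A=0) = 0.1
P_A = 0.5
P_B_GIVEN_A = {1: 0.9, 0: 0.1}

def p_a_given_b_observed(b):
    """Observation: Bayes' rule gives P(A=1 | B=b)."""
    num = P_A * (P_B_GIVEN_A[1] if b else 1 - P_B_GIVEN_A[1])
    den = num + (1 - P_A) * (P_B_GIVEN_A[0] if b else 1 - P_B_GIVEN_A[0])
    return num / den

def p_a_given_do_b(b):
    """Intervention: do(B=b) disconnects B from its cause A,
    so A retains its prior probability regardless of b."""
    return P_A

print(p_a_given_b_observed(1))  # 0.9: observing B=1 is diagnostic of A
print(p_a_given_do_b(1))        # 0.5: setting B=1 reveals nothing about A
```

The gap between the two printed values is exactly the "nondiagnostic when intervened on" pattern the two studies report.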
15.
Although we live in a complex and multi-causal world, learners often lack sufficient data and/or cognitive resources to acquire a fully veridical causal model. The general goal of making precise predictions with energy-efficient representations suggests a generic prior favoring causal models that include a relatively small number of strong causes. Such “sparse and strong” priors make it possible to quickly identify the most potent individual causes, relegating weaker causes to secondary status or eliminating them from consideration altogether. Sparse-and-strong priors predict that competition will be observed between candidate causes of the same polarity (i.e., generative or else preventive) even if they occur independently. For instance, the strength of a moderately strong cause should be underestimated when an uncorrelated strong cause also occurs in the general learning environment, relative to when a weaker cause also occurs. We report three experiments investigating whether independently-occurring causes (either generative or preventive) compete when people make judgments of causal strength. Cue competition was indeed observed for both generative and preventive causes. The data were used to assess alternative computational models of human learning in complex multi-causal situations.
16.
Martin Tak 《Cognitive Systems Research》2008,9(4):293-311
This article presents a synthetic modeling approach to the problem of grounded construction of concepts. In many computational models of grounded language acquisition and evolution, meanings are created in the process of discrimination between a chosen object and other objects present on the scene of communication. We argue that categories constructed for the purpose of identification rather than discrimination are more suitable for detached language use (talking about things not present here and now). We describe a semantics based on so-called identification criteria constructed by extracting cross-situational similarities among instances of a category, and present several computational models. In the model of individual category construction, the instances are grouped into categories by common motor programs (affordances), while in the model of social learning, focused on the influence of naming on category formation, entities are considered members of the same category if they are labeled with the same word by an external teacher. By these two mechanisms, the learner can construct interactionally grounded representations of objects, properties, relations, changes, complex situations, and events. We also report and analyze simulation results of an experiment focused on the dynamics of meanings in iterated intergenerational transmission.
17.
Mark Steyvers Joshua B. Tenenbaum Eric‐Jan Wagenmakers Ben Blum 《Cognitive Science》2003,27(3):453-489
Information about the structure of a causal system can come in the form of observational data—random samples of the system's autonomous behavior—or interventional data—samples conditioned on the particular values of one or more variables that have been experimentally manipulated. Here we study people's ability to infer causal structure from both observation and intervention, and to choose informative interventions on the basis of observational data. In three causal inference tasks, participants were to some degree capable of distinguishing between competing causal hypotheses on the basis of purely observational data. Performance improved substantially when participants were allowed to observe the effects of interventions that they performed on the systems. We develop computational models of how people infer causal structure from data and how they plan intervention experiments, based on the representational framework of causal graphical models and the inferential principles of optimal Bayesian decision‐making and maximizing expected information gain. These analyses suggest that people can make rational causal inferences, subject to psychologically reasonable representational assumptions and computationally reasonable processing constraints.
18.
Peter A. White 《Cognitive Science》2014,38(1):38-75
It is argued that causal understanding originates in experiences of acting on objects. Such experiences have consistent features that can be used as clues to causal identification and judgment. These are singular clues, meaning that they can be detected in single instances. A catalog of 14 singular clues is proposed. The clues function as heuristics for generating causal judgments under uncertainty and are a pervasive source of bias in causal judgment. More sophisticated clues such as mechanism clues and repeated interventions are derived from the 14. Research on the use of empirical information and conditional probabilities to identify causes has used scenarios in which several of the clues are present, and the use of empirical association information for causal judgment depends on the presence of singular clues. It is the singular clues and their origin that are basic to causal understanding, not multiple instance clues such as empirical association, contingency, and conditional probabilities.
19.
The informativity of a computational model of language acquisition is directly related to how closely it approximates the actual acquisition task, sometimes referred to as the model's cognitive plausibility. We suggest that though every computational model necessarily idealizes the modeled task, an informative language acquisition model can aim to be cognitively plausible in multiple ways. We discuss these cognitive plausibility checkpoints generally and then apply them to a case study in word segmentation, investigating a promising Bayesian segmentation strategy. We incorporate cognitive plausibility by using an age‐appropriate unit of perceptual representation, evaluating the model output in terms of its utility, and incorporating cognitive constraints into the inference process. Our more cognitively plausible model shows a beneficial effect of cognitive constraints on segmentation performance. One interpretation of this effect is as a synergy between the naive theories of language structure that infants may have and the cognitive constraints that limit the fidelity of their inference processes, where less accurate inference approximations are better when the underlying assumptions about how words are generated are less accurate. More generally, these results highlight the utility of incorporating cognitive plausibility more fully into computational models of language acquisition.
20.
《Journal of Cognitive Psychology》2013,25(4):485-506
When two possible causes of an outcome are under consideration, contingency information concerns each possible combination of presence and absence of the two causes with occurrences and nonoccurrences of the outcome. White (2008) proposed that causal judgements in such situations could be predicted by a weighted averaging model integrating these kinds of contingency information. The weights in the model are derived from the hypothesis that causal judgements seek to meet two main aims: accounting for occurrences of the outcome and estimating the strengths of the causes. Here it is shown that the model can explain many but not all relevant published findings. The remainder can be explained by reasoning about interactions between the two causes, by scenario-specific effects, and by variations in cell weight depending on the quantity of available information. An experiment is reported that supports this argument. The review and experimental results support the case for a cognitive model of causal judgement in which different kinds of contingency information are utilised to satisfy particular aims of the judgement process.