Similar articles
20 similar articles found (search time: 15 ms)
1.
Data selection and natural sampling: Probabilities do matter (Total citations: 4; self-citations: 0; citations by others: 4)
Probabilistic accounts of Wason's selection task (Oaksford & Chater, 1994, 1996) are controversial, with some researchers failing to replicate the predicted effects of probability manipulations. This paper reports a single experiment in which participants sampled the data naturally, that is, sequentially. The proportions of possible data types (i.e., cards in the selection task) also reflected the probability manipulation. Other than this procedural difference, the materials were the same as those in Oberauer, Wilhelm, and Rosas-Diaz's (1999) Experiment 3, which failed to show probabilistic effects. Significant probabilistic effects were observed. Moreover, in a comparative model-fitting exercise, a revised version of the information gain model (Hattori, 1999, 2002; Oaksford & Chater, in press-b) was shown to provide better fits to these data than did competing explanations.

2.
We report two experiments testing a central prediction of the probabilistic account of reasoning provided by Oaksford and Chater (2001): Acceptance of standard conditional inferences, card choices in the Wason selection task, and quantifiers chosen for conclusions from syllogisms should vary as a function of the frequency of the concepts involved. Frequency was manipulated by a probability-learning phase preceding the reasoning tasks to simulate natural sampling. The effects predicted by Oaksford and Chater (2001) were not obtained with any of the three paradigms.

3.
Since it first appeared, there has been much research and critical discussion on the theory of optimal data selection as an explanation of Wason’s (1966, 1968) selection task (Oaksford & Chater, 1994). In this paper, this literature is reviewed, and the theory of optimal data selection is reevaluated in its light. The information gain model is first located in the current theoretical debate in the psychology of reasoning concerning dual processes in human reasoning. A model comparison exercise is then presented that compares a revised version of the model with its theoretical competitors. Tests of the novel predictions of the model are then reviewed. This section also reviews experiments claimed not to be consistent with optimal data selection. Finally, theoretical criticisms of optimal data selection are discussed. It is argued either that the revised model accounts for them or that they do not stand up under analysis. It is concluded that some version of the optimal data selection model still provides the best account of the selection task. Consequently, the conclusion of Oaksford and Chater’s (1994) original rational analysis (Anderson, 1990), that people’s hypothesis-testing behavior on this task is rational and well adapted to the environment, still stands.

4.
Green, Over, and Pyne's (1997) paper (hereafter referred to as “GOP”) seems to provide a novel approach to examining probabilistic effects in Wason's selection task. However, in this comment, it is argued that their chosen experimental paradigm confounds most of their results. The task demands of the externalisation procedure (Green, 1995) enforce a correlation between card selections and the probability of finding a counterexample, which was the main finding of GOP's experiments. Consequently GOP cannot argue that their data support Kirby's (1994) proposal that people's normal strategy in the selection task is to seek falsifying evidence. Despite this methodological problem, effects of the probability of the antecedent (p) of a conditional rule, if p then q, predicted by Kirby (1994) and by Oaksford and Chater (1994) were observed, although they were inconsistent between Experiments 1 and 2. Moreover, the probability estimates that GOP collected, which are not vulnerable to that methodological criticism, do support the idea that when P(p) > P(q), participants revise P(p) down as suggested by Oaksford and Chater (1994).

5.
One of the most popular paradigms to use for studying human reasoning involves the Wason card selection task. In this task, the participant is presented with four cards and a conditional rule (e.g., “If there is an A on one side of the card, there is always a 2 on the other side”). Participants are asked which cards should be turned to verify whether or not the rule holds. In this simple task, participants consistently provide answers that are incorrect according to formal logic. To account for these errors, several models have been proposed, one of the most prominent being the information gain model (Oaksford & Chater, Psychological Review, 101, 608–631, 1994). This model is based on the assumption that people independently select cards based on the expected information gain of turning a particular card. In this article, we present two estimation methods to fit the information gain model: a maximum likelihood procedure (programmed in R) and a Bayesian procedure (programmed in WinBUGS). We compare the two procedures and illustrate the flexibility of the Bayesian hierarchical procedure by applying it to data from a meta-analysis of the Wason task (Oaksford & Chater, Psychological Review, 101, 608–631, 1994). We also show that the goodness of fit of the information gain model can be assessed by inspecting the posterior predictives of the model. These Bayesian procedures make it easy to apply the information gain model to empirical data. Supplemental materials may be downloaded along with this article from .
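The expected-information-gain computation at the heart of the model above can be sketched in a few lines. The sketch below is a minimal reconstruction of our own, not the authors' R or WinBUGS code: it assumes the two standard hypotheses of the Oaksford and Chater (1994) analysis (dependence, where P(q|p) = 1, versus independence, where P(q|p) = P(q)) with equal priors, and the illustrative "rarity" parameter values P(p) = 0.1 and P(q) = 0.2 are ours.

```python
import math

def entropy(ps):
    """Shannon entropy (bits) of a discrete distribution."""
    return -sum(p * math.log2(p) for p in ps if p > 0)

def expected_information_gain(a, b):
    """Expected information gain (bits) of turning each card in the Wason
    task for a rule 'if p then q' with P(p) = a, P(q) = b (b > a).
    Two hypotheses, each with prior 0.5: dependence (MD: P(q|p) = 1) and
    independence (MI: P(q|p) = P(q) = b)."""
    # Probability that the hidden side "matches" (shows q for the p/not-p
    # cards, shows p for the q/not-q cards), per visible card and hypothesis.
    likelihoods = {
        'p':     {'MD': 1.0,               'MI': b},
        'not-p': {'MD': (b - a) / (1 - a), 'MI': b},
        'q':     {'MD': a / b,             'MI': a},
        'not-q': {'MD': 0.0,               'MI': a},
    }
    prior = {'MD': 0.5, 'MI': 0.5}
    gains = {}
    for card, lk in likelihoods.items():
        gain = entropy(prior.values())  # prior uncertainty = 1 bit
        for outcome in (True, False):   # hidden side matches / does not
            p_out = sum(prior[h] * (lk[h] if outcome else 1 - lk[h])
                        for h in prior)
            if p_out == 0:
                continue
            posterior = [prior[h] * (lk[h] if outcome else 1 - lk[h]) / p_out
                         for h in prior]
            gain -= p_out * entropy(posterior)
        gains[card] = gain
    return gains

# Illustrative rarity parameters: both p and q are uncommon.
g = expected_information_gain(a=0.1, b=0.2)
```

Under rarity, this ranks the cards p > q > not-q > not-p, mirroring the typical ordering of selection frequencies reported for the task.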

6.
In this article, 41 models of covariation detection from 2 × 2 contingency tables were evaluated against past data in the literature and against data from new experiments. A new model was also included based on a limiting case of the normative phi-coefficient under an extreme rarity assumption, which has been shown to be an important factor in covariation detection (McKenzie & Mikkelsen, 2007) and data selection (Hattori, 2002; Oaksford & Chater, 1994, 2003). The results were supportive of the new model. To investigate its explanatory adequacy, a rational analysis using two computer simulations was conducted. These simulations revealed the environmental conditions and the memory restrictions under which the new model best approximates the normative model of covariation detection in these tasks. They thus demonstrated the adaptive rationality of the new model.  相似文献   
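The limiting case mentioned above can be made concrete. Writing the 2 × 2 cell frequencies as a (p,q), b (p,¬q), c (¬p,q), and d (¬p,¬q), the phi coefficient converges to the d-free form a/√((a+b)(a+c)) as d grows large relative to the other cells, which is one way to read "extreme rarity" of p and q. The example frequencies below are ours, chosen only to show the convergence:

```python
import math

def phi(a, b, c, d):
    """Normative phi coefficient for a 2x2 contingency table with cell
    frequencies a (p,q), b (p,not-q), c (not-p,q), d (not-p,not-q)."""
    denom = math.sqrt((a + b) * (c + d) * (a + c) * (b + d))
    return (a * d - b * c) / denom

def rarity_limit(a, b, c):
    """Limit of phi as d -> infinity: phi -> a / sqrt((a+b)(a+c))."""
    return a / math.sqrt((a + b) * (a + c))

# As the not-p,not-q cell d grows, phi approaches the d-free limit.
for d in (10, 1_000, 100_000):
    print(d, round(phi(10, 5, 3, d), 4), round(rarity_limit(10, 5, 3), 4))
```

The limit drops d entirely, which is why a heuristic of this form can ignore the (typically huge and poorly attended) ¬p¬q cell.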

7.
The optimal data selection model proposed by Oaksford and Chater (1994) successfully formalized Wason's selection task (Wason, 1966). The model, however, involved some questionable assumptions and was also not sufficient as a model of the task because it could not provide quantitative predictions of the card selection frequencies. In this paper, the model was revised to provide quantitative fits to the data. The model can predict the selection frequencies of cards based on a selection tendency function (STF), or conversely, it enables the estimation of subjective probabilities from data. Past experimental data were first re-analysed based on the model. In Experiment 1, the superiority of the revised model was shown. However, when the relationship between antecedent and consequent was forced to deviate from the biconditional form, the model was not supported. In Experiment 2, it was shown that sufficient emphasis on probabilistic information can affect participants' performance. A detailed experimental method to sort participants by probabilistic strategies was introduced. Here, the model was supported by a subgroup of participants who used the probabilistic strategy. Finally, the results were discussed from the viewpoint of adaptive rationality.

8.
Oaksford (2001) considers that the findings of Espino, Santamaría, and García Madruga (2000a) could be explained by the Probability Heuristics Model (PHM) proposed by Chater and Oaksford (1999). He specifically voices three objections, the two main ones being based on the fact that PHM is not a theory about syllogism representation. If this is the case, we consider that PHM cannot explain our data, because most of them were registered before the participants evaluated the conclusion. We argue that only a theory at the representational level can properly explain these data.

9.
Dual-process theories come in many forms. They draw on the distinction between associative, heuristic, tacit, intuitive, or implicit processes (System 1) and rule-based, analytic, explicit processes (System 2). We present the results of contextual manipulations that have a bearing on the supposed primacy of System 1 (Stanovich & West, 2000). Experiment 1 showed that people who evaluated logically valid or invalid conditional inferences under a timing constraint (N = 56) showed a smaller effect of logical validity than did people who were not placed under a timing constraint (N = 44). Experiment 2 similarly showed that stressing the logical constraint that only inferences that follow necessarily are to be endorsed (N = 36) increased the size of the validity effect, as compared to that of participants (N = 33) given the standard instruction to make "logical" inferences. These findings concur with the thesis in dual-processing frameworks that "Rationality-2 processes" (Evans & Over, 1996), "test procedures" (Chater & Oaksford, 1999), or "conclusion validation processes" (Johnson-Laird & Byrne, 1991; Schroyens, Schaeken, & d'Ydewalle, 2001) serve to override the results of System 1 processes.

10.
Barrouillet, P., Gauffroy, C., & Lecas, J.-F. (2008). Psychological Review, 115(3), 760–771; discussion 771–772.
The mental model theory of conditional reasoning presented by P. N. Johnson-Laird and R. M. J. Byrne (2002) has recently been the subject of criticisms (e.g., J. St. B. T. Evans, D. E. Over, & S. J. Handley, 2005). The authors argue that the theoretical conflict can be resolved by differentiating 2 kinds of reasoning, reasoning about possibilities given the truth of assertions and reasoning about the truth of assertions given possibilities. The standard mental model theory accounts for the former kind of reasoning but does not adequately account for the latter, contrary to the suppositional approach favored by J. St. B. T. Evans et al. (2005). The authors thus propose a modified mental model theory of conditionals that reconciles the 2 theoretical approaches. It is demonstrated that this theory is able to explain the key findings that have been opposed to the standard theory by J. St. B. T. Evans et al. and makes new predictions that are empirically verified.

11.
Three experiments examined the influence of a second rule on the pattern of card selections on Wason's selection task. In Experiment 1 participants received a version of the task with a single test rule or one of two versions of the task with the same original test rule together with a second rule. The probability of q was manipulated in the two-rules conditions by varying the size of the antecedent set in the second rule. The results showed a significant suppression of q card and not-p card selections in the alternative-rule conditions, but no difference as a function of antecedent set size. In Experiment 2 the size of the antecedent set in the two-rules conditions was manipulated using the context of a computer printing double-sided cards. The results showed a significant reduction of q card selections in the two-rules conditions, but no effect of p set size. In Experiment 3 the scenario accompanying the rule was manipulated, and it specified a single alternative antecedent or a number of alternative antecedents. The q card selection rates were not affected by the scenario manipulation but again were suppressed by the presence of a second rule. Our results suggest that people make inferences about the unseen side of the cards when engaging with the task and that these inferences are systematically influenced by the presence of a second rule, but are not influenced by the probabilistic characteristics of this rule. These findings are discussed in the context of decision theoretic views of selection task performance (Oaksford & Chater, 1994).

12.
Oaksford and Chater (1994) proposed to analyse the Wason selection task as an inductive instead of a deductive task. Applying Bayesian statistics, they concluded that the cards that participants tend to select are those with the highest expected information gain. Therefore, their choices seem rational from the perspective of optimal data selection. We tested a central prediction from the theory in three experiments: card selection frequencies should be sensitive to the subjective probability of occurrence for individual cards. In Experiment 1, expected frequencies of the p- and the q-card were manipulated independently by concepts referring to large vs. small sets. Although the manipulation had an effect on card selection frequencies, there was only a weak correlation between the predicted and the observed patterns. In the second experiment, relative frequencies of individual cards were manipulated more directly by explicit frequency information. In addition, participants estimated probabilities for the four logical cases and of the conditional statement itself. The experimental manipulations strongly affected the probability estimates, but were completely unrelated to card selections. This result was replicated in a third experiment. We conclude that our data provide little support for optimal data selection theory.

13.
14.
The four dominant theories of reasoning from conditionals are translated into formal models: The theory of mental models (Johnson-Laird, P. N., & Byrne, R. M. J. (2002). Conditionals: a theory of meaning, pragmatics, and inference. Psychological Review, 109, 646-678), the suppositional theory (Evans, J. S. B. T., & Over, D. E. (2004). If. Oxford: Oxford University Press), a dual-process variant of the model theory (Verschueren, N., Schaeken, W., & d'Ydewalle, G. (2005). A dual-process specification of causal conditional reasoning. Thinking & Reasoning, 11, 278-293), and the probabilistic theory (Oaksford, M., Chater, N., & Larkin, J. (2000). Probabilities and polarity biases in conditional inference. Journal of Experimental Psychology: Learning, Memory, and Cognition, 26, 883-899). The first three theories are formalized as multinomial models. The models are applied to the frequencies of patterns of acceptance or rejection across the four basic inferences modus ponens, affirmation of the consequent, denial of the antecedent, and modus tollens. Model fits are assessed for two large data sets, one representing reasoning with abstract, basic conditionals, the other reflecting reasoning with pseudo-realistic causal and non-causal conditionals. The best account of the data was provided by a modified version of the mental-model theory, augmented by directionality, and by the dual-process model.
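The multinomial-modeling approach above can be illustrated with a deliberately simple toy model of our own devising (far cruder than any of the four cited theories): each acceptance/rejection pattern across the four inferences is a multinomial category, and the model's free parameters are fit by maximizing the multinomial log-likelihood. Here a reasoner applies logic with probability r (producing the pattern that endorses only modus ponens and modus tollens) and otherwise endorses each inference independently with probability g; the example counts are hypothetical.

```python
import math

def pattern_probs(r, g):
    """Predicted probability of each of the 2^4 = 16 acceptance patterns
    over (MP, AC, DA, MT) under the toy two-parameter model."""
    probs = {}
    for pattern in range(16):
        bits = [(pattern >> i) & 1 for i in range(4)]  # MP, AC, DA, MT
        p = (1 - r) * math.prod(g if b else 1 - g for b in bits)  # guessing
        if bits == [1, 0, 0, 1]:          # the logically correct pattern
            p += r
        probs[tuple(bits)] = p
    return probs

def log_likelihood(counts, r, g):
    """Multinomial log-likelihood (up to a constant) of observed pattern
    counts under parameters (r, g)."""
    probs = pattern_probs(r, g)
    return sum(n * math.log(probs[pat]) for pat, n in counts.items() if n > 0)

# Hypothetical pattern counts for 100 participants.
counts = {(1, 0, 0, 1): 60, (1, 1, 0, 1): 20, (1, 1, 1, 1): 15, (1, 0, 0, 0): 5}

# Crude grid-search maximum likelihood estimate of (r, g).
best = max(((r / 100, g / 100) for r in range(1, 100) for g in range(1, 100)),
           key=lambda rg: log_likelihood(counts, *rg))
```

In practice, multinomial processing tree models of this kind are fit with dedicated optimizers and compared by fit statistics that penalize parameter count, but the likelihood structure is the same.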

15.
The information gain model (Oaksford & Chater, Psychological Review, 101, 608–631, 1994) advocates that participants attempt to achieve a larger expected information gain when they have to test an if-then rule or hypothesis. However, acquisition of larger expected information gain could also be operational when participants do not have to test a hypothesis. This study devised a new task to investigate whether participants would seek larger expected information gain when they were not required to test a hypothesis. The task required participants to select one out of two balance scales for weighing coins in order to detect an underweight coin. We discovered that participants more frequently selected the balance scale that provided smaller expected information gain. This finding suggests that the preference for larger expected information gain may not apply to non-hypothesis testing settings.
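Expected information gain in a weighing task of this general kind can be made concrete with a hypothetical setup (the numbers and design below are ours, not the article's materials): one underweight coin among n equally likely candidates, and a choice of how many coins to place on each pan. Because each weighing outcome exactly identifies which subset contains the coin, the expected gain of a weighing equals the entropy of its outcome distribution.

```python
import math

def entropy(ps):
    """Shannon entropy (bits) of a discrete distribution."""
    return -sum(p * math.log2(p) for p in ps if p > 0)

def weighing_gain(n, m):
    """Expected information gain (bits) of one weighing that puts m of n
    equally likely underweight-candidate coins on each pan. The three
    outcomes (left lighter, right lighter, balance) occur with
    probabilities m/n, m/n, and (n - 2m)/n, and each outcome narrows the
    candidates to a known subset, so the expected gain is the entropy of
    the outcome distribution."""
    return entropy([m / n, m / n, (n - 2 * m) / n])

# With 9 candidate coins, weighing 3 vs. 3 maximizes the expected gain:
# it splits the hypotheses into equal thirds, yielding log2(3) bits.
gains = {m: weighing_gain(9, m) for m in (1, 2, 3, 4)}
```

An information-gain maximizer would therefore prefer the weighing that partitions the candidates most evenly, which is the benchmark against which participants' scale choices can be scored.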

16.
Using a novel referent size-selection task, MacDonald, Joordens, and Seergobin (1999; MacDonald & Joordens, 2000) found that negative priming persisted even when participants were encouraged to attend to distractors before selectively responding to targets. This finding suggested that negative priming is not caused by processes that operate on stimuli that are to be ignored in the traditional selective attention sense. Mackintosh, Mathews, and Holden's (2002) attempt to replicate the MacDonald et al. study resulted in the discovery of possible artifacts in the referent size-selection task, thereby making the implications with respect to the role of attention less clear. In the present study, we describe a different method for directing attention to distractors in a negative priming context, one that does not suffer from the same potential artifacts as the referent size-selection task. Our results are consistent with those found by MacDonald et al., in that negative priming persisted even when participants were explicitly encouraged to attend to distractors. Implications are discussed in the context of the related concepts of selective attention (e.g., Broadbent, 1965) versus selection for action (e.g., Allport, 1987).

17.
We examine the extent to which retrieval from very long-term autobiographical memory is similar when participants are asked to retrieve from widely differing periods of time. Three groups of 20 participants were given 4 min to recall autobiographical events from the last 5 weeks, 5 months, or 5 years. Following recall, the participants dated their events. Similar retrieval rates, relative recency effects, and relative lag-recency effects were found, despite the fact that the considered time scales varied by a factor of 52. These data are broadly consistent with the principle of recency, the principle of contiguity (Howard & Kahana, 2002), and scale similarity in the rates of recall (Brown, Neath, & Chater, 2007; Maylor, Chater, & Brown, 2001). These findings are taken as support for models of memory that predict time scale similarity in retrieval, such as SIMPLE (Brown et al., 2007) and TCM (Howard & Kahana, 2002).

18.
We report five experiments showing that the activation of the end-terms of a syllogism is determined by their position in the composite model of the premises. We show that it is not determined by the position of the terms in the rule being applied (Ford, 1994), by the syntactic role of the terms in the premises (Polk & Newell, 1995; Wetherick & Gilhooly, 1990), by the type of conclusion (Chater & Oaksford, 1999), or by the terms from the source premise (Stenning & Yule, 1997). In our first experiment we found that after reading a categorical premise, the most active term is the last term in the premise. In Experiments 2, 3, and 4 we demonstrated that this pattern of activity is due to the position of the concepts in the model of the premises, regardless of the delay after reading the premises (150 or 2000 msec) or the quantity of the quantifiers (universal or existential). The fifth experiment showed that the pattern switches around after participants evaluate a conclusion. We propose that the last element in the model maintains a higher level of activity during the comprehension process because it is generally used to attach the incoming information. After this process, the first term becomes more active because it is the concept to which the whole representation is referred. These results are predicted by the mental model theory (Johnson-Laird & Byrne, 1991), but not by the verbal reasoning theory (Polk & Newell, 1995), the graphical methods theory (Yule & Stenning, 1992), the attachment-heuristic theory (Chater & Oaksford, 1999), or the mental rules theory (Ford, 1994).

19.
Espino, Santamaría, and García-Madruga (2000) report three results on the time taken to respond to a probe word occurring as end term in the premises of a syllogistic argument. They argue that these results can only be predicted by the theory of mental models. It is argued that two of these results, on differential reaction times to end-terms occurring in different premises and in different figures, are consistent with Chater and Oaksford's (1999) probability heuristics model (PHM). It is argued that the third finding, on different reaction times between figures, does not address the issue of processing difficulty where PHM predicts no differences between figures. It is concluded that Espino et al.'s results do not discriminate between theories of syllogistic reasoning as effectively as they propose.

20.
Oaksford, Chater, and Larkin (2000) have suggested that people actually use everyday probabilistic reasoning when making deductive inferences. In two studies, we explicitly compared probabilistic and deductive reasoning with identical if-then conditional premises with concrete content. In the first, adults were given causal premises with one strongly associated antecedent and were asked to make standard deductive inferences or to judge the probabilities of conclusions. In the second, reasoners were given scenarios presenting a causal relation with zero to three potential alternative antecedents. The participants responded to each set of problems under both deductive and probabilistic instructions. The results show that deductive and probabilistic inferences are not isomorphic. Probabilistic inferences can model deductive responses only using a limited, very high threshold model, which is equivalent to a simple retrieval model. These results provide a clearer understanding of the relations between probabilistic and deductive inferences and the limitations of trying to consider these two forms of inference as having a single underlying process.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号