Similar documents (20 results found, 15 ms)
2.
The base-rate fallacy in probability judgments
The base-rate fallacy is people's tendency to ignore base rates in favor of, e.g., individuating information (when such is available), rather than integrate the two. This tendency has important implications for understanding judgment phenomena in many clinical, legal, and social-psychological settings. An explanation of this phenomenon is offered, according to which people order information by its perceived degree of relevance, and let high-relevance information dominate low-relevance information. Information is deemed more relevant when it relates more specifically to a judged target case. Specificity is achieved either by providing information on a smaller set than the overall population, of which the target case is a member, or when information can be coded, via causality, as information about the specific members of a given population. The base-rate fallacy is thus the result of pitting what seem to be merely coincidental, therefore low-relevance, base rates against more specific, or causal, information. A series of probabilistic inference problems is presented in which relevance was manipulated with the means described above, and the empirical results confirm the above account. In particular, base rates will be combined with other information when the two kinds of information are perceived as being equally relevant to the judged case.
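
The normative benchmark against which the fallacy is defined is Bayes' theorem, which weights individuating evidence by the base rate. A minimal sketch, using the classic cab-problem numbers (illustrative values, not data from this study):

```python
def bayes_posterior(base_rate, hit_rate, false_alarm_rate):
    """Posterior P(hypothesis | evidence), combining a base rate with
    individuating evidence via Bayes' theorem."""
    p_evidence = hit_rate * base_rate + false_alarm_rate * (1 - base_rate)
    return hit_rate * base_rate / p_evidence

# Classic cab problem: 15% of cabs are blue; a witness identifies
# colors correctly 80% of the time (and so errs 20% of the time).
posterior = bayes_posterior(base_rate=0.15, hit_rate=0.80, false_alarm_rate=0.20)
print(round(posterior, 2))  # 0.41 -- far below the ~0.8 that base-rate neglect produces
```

When the base rate rises to 0.5 the same witness report yields a posterior of 0.8, showing how strongly the base rate should matter.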

3.
Social stereotypes may be defined as beliefs that various traits or acts are characteristic of particular social groups. As such, stereotypic beliefs represent subjective estimates of the frequencies of attributes within social groups, and so should be expected to “behave like” base-rate information within the context of judgments of individuals: specifically, individuating target case information should induce subjects to disregard their own stereotypic beliefs. Although the results of previous research are consistent with this prediction, no studies have permitted normative evaluation of stereotypic judgments. Because the hypothesis equates base rates and stereotypes, normative evaluation is essential for demonstrating equivalence between the base-rate fallacy and neglect of stereotypes in the presence of individuating case information. Two experiments were conducted, allowing for normative evaluation of effects of stereotypes on judgments of individuals. The results confirmed the hypothesis and established the generalizability of the effect across controversial and uncontroversial, socially desirable and socially undesirable stereotypic beliefs. More generally, an examination of the differences between intuitive and normative statistical models of the judgment task suggests that the base-rate fallacy is but one instance of a general characteristic of intuitive judgment processes: namely, the failure to appropriately adjust evaluations of any one cue in the light of concurrent evaluations of other cues.

4.
People often place undue weight on specific sources of information (case cues) and insufficient weight on more global sources (base rates) even when the latter are highly predictive, a phenomenon termed base-rate neglect. This phenomenon was first demonstrated with paper-and-pencil tasks, and also occurs in a matching-to-sample procedure in which subjects directly experience case sample (cue) accuracy and base rates, and in which discrete, nonverbal choices are made. In two nonverbal experiments, subjects were exposed to hundreds of trials in which they chose between two response options that were both probabilistically reinforced. In Experiment 1, following one of two possible samples (the unpredictive sample), either response was reinforced with a .5 probability. The other sample (predictive) provided reinforcement for matching on 80% of the trials in one condition but in only 20% of the trials in another condition. Subjects' choices following the unpredictive sample were determined primarily by the contingencies in effect for the predictive sample: If matching was reinforced following the predictive sample, subjects tended to match the unpredictive sample as well; if countermatching the predictive sample was generally reinforced, subjects tended to countermatch the unpredictive sample. These results demonstrate only weak control by base rates. In Experiment 2, base rates and sample accuracy were simultaneously varied in opposite directions to keep one set of conditional probabilities constant. Subjects' choices were determined primarily by the overall accuracy of the sample, again demonstrating only weak control by base rates. The same pattern of choice occurred whether this pattern increased or decreased rate of reinforcement. Together, the results of the two experiments provide a clear empirical demonstration of base-rate neglect.
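
Experiment 2's manipulation, trading base rate against sample accuracy so that one conditional probability stays fixed, can be sketched with Bayes' theorem (the parameter values below are illustrative, not the study's):

```python
def p_match_correct(sample_accuracy, base_rate):
    """Probability that matching the sample is reinforced, combining
    sample accuracy with the base rate of the sampled option (Bayes)."""
    hit = sample_accuracy * base_rate
    miss = (1 - sample_accuracy) * (1 - base_rate)
    return hit / (hit + miss)

# Varying accuracy and base rate in opposite directions can leave the
# conditional probability of a correct match unchanged:
print(p_match_correct(0.8, 0.5), p_match_correct(0.5, 0.8))  # both ~0.8
```

A Bayesian chooser would respond identically in the two conditions; choices tracking sample accuracy alone (0.8 vs. 0.5) reveal base-rate neglect.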

5.
The gambler's fallacy was examined in terms of grouping processes. The gambler's fallacy is the tendency to erroneously believe that for independent events, recent or repeated instances of an outcome (e.g., a series of "heads" when flipping a coin) will make that outcome less likely on an upcoming trial. Grouping was manipulated such that a critical trial following a run of heads or tails was grouped together with previous trials (i.e., the last trial of "Block 1") or was the first trial of another group (the first trial of "Block 2"). As predicted, the gambler's fallacy was evident when the critical trial was grouped with the previous trials, but not when it was arbitrarily grouped with the next block of trials. Discussion centres on the processes underlying the gambler's fallacy and practical implications of these findings.
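
The independence that the fallacy violates is easy to check by simulation; this sketch estimates P(heads) on trials immediately following a run of heads (parameters are illustrative):

```python
import random

def p_heads_after_run(n_flips=1_000_000, run_len=3, seed=42):
    """Empirical P(heads) on trials that immediately follow a run of heads."""
    rng = random.Random(seed)
    flips = [rng.random() < 0.5 for _ in range(n_flips)]
    follows_run = [flips[i] for i in range(run_len, n_flips)
                   if all(flips[i - k] for k in range(1, run_len + 1))]
    return sum(follows_run) / len(follows_run)

print(round(p_heads_after_run(), 2))  # ~0.5: the preceding run carries no information
```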

6.
This article is concerned with the use of base-rate information that is derived from experience in classifying examples of a category. The basic task involved simulated medical decision making in which participants learned to diagnose hypothetical diseases on the basis of symptom information. Alternative diseases differed in their relative frequency or base rates of occurrence. In five experiments initial learning was followed by a series of transfer tests designed to index the use of base-rate information. On these tests, patterns of symptoms were presented that suggested more than one disease and were therefore ambiguous. The alternative or candidate diseases on such tests could differ in their relative frequency of occurrence during learning. For example, a symptom might be presented that had appeared with both a relatively common and a relatively rare disease. If participants are using base-rate information appropriately (according to Bayes' theorem), then they should be more likely to predict that the common disease is present than that the rare disease is present on such ambiguous tests. Current classification models differ in their predictions concerning the use of base-rate information. For example, most prototype models imply an insensitivity to base-rate information, whereas many exemplar-based classification models predict appropriate use of base-rate information. The results reveal a consistent but complex pattern. Depending on the category structure and the nature of the ambiguous tests, participants use base-rate information appropriately, ignore base-rate information, or use base-rate information inappropriately (predict that the rare disease is more likely to be present). To our knowledge, no current categorization model predicts this pattern of results. To account for these results, a new model is described incorporating the ideas of property or symptom competition and context-sensitive retrieval.
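
The Bayesian prediction for the ambiguous tests can be made concrete as a posterior-odds calculation (the frequencies and symptom likelihoods below are hypothetical, not the study's design values):

```python
def posterior_odds(base_common, base_rare, p_symptom_common, p_symptom_rare):
    """Posterior odds that the common vs. the rare disease is present,
    given a symptom that occurred with both during learning (Bayes' theorem)."""
    return (base_common * p_symptom_common) / (base_rare * p_symptom_rare)

# Hypothetical values: the common disease appeared 3x as often as the
# rare one, and the shared symptom was equally diagnostic of each.
odds = posterior_odds(base_common=0.75, base_rare=0.25,
                      p_symptom_common=0.6, p_symptom_rare=0.6)
print(odds)  # ~3:1 in favor of the common disease
```

With equally diagnostic symptoms the odds reduce to the base-rate ratio, which is why predicting the rare disease on such tests counts as inappropriate base-rate use.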

7.
Observers completed perceptual categorization tasks that included separate base-rate/payoff manipulations, corresponding simultaneous base-rate/payoff manipulations, and conflicting simultaneous base-rate/payoff manipulations. Performance (1) was closer to optimal for 2:1 than for 3:1 base-rate/payoff ratios and when base rates as opposed to payoffs were manipulated, and (2) was more in line with the predictions from the flat-maxima hypothesis than from the independence assumption of the optimal classifier in corresponding and conflicting simultaneous base-rate/payoff conditions. A hybrid model that instantiated simultaneously the flat-maxima and the competition between reward and accuracy maximization (COBRA) hypotheses was applied to the data. The hybrid model was superior to a model that incorporated the independence assumption, suggesting that violations of the independence assumption are to be expected and are well captured by the flat-maxima hypothesis without requiring any additional assumptions. The parameters indicated that observers' reward-maximizing decision criterion rapidly approaches the optimal value and that more weight is placed on accuracy maximization in separate and corresponding simultaneous base-rate/payoff conditions than in conflicting simultaneous base-rate/payoff conditions.
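
The optimal classifier referred to here sets its decision criterion from the base-rate and payoff ratios. A sketch of that standard signal-detection computation, under the simplifying assumption that only correct responses earn points (values illustrative):

```python
def optimal_beta(p_a, p_b, gain_a, gain_b):
    """Optimal likelihood-ratio criterion for a two-category task:
    respond 'A' whenever the likelihood ratio f_A(x)/f_B(x) exceeds beta.
    Assumes payoffs are gains for correct responses only."""
    return (p_b * gain_b) / (p_a * gain_a)

# 3:1 base rates with equal payoffs:
print(round(optimal_beta(p_a=0.75, p_b=0.25, gain_a=1.0, gain_b=1.0), 3))
# 0.333: the criterion shifts to favor the common category A
```

The independence assumption tested in the article says a 3:1 base-rate ratio and a 3:1 payoff ratio should shift the criterion identically, since both enter beta the same way.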

8.
This article investigates differences in the ways that groups and individuals apply information-processing strategies and fall prey to biases in their judgments. Judgments were made on probabilistic inference problems that involved base-rate and case-specific information. Consistent with hypotheses, when individuals neglect base-rate information in their probability judgments, groups accentuate this tendency. Moreover, when the source of case-specific information is inaccurate, individuals neglect the case-specific information, and groups accentuate this tendency with the base-rate information dominating their probability judgments. In addition, groups accentuate the strategies used by individuals to integrate the base-rate and case-specific information. These results provide strong support for a group accentuation tendency for the application of information-processing biases and the strategies used to integrate information. Discussion reflects upon the relationship of the results of this experiment with other research on base-rate neglect and group judgment. Underlying mechanisms and potential moderators of the group accentuation pattern are also discussed.

9.
In three experiments we investigated whether two procedures of acquiring knowledge about the same causal structure, predictive learning (from causes to effects) versus diagnostic learning (from effects to causes), would lead to different base-rate use in diagnostic judgments. Results showed that learners are capable of incorporating base-rate information in their judgments regardless of the direction in which the causal structure is learned. However, this only holds true for relatively simple scenarios. When complexity was increased, base rates were only used after diagnostic learning, but were largely neglected after predictive learning. It could be shown that this asymmetry is not due to a failure of encoding base rates in predictive learning because participants in all conditions were fairly good at reporting them. The findings present challenges for all theories of causal learning.

11.
Exemplar-memory and adaptive network models were compared in application to category learning data, with special attention to base rate effects on learning and transfer performance. Subjects classified symptom charts of hypothetical patients into disease categories, with informative feedback on learning trials and with the feedback either given or withheld on test trials that followed each fourth of the learning series. The network model proved notably accurate and uniformly superior to the exemplar model in accounting for the detailed course of learning; both the parallel, interactive aspect of the network model and its particular learning algorithm contribute to this superiority. During learning, subjects' performance reflected both category base rates and feature (symptom) probabilities in a nearly optimal manner, a result predicted by both models, though more accurately by the network model. However, under some test conditions, the data showed substantial base-rate neglect, in agreement with Gluck and Bower (1988b).
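
Adaptive network models of this kind learn with a least-mean-squares (delta-rule) update. The following is an illustrative sketch of one such update step, not the paper's implementation:

```python
def delta_rule_update(weights, features, outcome, lr=0.05):
    """One least-mean-squares (delta-rule) step: move each weight in
    proportion to the prediction error and the feature's activation."""
    prediction = sum(w * f for w, f in zip(weights, features))
    error = outcome - prediction
    return [w + lr * error * f for w, f in zip(weights, features)]

# One trial: both symptoms present, disease present (outcome = 1).
w = delta_rule_update([0.0, 0.0], [1.0, 1.0], 1.0, lr=0.05)
print(w)  # each weight takes a small step toward predicting the outcome
```

Because the error term is shared across features, frequently co-occurring symptoms compete for weight, which is how such networks come to reflect base rates during learning.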

12.
Performance in a recognition task involving two amplitudes of the same tone was investigated over a wide range of presentation schedules. The task was arranged so that there was no trial-to-trial feedback or other information regarding the relative frequencies of the two tones. The hit and false alarm rates (the proportion of “loud” responses to loud and soft stimuli, respectively) on any given trial were strongly influenced by the stimulus and response on the preceding trial. In general, Ss tended to repeat the last response and were more accurate after a stimulus alternation than after a stimulus repetition. In addition, hit and false alarm rates were inversely related to the presentation probability of the loud tone, in contrast to the direct relation typically found in signal detection experiments and in recognition experiments with trial-to-trial feedback. A mathematical model incorporating three processes (memory, comparison, and decision) was shown to give a good account of these data.
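
Hit and false alarm rates of this kind are conventionally summarized by the signal-detection measures d' (sensitivity) and c (response criterion); a sketch with hypothetical example rates:

```python
from statistics import NormalDist

def dprime_and_c(hit_rate, fa_rate):
    """Signal-detection sensitivity (d') and criterion (c) from the
    proportions of 'loud' responses to loud and soft tones."""
    z = NormalDist().inv_cdf
    d = z(hit_rate) - z(fa_rate)
    c = -0.5 * (z(hit_rate) + z(fa_rate))
    return d, c

d, c = dprime_and_c(hit_rate=0.84, fa_rate=0.16)
print(round(d, 2), round(c, 2))  # d' ~ 2, c ~ 0 (symmetric rates imply no bias)
```

A criterion that tracks presentation probability would shift c, which is how the inverse relation reported here can be expressed in detection-theoretic terms.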

13.
Context variability refers to the number of preexperimental contexts that are associated with concepts. In four experiments, we investigated the basis for increased recognition memory for low context variability words. Low context variability was associated with greater recollection in the hit rates, and high context variability was associated with greater familiarity in the false alarms. Shortening the study time reduced recollection, but low context variability still influenced recollection in the hit rates. A modality change from study to test also reduced recollection but preserved recollective differences for low versus high context variability items. One interpretation of the results suggests that low context variability evokes more specific and, perhaps, idiosyncratic recollective associations during learning and that these associations support better recognition in the hit rates. By contrast, activating the larger number of associations for high context variability items may be mistaken for familiarity in the false alarm rates.

14.
The standard Engineer-Lawyer problem (Kahneman & Tversky, 1973) points to the failure of reasoners to integrate mentioned base-rate information in arriving at likelihood estimates. Research in this area nevertheless has presupposed that participants respect complementarity (i.e., participants ensure that competing estimates add up to 100%). A survey of the literature leads us to doubt this presupposition. We propose that the participants' non-normative performance on the standard Engineer-Lawyer problem reflects a reluctance to view the task probabilistically and that normative responses become more prominent as probabilistic aspects of the task do. In the present experiments, we manipulated two kinds of probabilistic cues and determined the extent to which (1) base rates were integrated and (2) the complementarity constraint was respected. In Experiment 1, six versions of an Engineer-Lawyer-type problem (that varied three levels of cue to complementarity and two base rates) were presented. The results showed that base-rate integration increased as cues to complementarity did. Experiment 2 confirmed that Gigerenzer, Hell, and Blank's (1988) random-draw paradigm facilitates base-rate integration; a second measure revealed that it also prompts respect for complementarity. In Experiment 3, we replicated two of our main findings in one procedure while controlling for the potential influence of extraneous task features. We discuss approaches that describe how probabilistic cues might prompt normative responding.
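
The two normative constraints the authors probe, base-rate integration and complementarity, can be sketched with Bayes' theorem (the diagnosticity values below are hypothetical, chosen only to illustrate the structure of the problem):

```python
def p_engineer(base_rate, p_desc_given_eng, p_desc_given_law):
    """Normative P(engineer | description) via Bayes' theorem; the
    complementary P(lawyer | description) is 1 minus this value, so
    the two competing estimates must sum to 100%."""
    num = base_rate * p_desc_given_eng
    return num / (num + (1 - base_rate) * p_desc_given_law)

# Hypothetical diagnosticity: the description is twice as likely to be
# written about an engineer as about a lawyer.
for base in (0.30, 0.70):
    p = p_engineer(base, 0.5, 0.25)
    print(round(p, 2), round(1 - p, 2))  # each pair sums to 1 (complementarity)
```

A normative responder's estimate moves with the base rate (0.30 vs. 0.70) even though the description is held constant; estimates that ignore the first argument exhibit the classic neglect.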

15.
The effects of competition on performance of a video-formatted task were examined in a series of experiments. Two rhesus monkeys (Macaca mulatta) were trained to manipulate a joystick to shoot at moving targets on a computer screen. The task was made competitive by requiring both animals to shoot at the same target and by rewarding only the animal that hit the target first on each trial. The competitive task produced a significant and robust speed-accuracy trade-off in performance. The monkeys hit the target in significantly less time on contested than on uncontested trials. However, they required significantly more shots to hit the target on contested trials in relation to uncontested trials. This effect was unchanged when various schedules of reinforcement were introduced in the uncontested trials. This supports the influence of competition qua competition on performance, a point further bolstered by other findings of behavioral contrast presented here.

16.
Researchers assume that time pressure impairs performance in decision tasks by invoking heuristic processes. In the present study, the authors inquired (a) whether it was possible in some cases for time pressure to improve performance or to alter it without impairing it, and (b) whether the heuristic invoked by base-rate neglect under direct experience can be identified. They used a probability-learning design in 2 experiments, and they measured the choice proportions after each of 2 possible cues in each experiment. In 1 comparison, time pressure increased predictions of the more likely outcome, which improved performance. In 2 comparisons, time pressure changed the choice proportions without affecting performance. In a 4th comparison, time pressure hindered performance. The choice proportions were consistent with heuristic processing that is based on cue matching rather than on cue accuracy, base rates, or posterior probabilities.

17.
The optimality of perceptual categorization performance under manipulations of category discriminability (i.e., d' level), base rates, and payoffs was examined. Base-rate and payoff manipulations across two category discriminabilities allowed a test of the hypothesis that the steepness of the objective reward function affects performance (i.e., the flat-maxima hypothesis), as well as the hypothesis that observers combine base-rate and payoff information independently. Performance was (1) closer to optimal for the steeper objective reward function, in line with the flat-maxima hypothesis, (2) closer to optimal in base-rate conditions than in payoff conditions, and (3) in partial support of the hypothesis that base-rate and payoff knowledge is combined independently. Implications for current theories of base-rate and payoff learning are discussed.

19.
In three experiments, musically trained and untrained adults listened to three repetitions of a 5-note melodic sequence followed by a final melody with either the same tune as those preceding it or differing in one position by one semitone. In Experiment 1, ability to recognize the final sequence was examined as a function of redundancy at the levels of musical structure in a sequence, contour complexity of transpositions in a trial, and trial context in a session. Within a sequence, tones were related as the major or augmented triad; within a trial, the four sequences began on successively higher notes (simple macrocontour) or on randomly selected notes (complex macrocontour); and within a session, trials were either blocked (all major or all augmented) or mixed (major and augmented randomly selected). Performance was superior for major melodies, for systematic transpositions within a trial (simple macrocontours), for blocked trials, and for musically trained listeners. In Experiment 2, we examined further the effect of macrocontour. Performance on simple macrocontours exceeded that on complex, and excluded the possibility that repetition of the 20-note sequences provided the entire benefit of systematic transposition in Experiment 1. The effect of musical structure (major/augmented) was also replicated. In Experiment 3, listeners provided structure ratings of ascending 20-note sequences from Experiment 2. Ratings on same trials were higher than those on corresponding different trials, in contrast to performance scores for augmented same and different trials in previous experiments. The concept of functional uncertainty was proposed to account for recognition difficulties on augmented same trials. The significant effects of redundancy on all the levels examined confirm the utility of the information-processing framework for the study of melodic sequence perception.

20.
When ritual murder trials reappeared in central Europe in the late nineteenth and early twentieth centuries, they could not be articulated in pre-Reformation language and symbols. Prosecutors, magistrates, trial judges, and police investigators shared an implicit understanding that a new universe of knowledge was in place in which academic experts and practitioners of science defined the boundaries—linguistic and conceptual—of plausible argument and were to be accorded deference. This does not mean that popular beliefs and understandings of Jewish ritual murder suddenly ceased to be disseminated or no longer influenced courtroom proceedings, or that zealous investigators and prosecutors did not pursue their cases armed with a priori assumptions about likely perpetrators and their motives. But cultural material, psychological predispositions, and even narrative accounts built upon eyewitness testimony could never suffice to move either the state to indict or a jury, or a panel of judges, to convict. Whatever nonrational thinking or prejudices may have accompanied it, the modern ritual murder trial was structured by powerful, if implicit, rules of expression and authority: it could only be articulated through the epistemological categories and idioms of a culture that understood itself to be both rational and scientific. What commands our attention, then, in the Tiszaeszlár, Xanten, and other modern ritual murder trials are the processes whereby ritual murder discourse bent—as it were—to the discipline of modernity, as exemplified by the structures and rules of legal procedure, parliamentary politics, mass-circulation journalism, criminology, medicine, and forensic science.
