Similar Documents
1.
Reinforcement of least-frequent sequences of choices
When a pigeon's choices between two keys are probabilistically reinforced, as in discrete-trial probability-learning procedures and in concurrent variable-interval schedules, the bird tends to maximize, or to choose the alternative with the higher probability of reinforcement. In concurrent variable-interval schedules, steady-state matching, which is an approximate equality between the relative frequency of a response and the relative frequency of reinforcement of that response, has previously been obtained only as a consequence of maximizing. In the present experiment, maximizing was impossible. A choice of one of two keys was reinforced only if it formed, together with the three preceding choices, the sequence of four successive choices that had occurred least often. This sequence was determined by a Bernoulli-trials process with parameter p. Each of three pigeons matched when p was ½ or ¼. Therefore, steady-state matching by individual birds is not always a consequence of maximizing. Choice probability varied between successive reinforcements, and sequential statistics revealed dependencies which were adequately described by a Bernoulli-trials process with p depending on the time since the preceding reinforcement.
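The contingency is easy to make concrete in code. Below is a minimal Python sketch under stated assumptions: a Bernoulli chooser stands in for the pigeon, and a choice is reinforced whenever it completes the four-choice window with the lowest count so far. The function name and all parameter values are illustrative, not from the paper.

```python
import random
from itertools import product
from collections import Counter

def simulate_session(p=0.5, n_trials=5000, seq_len=4, seed=0):
    """Sketch of the least-frequent-sequence contingency: a choice is
    reinforced only if it completes the window of seq_len successive
    choices that has occurred least often so far. A Bernoulli chooser
    with P('L') = p stands in for the pigeon."""
    rng = random.Random(seed)
    counts = Counter({s: 0 for s in product('LR', repeat=seq_len)})
    history, reinforcers = [], 0
    for _ in range(n_trials):
        history.append('L' if rng.random() < p else 'R')
        if len(history) >= seq_len:
            window = tuple(history[-seq_len:])
            if counts[window] == min(counts.values()):  # least-frequent window
                reinforcers += 1                        # deliver reinforcement
            counts[window] += 1
    return history.count('L') / n_trials, reinforcers / n_trials

# Choice proportion and obtained reinforcement rate; the point of the
# sketch is only to make the reinforcement rule concrete.
print(simulate_session(p=0.5))
```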

2.
With respect to the influence of postevent information, Schooler and Tanaka (1991) made a useful distinction between composite recollections--in which subjects retrieve "items from both the original and the postevent sources" (p. 97)--and compromise recollections--in which subjects retrieve "at least one feature that cannot be exclusively associated with either the original or the postevent sources, but which reflects some compromise between [the] two" (p. 97). Schooler and Tanaka argued that only the latter constitutes good evidence for blend-memory representations of the CHARM type. As it turns out, Schooler and Tanaka's intuitions (and Metcalfe & Bjork's, initially) are faulty. Compromise recall--defined as a preference for an intervening alternative over either of the actually presented alternatives--is not normally a prediction of CHARM and may not be a prediction of composite-trace models in general. Only under specialized conditions--a systematic displacement of the test alternatives or a systematic shift attributable to assimilation to prior semantic knowledge--will computer simulations of CHARM produce unimodal compromise recollection. Equally surprising is the fact that separate-trace models, under a different set of conditions, can predict compromise recollection.
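For readers unfamiliar with composite traces, here is a minimal numpy sketch of the storage and retrieval operations used in CHARM-style models (circular convolution to store, circular correlation to retrieve). It is a toy illustration of blending, not Metcalfe's full model; the vector dimension and item names are made up.

```python
import numpy as np

def circ_convolve(a, b):
    """Circular convolution: the association operation in CHARM-style models."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def circ_correlate(a, b):
    """Circular correlation: the approximate-inverse retrieval operation."""
    return np.real(np.fft.ifft(np.conj(np.fft.fft(a)) * np.fft.fft(b)))

rng = np.random.default_rng(0)
n = 1024
cue = rng.normal(0, 1 / np.sqrt(n), n)
original = rng.normal(0, 1 / np.sqrt(n), n)   # e.g., the witnessed item
postevent = rng.normal(0, 1 / np.sqrt(n), n)  # e.g., the suggested item

# Composite trace: both associations superimposed in one memory vector.
trace = circ_convolve(cue, original) + circ_convolve(cue, postevent)

# Retrieval with the cue yields a blend of the two stored items:
retrieved = circ_correlate(cue, trace)
print(np.dot(retrieved, original), np.dot(retrieved, postevent))
# Both similarities are positive, i.e., the output is a composite;
# whether that ever surfaces as unimodal "compromise" recall is
# exactly what the paper shows requires specialized conditions.
```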

3.
Colin Howson. Synthese, 2007, 156(3): 491–512
Many people regard utility theory as the only rigorous foundation for subjective probability, and even de Finetti thought the betting approach supplemented by Dutch Book arguments only good as an approximation to a utility-theoretic account. I think that there are good reasons to doubt this judgment, and I propose an alternative, in which the probability axioms are consistency constraints on distributions of fair betting quotients. The idea itself is hardly new: it is in de Finetti and also Ramsey. What is new is that it is shown that probabilistic consistency and consequence can be defined in a way formally analogous to the way these notions are defined in deductive (propositional) logic. The result is a free-standing logic which does not pretend to be a theory of rationality and is therefore immune to, among other charges, that of “logical omniscience”.
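The Dutch Book side of the story is a few lines of arithmetic. The sketch below is the standard sure-loss argument, not Howson's consistency semantics, and the helper name is hypothetical: for betting quotients on an exhaustive partition, a bettor pays q_i per unit stake on each cell and collects the stake on the one cell that occurs.

```python
def dutch_book_loss(quotients, stake=1.0):
    """For betting quotients q_i on an exhaustive partition, a bettor
    who accepts a unit bet on every cell pays stake * sum(q_i) and
    collects stake on exactly one cell, so the net payoff is
    stake * (1 - sum(q_i)) in every possible world."""
    return stake * (1 - sum(quotients))

# Incoherent quotients (they sum to 1.1): a guaranteed loss of 0.1 per
# unit stake, whichever outcome occurs; if the sum were below 1 the
# opponent would simply take the other side of every bet.
print(dutch_book_loss([0.5, 0.4, 0.2]))  # -> -0.1
```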

4.
Dennis Dieks. Synthese, 2007, 156(3): 427–439
According to the Doomsday Argument we have to rethink the probabilities we assign to a soon or not so soon extinction of mankind when we realize that we are living now, rather early in the history of mankind. Sleeping Beauty finds herself in a similar predicament: on learning the date of her first awakening, she is asked to re-evaluate the probabilities of her two possible future scenarios. In connection with Doom, I argue that it is wrong to assume that our ordinary probability judgements do not already reflect our place in history: we justify the predictive use we make of the probabilities yielded by science (or other sources of information) by our knowledge of the fact that we live now, a certain time before the possible occurrence of the events the probabilities refer to. Our degrees of belief should change drastically when we forget the date—importantly, this follows without invoking the “Self Indication Assumption”. Subsequent conditionalization on information about which year it is cancels this probability shift again. The Doomsday Argument is about such probability shifts, but tells us nothing about the concrete values of the probabilities—for these, experience provides the only basis. Essentially the same analysis applies to the Sleeping Beauty problem. I argue that Sleeping Beauty “thirders” should be committed to thinking that the Doomsday Argument is ineffective; whereas “halfers” should agree that doom is imminent—but they are wrong.
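For orientation, here is the textbook version of the Doomsday shift under self-sampling, stated only to fix notation; it is not Dieks's own analysis, and the two hypothesized population sizes are illustrative.

```latex
% Two hypotheses about the total number of humans ever born, N_1 and N_2,
% updated on one's birth rank r under self-sampling:
\[
P(N_i \mid r) = \frac{P(r \mid N_i)\,P(N_i)}{\sum_j P(r \mid N_j)\,P(N_j)},
\qquad P(r \mid N_i) = \frac{1}{N_i} \quad (r \le N_i).
\]
% With equal priors on N_1 = 10^{11} ("doom soon") and N_2 = 10^{13}
% ("doom late"), learning any admissible rank shifts the odds by a
% factor of N_2 / N_1 = 100 in favour of doom soon:
\[
\frac{P(N_1 \mid r)}{P(N_2 \mid r)}
= \frac{1/N_1}{1/N_2}\cdot\frac{P(N_1)}{P(N_2)} = 100 \cdot 1 .
\]
```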

5.
Most theories of probability judgment assume that judgments are made by comparing the strength of a focal hypothesis relative to the strength of alternative hypotheses. In contrast, research suggests that frequency judgments are assessed using a non-comparative process; the strength of the focal hypothesis is assessed without comparing it to the strength of alternative hypotheses. We tested this distinction between probability and frequency judgments using the alternative outcomes paradigm (Windschitl, Young, & Jenson, 2002). Assuming that judgments of probability (but not judgments of frequency) entail comparing the focal hypothesis with alternative hypotheses, we hypothesized that probability judgments would be sensitive to the distribution of the alternative hypotheses and would be negatively correlated with individual differences in working memory (WM) capacity. In contrast, frequency judgments should be unrelated to the distribution of the alternatives and uncorrelated with WM capacity. Results supported the hypotheses.

6.
7.
Findings from two experiments indicate that probability matching in sequential choice arises from an asymmetry in strategy availability: The matching strategy comes readily to mind, whereas a superior alternative strategy, maximizing, does not. First, compared with the minority who spontaneously engage in maximizing, the majority of participants endorse maximizing as superior to matching in a direct comparison when both strategies are described. Second, when the maximizing strategy is brought to their attention, more participants subsequently engage in maximizing. Third, matchers are more likely than maximizers to base decisions in other tasks on their initial intuitions, suggesting that they are more inclined to use a choice strategy that comes to mind quickly. These results indicate that a substantial subset of probability matchers are victims of “underthinking” rather than “overthinking”: They fail to engage in sufficient deliberation to generate a superior alternative to the matching strategy that comes so readily to mind.
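The gap between the two strategies is plain arithmetic. The worked example below is the standard calculation, not taken from these experiments, and assumes the better option pays off with probability p on each trial.

```latex
% Expected accuracy of the two strategies on a binary prediction task:
\[
\text{maximizing accuracy} = p, \qquad
\text{matching accuracy} = p^2 + (1-p)^2 .
\]
% For p = 0.75: maximizing is correct 75\% of the time, whereas matching
% is correct only 0.75^2 + 0.25^2 = 0.625 of the time.
```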

8.
Undermatching and overmatching as deviations from the matching law
A model of performance under concurrent variable-interval reinforcement schedules that takes as its starting point the hypothetical “burst” structure of operant responding is presented. Undermatching and overmatching are derived from two separate, and opposing, tendencies. The first is a tendency to allocate a certain proportion of response bursts randomly to a response alternative without regard for the rate of reinforcement it provides, others being allocated according to the simple matching law. This produces undermatching. The second is a tendency to prolong response bursts that have a high probability of initiation relative to those for which initiation probability is lower. This process produces overmatching. A model embodying both tendencies predicts (1) that undermatching will be more common than overmatching, (2) that overmatching, when it occurs, will tend to be of limited extent. Both predictions are consistent with available data. The model thus accounts for undermatching and overmatching deviations from the matching law in terms of additional processes added on to behavior allocation obeying the simple matching relation. Such a model thus enables processes that have been hypothesized to underlie matching, such as some type of reinforcement rate or probability optimization, to remain as explanatory mechanisms even though the simple matching law may not generally be obeyed.
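For context, undermatching and overmatching are conventionally quantified with the generalized matching law (Baum, 1974); the burst model above offers a mechanism for why the sensitivity parameter typically falls below 1.

```latex
% Generalized matching law: B_i are response rates, R_i reinforcement
% rates, a is sensitivity, and b is bias.
\[
\log\frac{B_1}{B_2} = a \,\log\frac{R_1}{R_2} + \log b .
\]
% a < 1: undermatching;  a = 1: strict matching;  a > 1: overmatching.
```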

9.
In this paper a theory of finitistic and frequentistic approximations — in short: f-approximations — of probability measures P over a countably infinite outcome space N is developed. The family of subsets of N for which f-approximations converge to a frequency limit forms a pre-Dynkin system D. The limiting probability measure over D can always be extended to a probability measure over the power set of N, but this measure is not always σ-additive. We conclude that probability measures can be regarded as idealizations of limiting frequencies if and only if σ-additivity is not assumed as a necessary axiom for probabilities. We prove that σ-additive probability measures can be characterized in terms of so-called canonical and in terms of so-called full f-approximations. We also show that every non-σ-additive probability measure is f-approximable, though neither canonically nor fully f-approximable. Finally, we transfer our results to probability measures on open or closed formulas of first-order languages.
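A standard example, not from the paper itself, shows why limiting frequencies clash with σ-additivity: natural density on the positive integers.

```latex
% Natural density of a set A of positive integers:
\[
d(A) = \lim_{n\to\infty}\frac{|A \cap \{1,\dots,n\}|}{n}.
\]
% d is finitely additive where defined, and d(\{k\}) = 0 for every
% singleton \{k\}; yet
\[
d(\mathbb{N}) = 1 \ne \sum_{k} d(\{k\}) = 0,
\]
% so sigma-additivity fails for the frequency-limit measure.
```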

10.
Three experiments examined the effects of opportunities for an alternative response (drinking) on positive behavioral contrast of rats' food-reinforced bar pressing. In both Experiments 1 and 2 the baseline multiple variable-interval schedules were rich (variable-interval 10 s), and contrast was examined both with and without a water bottle present. In Experiment 1, the rats were not water deprived. When one component of the multiple schedule was changed to extinction, the rate of bar pressing increased in the constant component (positive behavioral contrast). The magnitude of contrast was larger when the bottle was absent than when it was present, as predicted by the matching law. Drinking did not shift from the constant variable-interval component to the extinction component, as might have been expected from competition theory. In Experiment 2, the rats were water deprived. Contrast was larger when the bottle was present than when it was absent, and drinking did shift to the extinction component, as predicted by competition theory. In Experiment 3, water-deprived rats responded on leaner multiple variable-interval schedules (60 s) in the presence of a water bottle. When one component was changed to extinction, contrast did not occur, and drinking did not shift to the extinction component. The present results suggest that there are at least two different sources of behavioral contrast: “competitive” contrast, observed when an alternative response occurs with high probability, and “noncompetitive” contrast, observed when an alternative response occurs with low probability. The results, in conjunction with earlier studies, also suggest that the form of the alternative response and the rate of food reinforcement provided by the multiple schedule combine to determine the amount of contrast.

11.
Recent memory theory has emphasized the concept of need probability—that is, the probability that a given piece of learned information will be tested at some point in the future. It has been proposed that, in real-world situations, need probability declines over time and that the memory-loss rate is calibrated to match the progressive reduction in need probability (J. R. Anderson & Schooler, 1991). The present experiments were designed to examine the influence of the slope of the need-probability curve on the slope of the retention curve. On each of several trials, subjects memorized a list of digits, then retained the digits in memory for 1, 2, 4, 8, or 16 sec. Some trials ended with a recall test; other trials ended with the message, “no test.” In Experiment 1, the likelihood of encountering a memory test (i.e., the need probability) was made to either increase or decrease as the retention interval increased; in Experiment 2, need probability either was flat (invariant across retention intervals) or decreased as the retention interval increased. The results indicated that the shape of the need-probability curve influenced the slope of the retention curve (Experiment 1) and that the effect became larger as the experimental session progressed (Experiment 2). The findings support the notion that memory adapts to need probabilities and that the rate of forgetting is influenced by the slope of the need-probability curve. In addition, all of the forgetting curves approximated a power function, suggesting that need probability influences the slope but not the form of forgetting.
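A minimal sketch of the power-function fit mentioned in the final sentence: fitting R(t) = a·t^(−b) by linear regression in log-log coordinates. The recall proportions below are invented for illustration (only the retention intervals come from the experiments), and this ordinary least-squares fit stands in for whatever procedure the authors used.

```python
import numpy as np

# Retention intervals from the experiments (seconds) and made-up
# recall proportions, used only to illustrate the fitting step.
t = np.array([1, 2, 4, 8, 16], dtype=float)
recall = np.array([0.95, 0.88, 0.80, 0.72, 0.66])

# Power law R(t) = a * t**(-b) is linear in log-log coordinates:
# log R = log a - b * log t.
slope, intercept = np.polyfit(np.log(t), np.log(recall), 1)
a, b = np.exp(intercept), -slope
print(f"R(t) = {a:.3f} * t^(-{b:.3f})")
# A steeper need-probability decline would show up as a larger b,
# with the functional form itself unchanged.
```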

12.
The reference class problem is your problem too
Alan Hájek. Synthese, 2007, 156(3): 563–585
The reference class problem arises when we want to assign a probability to a proposition (or sentence, or event) X, which may be classified in various ways, yet its probability can change depending on how it is classified. The problem is usually regarded as one specifically for the frequentist interpretation of probability and is often considered fatal to it. I argue that versions of the classical, logical, propensity and subjectivist interpretations also fall prey to their own variants of the reference class problem. Other versions of these interpretations apparently evade the problem. But I contend that they are all “no-theory” theories of probability: accounts that leave it quite obscure why probability should function as a guide to life, a suitable basis for rational inference and action. The reference class problem besets those theories that are genuinely informative and that plausibly constrain our inductive reasonings and decisions. I distinguish a “metaphysical” and an “epistemological” reference class problem. I submit that we can dissolve the former problem by recognizing that probability is fundamentally a two-place notion: conditional probability is the proper primitive of probability theory. However, I concede that the epistemological problem remains.

13.
A row (or column) of an n×n matrix complies with Regular Minimality (RM) if it has a unique minimum entry which is also a unique minimum entry in its column (respectively, row). The number of violations of RM in a matrix is defined as the number of rows (equivalently, columns) that do not comply with RM. We derive a formula for the proportion of n×n matrices with a given number of violations of RM among all n×n matrices with no tied entries. The proportion of matrices with no more than a given number of violations can be treated as the p-value of a permutation test whose null hypothesis states that all permutations of the entries of a matrix without ties are equiprobable, and the alternative hypothesis states that RM violations occur with lower probability than predicted by the null hypothesis. A matrix with ties is treated as being represented by all matrices without ties that have the same set of strict inequalities among their entries.
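Both the violation count and the test are straightforward to sketch. In the Python sketch below (function names hypothetical), a Monte Carlo estimate stands in for the closed-form proportion the paper derives; it assumes a matrix with no tied entries, so every row minimum is automatically unique.

```python
import numpy as np

def rm_violations(m):
    """Count rows of a square matrix (no ties assumed) that violate
    Regular Minimality: a row complies if its minimum entry is also
    the minimum of its column."""
    bad = 0
    for i in range(m.shape[0]):
        j = int(np.argmin(m[i]))      # column holding the row minimum
        if np.argmin(m[:, j]) != i:   # not also the column minimum
            bad += 1
    return bad

def permutation_p_value(m, n_perm=10_000, seed=0):
    """Monte Carlo stand-in for the paper's closed-form proportion:
    the fraction of random entry-permutations showing no more
    violations than the observed matrix."""
    rng = np.random.default_rng(seed)
    observed = rm_violations(m)
    flat = m.ravel()
    hits = sum(
        rm_violations(rng.permutation(flat).reshape(m.shape)) <= observed
        for _ in range(n_perm)
    )
    return hits / n_perm

m = np.random.default_rng(1).random((5, 5))
print(rm_violations(m), permutation_p_value(m))
```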

14.
A set of experiments on immediate probed recognition of digit triples is reported in which the variables were list length (five, six, seven, or eight triples), the probability that a probe was old (.33, .5, or .67), and whether the digit triples were presented with an auditory component or articulatory suppression. Previous work had suggested that the false alarm (FA) rate in this paradigm was lower when auditory information was available than when it was not; this observation had led to the development of the partial matching theory of immediate probed recognition, according to which FAs could arise not only as a result of unlucky guesses but also when new probes shared a first digit in common with a partially retained target triple. It was argued that partial memory representations were less likely following auditory presentation than following articulatory suppression. Partial matching theory is contrasted with the rational response theory, according to which all FAs are unlucky guesses; partial matching theory gave a better account of the present experimental data than did rational response theory. However, a logical relationship between the two theories was suggested, a consequence of which was that rational response theory could be modified to include partial matching in such a way as to account for mirror effects, not only in unusually difficult immediate probed recognition tasks, but also in the more commonly studied mixed test list paradigm involving words of high or low frequency.

15.
16.
17.
Numerous studies have found that likelihood judgment typically exhibits subadditivity in which judged probabilities of events are less than the sum of judged probabilities of constituent events. Whereas traditional accounts of subadditivity attribute this phenomenon to deterministic sources, this paper demonstrates both formally and empirically that subadditivity is systematically influenced by the stochastic variability of judged probabilities. First, making rather weak assumptions, we prove that regressive error (or variability) in mapping covert probability judgments to overt responses is sufficient to produce subadditive judgments. Experiments follow in which participants provided repeated probability estimates. The results support our model assumption that stochastic variability is regressive in probability estimation tasks and show the contribution of such variability to subadditivity. The theorems and the experiments focus on within-respondent variability, but most studies use between-respondent designs. Numerical simulations extend the work to contrast within- and between-respondent measures of subadditivity. Methodological implications of all the results are discussed, emphasizing the importance of taking stochastic variability into account when estimating the role of other factors (such as the availability bias) in producing subadditive judgments.
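The mechanism is easy to demonstrate by simulation. The sketch below assumes one concrete instance of a regressive response mapping (linear pull toward the scale midpoint plus clipped noise, with illustrative parameter values); the paper's theorems require only weak assumptions about this mapping.

```python
import numpy as np

rng = np.random.default_rng(0)

def overt(p, lam=0.7, sd=0.1, n=10_000):
    """Map a covert probability p to the mean overt response under a
    regressive error model: pull toward the midpoint 0.5, add noise,
    clip to the [0, 1] response scale."""
    r = lam * p + (1 - lam) * 0.5 + rng.normal(0, sd, n)
    return np.clip(r, 0, 1).mean()

parts = [0.10, 0.15, 0.25]   # covert judgments of the component events
whole = sum(parts)           # covert judgment of the packed event (0.50)

judged_parts = sum(overt(p) for p in parts)
judged_whole = overt(whole)
print(judged_parts, judged_whole)
# The summed component judgments exceed the judgment of the whole
# event even though the covert judgments are perfectly additive:
# regressive error alone produces subadditivity.
```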

18.
This work investigates the nature of two distinct response patterns in a probabilistic truth table evaluation task, in which people estimate the probability of a conditional on the basis of frequencies of the truth table cases. The conditional-probability pattern reflects an interpretation of conditionals as expressing a conditional probability. The conjunctive pattern suggests that some people treat conditionals as conjunctions, in line with a prediction of the mental-model theory. Experiments 1 and 2 rule out two alternative explanations of the conjunctive pattern. It does not arise from people believing that at least one case matching the conjunction of antecedent and consequent must exist for a conditional to be true, and it does not arise from people adding the converse to the given conditional. Experiment 3 establishes that people's response patterns in the probabilistic truth table task are very consistent across different conditionals, and that the two response patterns generalize to conditionals with negated antecedents and consequents. Individual differences in rating the probability of a conditional were loosely correlated with corresponding response patterns in a classical truth table evaluation task, but there was little association with people's evaluation of deductive inferences from conditionals as premises. A theoretical framework is proposed that integrates elements from the conditional-probability view with the theory of mental models.
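A worked instance makes the two patterns concrete; the truth-table counts below are invented for illustration.

```latex
% Frequency display for "if A then C" with counts
% |AC| = 3, |A \neg C| = 1, |\neg A\, C| = 4, |\neg A \neg C| = 2:
\[
\text{conditional-probability pattern: } P(C \mid A) = \tfrac{3}{3+1} = 0.75,
\qquad
\text{conjunctive pattern: } P(A \wedge C) = \tfrac{3}{10} = 0.30 .
\]
```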

19.
While there is evidence that talker-specific details are encoded in the phonetics of the lexicon (Kraljic, Samuel, & Brennan, Psychological Science, 19(4): 332–338, 2008; Logan, Lively, & Pisoni, Journal of the Acoustical Society of America, 89(2): 874–886, 1991) and in sentence processing (Nygaard & Pisoni, Perception & Psychophysics, 60(3): 355–376, 1998), it is unclear whether categorical linguistic patterns are also represented in terms of talker-specific details. The present study provides evidence that adult learners form talker-independent representations for productive linguistic patterns. Participants were able to generalize a novel linguistic pattern to unfamiliar talkers. Learners were exposed to spoken words that conformed to a pattern in which vowels of a word agreed in place of articulation, referred to as vowel harmony. All items were presented in the voice of one single talker. Participants were tested on items that included both the familiar talker and an unfamiliar talker. Participants generalized the pattern to novel talkers when the talkers spoke with a familiar accent (Experiment 1), as well as with an unfamiliar accent (Experiment 2). Learners showed a small advantage for talker familiarity when the words were familiar, but not when the words were novel. These results are consistent with a theory of language processing in which the lexicon stores fine-grained, talker-specific phonetic details, but productive linguistic processes are subject to abstract, talker-independent representations.

20.
A statistical manifold Mμ consists of positive functions f such that f dμ defines a probability measure. In order to define an atlas on the manifold, it is viewed as an affine space associated with a subspace of the Orlicz space L^Φ. This leads to a functional equation whose solution, after imposing the linearity constraint in line with the vector space assumption, gives rise to a general form of mappings between the affine probability manifold and the vector (Orlicz) space. These results generalize the exponential statistical manifold and clarify some foundational issues in non-parametric information geometry.
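For orientation, the exponential statistical manifold that this construction generalizes uses charts of the Pistone–Sempi form; the formula below is stated from the standard literature on non-parametric information geometry, not taken from the paper itself.

```latex
% Chart of the exponential statistical manifold at a reference density
% f_0: u is a centred element of the Orlicz space L^{\Phi}, and
% K_{f_0} is the cumulant generating functional.
\[
u \longmapsto f = e^{\,u - K_{f_0}(u)}\, f_0 ,
\qquad
K_{f_0}(u) = \log \int e^{u} f_0 \, d\mu .
\]
```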
