Similar Documents
20 similar documents found (search time: 31 ms)
1.
The interresponse-time reinforcement contingencies and distributions of interreinforcement intervals characteristic of certain variable-interval schedules were mimicked by reinforcing each key peck with a probability equal to the duration of the interresponse time it terminated, divided by the scheduled mean interreinforcement interval. The interresponse-time reinforcement contingency was then eliminated by basing the probability of reinforcement on the fifth interresponse time preceding the key peck. Even though distributions of interreinforcement intervals were unaffected by this manipulation, response rates consistently increased. A second experiment replicated this effect and showed it to combine additively with that of mean reinforcement rate. These results provide strong support for the contention that current analyses of variable-interval response rates that ignore the inherent interresponse-time reinforcement contingency may be seriously in error.
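A minimal sketch of the reported procedure in Python. The helper name `simulate_schedule`, the exponential IRT distribution, and the parameter values are illustrative assumptions, not taken from the article:

```python
import random

def simulate_schedule(irts, mean_interval, lag=0):
    """Reinforce each response with probability equal to the duration of an
    interresponse time divided by the scheduled mean interreinforcement
    interval.  With lag=0 the just-terminated IRT is used (the standard
    contingency); with lag=5 the fifth preceding IRT is used, which removes
    the IRT-reinforcement contingency while leaving the distribution of
    interreinforcement intervals unchanged."""
    reinforced = []
    for i, irt in enumerate(irts):
        basis = irts[i - lag] if i >= lag else irt
        p = min(1.0, basis / mean_interval)
        reinforced.append(random.random() < p)
    return reinforced

random.seed(1)
irts = [random.expovariate(1.0) for _ in range(100_000)]  # mean IRT = 1 s
out = simulate_schedule(irts, mean_interval=60.0)         # VI 60-s-like
rate = sum(out) / len(out)  # reinforcement probability per response
```

The expected per-response reinforcement probability is mean IRT / mean interval, here roughly 1/60, regardless of the lag; what the lag changes is which IRT the payoff is contingent on.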

2.
The reinforcement of least-frequent interresponse times   (cited 4 times: 4 self-citations, 0 by others)
A new schedule of reinforcement was used to maintain key-pecking by pigeons. The schedule reinforced only pecks terminating interresponse times which occurred least often relative to the exponential distribution of interresponse times to be expected from an ideal random generator. Two schedule parameters were varied: (1) the rate constant of the controlling exponential distribution and (2) the probability that a response would be reinforced, given that it met the interresponse-time contingency. Response rate changed quickly and markedly with changes in the rate constant; it changed only slightly with a fourfold change in the reinforcement probability. The schedule produced stable rates and high intra- and inter-subject reliability, yet interresponse time distributions were approximately exponential. Such local interresponse time variability in the context of good overall control suggests that the schedule may be used to generate stable, predictable, yet sensitive baseline rates. Implications for the measurement of rate are discussed.

3.
This article considers procedures for combining individual probability distributions that belong to some “family” into a “group” probability distribution that belongs to the same family. The procedures considered are Vincentizing, in which quantiles are averaged across distributions; generalized Vincentizing, in which the quantiles are transformed before averaging; and pooling based on the distribution function or the probability density function. Some of these results are applied to models of reaction time in psychological experiments.
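Vincentizing, as described above, can be sketched in a few lines: compute each subject's quantiles at a common set of probabilities, then average across subjects. The `vincentize` helper and the exponential test data are illustrative assumptions:

```python
import numpy as np

def vincentize(samples_per_subject, probs):
    """Vincent averaging: each subject's quantiles at the given probabilities
    are computed, then averaged across subjects to give group quantiles."""
    q = np.array([np.quantile(s, probs) for s in samples_per_subject])
    return q.mean(axis=0)

rng = np.random.default_rng(0)
# Three "subjects" whose RT-like distributions share a family (exponential)
# but differ in scale.
subjects = [rng.exponential(scale, 500) for scale in (0.8, 1.0, 1.2)]
probs = np.linspace(0.1, 0.9, 9)
group_q = vincentize(subjects, probs)  # group quantile function
```

The group distribution defined by these averaged quantiles stays within the same family when the family is closed under the averaging, which is the property the article examines.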

4.
Listeners are exquisitely sensitive to fine-grained acoustic detail within phonetic categories for sounds and words. Here we show that this sensitivity is optimal given the probabilistic nature of speech cues. We manipulated the probability distribution of one probabilistic cue, voice onset time (VOT), which differentiates word initial labial stops in English (e.g., "beach" and "peach"). Participants categorized words from distributions of VOT with wide or narrow variances. Uncertainty about word identity was measured by four-alternative forced-choice judgments and by the probability of looks to pictures. Both measures closely reflected the posterior probability of the word given the likelihood distributions of VOT, suggesting that listeners are sensitive to these distributions.

5.
We describe and test quantile maximum probability estimator (QMPE), an open-source ANSI Fortran 90 program for response time distribution estimation. QMPE enables users to estimate parameters for the ex-Gaussian and Gumbel (1958) distributions, along with three "shifted" distributions (i.e., distributions with a parameter-dependent lower bound): the Lognormal, Wald, and Weibull distributions. Estimation can be performed using either the standard continuous maximum likelihood (CML) method or quantile maximum probability (QMP; Heathcote & Brown, in press). We review the properties of each distribution and the theoretical evidence showing that CML estimates fail for some cases with shifted distributions, whereas QMP estimates do not. In cases in which CML does not fail, a Monte Carlo investigation showed that QMP estimates were usually as good, and in some cases better, than CML estimates. However, the Monte Carlo study also uncovered problems that can occur with both CML and QMP estimates, particularly when samples are small and skew is low, highlighting the difficulties of estimating distributions with parameter-dependent lower bounds.
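QMPE itself is a Fortran program, but the CML baseline it is compared against can be sketched for the ex-Gaussian case using SciPy, whose `exponnorm` distribution parameterizes the ex-Gaussian as K = tau / sigma, loc = mu, scale = sigma. The parameter values below are illustrative assumptions:

```python
import numpy as np
from scipy.stats import exponnorm

# Simulate ex-Gaussian RTs: a Gaussian component (mu, sigma) plus an
# exponential component with mean tau.
rng = np.random.default_rng(2)
mu, sigma, tau = 0.4, 0.05, 0.15
rts = rng.normal(mu, sigma, 2000) + rng.exponential(tau, 2000)

# Continuous maximum likelihood fit; scipy returns (K, loc, scale).
K, loc, scale = exponnorm.fit(rts)
tau_hat = K * scale  # recover tau from scipy's parameterization
```

The ex-Gaussian has no parameter-dependent lower bound, so CML behaves well here; the failures the abstract describes concern the shifted Lognormal, Wald, and Weibull families, where the likelihood can diverge as the shift approaches the smallest observation.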

6.
7.
We describe and test quantile maximum probability estimator (QMPE), an open-source ANSI Fortran 90 program for response time distribution estimation. QMPE enables users to estimate parameters for the ex-Gaussian and Gumbel (1958) distributions, along with three “shifted” distributions (i.e., distributions with a parameter-dependent lower bound): the Lognormal, Wald, and Weibull distributions. Estimation can be performed using either the standard continuous maximum likelihood (CML) method or quantile maximum probability (QMP; Heathcote & Brown, in press). We review the properties of each distribution and the theoretical evidence showing that CML estimates fail for some cases with shifted distributions, whereas QMP estimates do not. In cases in which CML does not fail, a Monte Carlo investigation showed that QMP estimates were usually as good, and in some cases better, than CML estimates. However, the Monte Carlo study also uncovered problems that can occur with both CML and QMP estimates, particularly when samples are small and skew is low, highlighting the difficulties of estimating distributions with parameter-dependent lower bounds.

8.
In this article we present symmetric diffusion networks, a family of networks that instantiate the principles of continuous, stochastic, adaptive and interactive propagation of information. Using methods of Markovian diffusion theory, we formalize the activation dynamics of these networks and then show that they can be trained to reproduce entire multivariate probability distributions on their outputs using the contrastive Hebbian learning rule (CHL). We show that CHL performs gradient descent on an error function that captures differences between desired and obtained continuous multivariate probability distributions. This allows the learning algorithm to go beyond expected values of output units and to approximate complete probability distributions on continuous multivariate activation spaces. We argue that learning continuous distributions is an important task underlying a variety of real-life situations that were beyond the scope of previous connectionist networks. Deterministic networks, such as backpropagation networks, cannot learn this task because they are limited to learning average values of independent output units. Previous stochastic connectionist networks could learn probability distributions but they were limited to discrete variables. Simulations show that symmetric diffusion networks can be trained with the CHL rule to approximate discrete and continuous probability distributions of various types.
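A schematic of just the CHL weight-update step, which contrasts Hebbian coactivity collected in a clamped (teacher) phase with coactivity in a free (model) phase. The settling dynamics of the diffusion network are omitted, and `chl_update` is a hypothetical helper, not the article's code:

```python
import numpy as np

def chl_update(w, clamped_acts, free_acts, lr=0.01):
    """Contrastive Hebbian learning: move weights by the difference between
    batch-averaged coactivity in the clamped phase (outputs fixed to the
    target distribution) and in the free phase (network running freely).
    Rows of each activity matrix are samples; columns are units."""
    plus = clamped_acts.T @ clamped_acts / len(clamped_acts)
    minus = free_acts.T @ free_acts / len(free_acts)
    return w + lr * (plus - minus)
```

When the free-phase statistics match the clamped-phase statistics, the update vanishes, which is the fixed point at which the network reproduces the target distribution.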

9.
Permutation tests are based on all possible arrangements of observed data sets. Consequently, such tests yield exact probability values obtained from discrete probability distributions. An exact nondirectional method to combine independent probability values that obey discrete probability distributions is introduced. The exact method is the discrete analog to Fisher's classical method for combining probability values from independent continuous probability distributions. If the combination of probability values includes even one probability value that obeys a sparse discrete probability distribution, then Fisher's classical method may be grossly inadequate.
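For reference, Fisher's classical method for continuous p-values, which the article's discrete analog generalizes, can be written directly: X = -2 Σ ln p_i is chi-square with 2k degrees of freedom under the null, and the survival function for even degrees of freedom has a closed form. The `fisher_combine` helper is illustrative:

```python
import math

def fisher_combine(pvals):
    """Fisher's classical combination: X = -2 * sum(ln p_i) ~ chi-square
    with 2k df under the null, valid for continuous (uniform) p-values."""
    k = len(pvals)
    x = -2.0 * sum(math.log(p) for p in pvals)
    # For even df = 2k, P(X >= x) = exp(-x/2) * sum_{j=0}^{k-1} (x/2)^j / j!
    half = x / 2.0
    term, total = 1.0, 0.0
    for j in range(k):
        total += term
        term *= half / (j + 1)
    return math.exp(-half) * total
```

With a single p-value the method returns that p-value unchanged, a quick sanity check. The article's point is that this formula assumes each p_i is uniform under the null, which fails for the discrete distributions produced by permutation tests.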

10.
Three pigeons' pecks were reinforced on 1- and 2-min variable-interval schedules, and frequency distributions of their interresponse times (IRTs) were recorded. The conditional probability that a response would fall into any IRT category was estimated by the interresponse-times-per-opportunity transformation (IRTs/op). The resulting functions were notable chiefly for the relatively low probability of IRTs in the 0.2- to 0.3-sec range; in other respects they varied within and between subjects. The overall level of the curves generally rose over the course of 32 experimental hours, but their shapes changed unsystematically. The shape of the IRT distribution was much the same for VI 1-min and VI 2-min. The variability of these distributions supports the notion that the VI schedule only loosely controls response rate, permitting wide latitude to adventitious effects. There was no systematic evidence that curves changed over sessions to conform to the distribution of reinforcements by IRT.
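The IRTs/op transformation mentioned above has a simple form: the count in each IRT bin is divided by the number of "opportunities", i.e., the number of IRTs at least that long. A minimal sketch with hypothetical bin counts:

```python
def irts_per_op(counts):
    """Interresponse-times-per-opportunity: for each bin, the conditional
    probability that an IRT ends in that bin given that it lasted long
    enough to reach it: count[i] / (count[i] + counts in all later bins)."""
    total = sum(counts)
    probs, remaining = [], total
    for c in counts:
        probs.append(c / remaining if remaining else 0.0)
        remaining -= c
    return probs

# Hypothetical counts of IRTs in successive 0.1-s bins.
probs = irts_per_op([10, 20, 30, 25, 15])
```

The last bin always yields 1.0 (every IRT that reaches it ends there), which is why IRTs/op plots are read over the earlier bins.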

11.
Previous studies of sampling distributions have been conducted almost exclusively under the assumption that persons behave in accordance with the "fundamental convention" of probability, i.e. that the sum of all probability estimates will equal 1. When this assumption was tested by asking subjects to give "unrestricted" probability estimates of all possible outcomes of samples from a given population, a general tendency of overestimation made the sum of all probabilities exceed 1 to a considerable extent. The subjective sampling distributions appeared to be unaffected by sample size (N = 5 or 10) and number of outcomes, and were flatter than the corresponding "objective" sampling distributions.

12.
Data were obtained with rats on the effects of interresponse-time-contingent reinforcement of the lever-press response, using schedules in which interresponse times falling within either of two temporal intervals could be reinforced. Some of the findings were: (a) the mode of the interresponse time distribution generally occurred near the first lower bound when the maximum reinforcement rate for the two lower bounds was equal; this also frequently occurred even when the reinforcement rate was less for the first lower bound; (b) as is the case with schedules using a single interval of reinforced interresponse times, the values of the lower bounds partially determined the location and spread of the distributions, but the particular pair of values used did not seem to influence the effects of the probabilities of reinforcement; (c) although the modal interresponse time was usually at the lower bound of one of the two intervals of reinforced interresponse times, no simple relation existed between either the probability or rate of reinforcement of interresponse times in these two intervals and the location of this mode.

13.
A warning about median reaction time   (cited 4 times: 0 self-citations, 4 by others)
When used with positively skewed reaction time distributions, sample medians tend to overestimate population medians. The extent of overestimation is related directly to the amount of skew in the reaction time distributions and inversely to the size of the sample over which the median is computed. Simulations indicate that overestimation could approach 50 ms with small samples and highly skewed distributions. An important practical consequence of the bias in median reaction time is that sample medians must not be used to compare reaction times across experimental conditions when there are unequal numbers of trials in the conditions. If medians are used with unequal sample sizes, then the bias may produce an artifactual difference in conditions or conceal a true difference. Some recent studies of cuing and stimulus probability effects provide examples of this potential artifact.
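The bias is easy to demonstrate by Monte Carlo. The sketch below uses an ex-Gaussian as a stand-in for a skewed RT distribution; the parameter values and simulation sizes are illustrative assumptions, not those of the article:

```python
import numpy as np

rng = np.random.default_rng(3)
mu, sigma, tau = 400.0, 40.0, 100.0  # ex-Gaussian RT parameters, in ms
n_trials, n_sims = 10, 5000          # small per-condition sample size

# Approximate the population median from one very large sample.
big = rng.normal(mu, sigma, 1_000_000) + rng.exponential(tau, 1_000_000)
pop_median = np.median(big)

# Average the sample median over many simulated small sessions.
sample_medians = [
    np.median(rng.normal(mu, sigma, n_trials) + rng.exponential(tau, n_trials))
    for _ in range(n_sims)
]
bias = np.mean(sample_medians) - pop_median  # positive: overestimation
```

Because the bias shrinks as the sample grows, conditions with fewer trials are biased upward more than conditions with many trials, which is exactly the artifact the abstract warns about.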

14.
It is commonly claimed that conservative placement of the criterion in signal detection is due to the form of the utility function of money, to conservatism in the estimation of prior probabilities, or to probability matching tendencies. This article shows how conservatism could be caused by a systematic misconception of the shape of the underlying distributions. An experiment is described in which subjects were asked to make posterior probability judgments after performing numerical analogues of signal detection. The posterior probability judgments were radical, i.e., high posterior probabilities were overestimated and low posterior probabilities were underestimated; if this pattern of radical probability estimation reflects the subjects’ understanding of the underlying distributions, it would account for conservative criterion placement.
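The normative posterior the subjects were judging can be computed directly from Bayes' rule for the standard equal-variance Gaussian signal detection model. The `posterior_signal` helper and its default parameters are illustrative assumptions:

```python
import math

def posterior_signal(x, mu_n=0.0, mu_s=1.0, sd=1.0, prior_s=0.5):
    """Posterior probability of 'signal' given observation x, assuming
    equal-variance Gaussian noise and signal distributions (Bayes' rule)."""
    def pdf(v, mu):
        return math.exp(-0.5 * ((v - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))
    l_signal = prior_s * pdf(x, mu_s)
    l_noise = (1 - prior_s) * pdf(x, mu_n)
    return l_signal / (l_signal + l_noise)
```

At the midpoint between the two means the posterior is exactly 0.5; "radical" judgments push values above 0.5 higher and values below 0.5 lower than this curve, which in turn would make a criterion placed on judged posteriors look conservative on the true scale.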

15.
A random-walk model of visual discrimination is described and applied to reaction time (RT) distributions from three discrete-trial experiments with pigeons. Experiment 1 was a two-choice hue discrimination task with multiple hues. Choice percentages changed with hue discriminability; RTs were shortest for the least and most discriminable stimuli. Experiments 2 and 3 used go/no-go hue discriminations. Blocks of sessions differed in reward probability associated with a variable red stimulus in Experiment 2 and with a constant green stimulus in Experiment 3. Changes in hue had a large effect on response percentage and a small effect on RT; changes in reward shifted RT distributions on the time axis. The "random-walk, pigeon" model applied to these data is closely related to Ratcliff's diffusion model (Ratcliff, 1978; Ratcliff & Rouder, 1998). Simulations showed that stimulus discriminability affected the speed with which evidence accumulated toward a response threshold, in line with comparable effects in human subjects. Reward probability affected bias, modeled as the amount of evidence needed to reach one threshold rather than the other. The effects of reward probability are novel, and their isolation from stimulus effects within the decision process can guide development of a broader model of discrimination.
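The core of such a model is simple to simulate: noisy evidence accumulates with a drift set by discriminability until it crosses one of two thresholds. This is a generic random-walk sketch (parameter values and the `random_walk_trial` helper are illustrative, not the article's fitted model):

```python
import random

def random_walk_trial(drift, threshold=20.0, step_sd=1.0):
    """One simulated trial: accumulate noisy evidence from 0 until it
    crosses the upper (+threshold) or lower (-threshold) boundary.
    Returns (chose_upper, number_of_steps) with steps as a surrogate RT."""
    x, t = 0.0, 0
    while abs(x) < threshold:
        x += drift + random.gauss(0.0, step_sd)
        t += 1
    return (x >= threshold), t

random.seed(4)
trials = [random_walk_trial(drift=0.3) for _ in range(2000)]
p_upper = sum(chose for chose, _ in trials) / len(trials)
mean_rt = sum(t for _, t in trials) / len(trials)
```

In this framework, discriminability maps onto the drift (larger drift, faster and more accurate), while a reward-induced bias would instead move the starting point or make one threshold closer than the other, shifting the RT distribution without changing accumulation speed.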

16.
The relation between variables that modulate the probability and the topography of key pecks was examined using a concurrent variable-interval variable-interval schedule with food and water reinforcers. Measures of response probability (response rates, time allocation) and topography (peck duration, gape amplitude) were obtained in 5 water- and food-deprived pigeons. Key color signaled reinforcer type. During baseline, response rates and time allocations were greater to the food key than to the water key, and food-key pecks had larger gapes and shorter durations. Relative probability measures (for the food key) were increased by prewatering and decreased by prefeeding. Deprivation effects upon topography measures were apparent only when food- and water-key pecks were analyzed separately. Food-key gape amplitudes increased with prewatering and decreased with prefeeding. The clearest effect occurred with prewatering. There were no consistent effects upon water-key gapes. The key color-reinforcer relation was reversed for 3 pigeons to determine how response topography was modulated during the transition from food- to water-key pecks. Reacquisition was faster for the probability than for the topography measures. Analysis of gape-amplitude distributions during reversal indicated that response-form modulation proceeded through the generation of intermediate gape sizes.

17.
Recent studies of nonlinear dynamics of the long-term variability of heart rate have identified nontrivial long-range correlations and scale-invariant power-law characteristics (1/f noise) that were remarkably consistent between individuals and were unrelated to external or environmental stimuli (Meyer et al., 1998a). The present analysis of complex nonstationary heartbeat patterns is based on the sequential application of the wavelet transform for elimination of local polynomial nonstationary behavior and an analytic signal approach by use of the Hilbert transform (Cumulative Variation Amplitude Analysis). The effects of chronic high altitude hypoxia on the distributions and scaling functions of cardiac intervals over 24 hr epochs and 4 hr day/nighttime subepochs were determined from serial heartbeat interval time series of digitized 24 hr ambulatory ECGs recorded in 9 healthy subjects (mean age 34 yrs) at sea level and during a sojourn at high altitude (5,050 m) for 34 days (Ev-K2-CNR Pyramid Laboratory, Sagarmatha National Park, Nepal). The results suggest that there exists a hidden, potentially universal, common structure in the heterogeneous time series. A common scaling function with a stable Gamma distribution defines the probability density of the amplitudes of the fluctuations in the heartbeat interval time series of individual subjects. The appropriately rescaled distributions of normal subjects at sea level demonstrated stable Gamma scaling consistent with a single scaled plot (data collapse). Longitudinal assessment of the rescaled distributions of the 24 hr recordings of individual subjects showed that the stability of the distributions was unaffected by the subject's exposure to a hypobaric (hypoxic) environment.
The rescaled distributions of 4 hr subepochs showed similar scaling behavior with a stable Gamma distribution indicating that the common structure was unequivocally applicable to both day and night phases and, furthermore, did not undergo systematic changes in response to high altitude. In contrast, a single function stable over a wide range of time scales was not observed in patients with congestive heart failure or patients after cardiac transplantation. The functional form of the scaling in normal subjects would seem to be attributable to the underlying nonlinear dynamics of cardiac control. The results suggest that the observed Gamma scaling of the distributions in healthy subjects constitutes an intrinsic dynamical property of normal heart function that would not undergo early readjustment or late acclimatization to extrinsic environmental physiological stress, e.g., chronic hypoxia.

18.
Recent studies of nonlinear dynamics of the long-term variability of heart rate have identified nontrivial long-range correlations and scale-invariant power-law characteristics (1/f noise) that were remarkably consistent between individuals and were unrelated to external or environmental stimuli (Meyer et al., 1998a). The present analysis of complex nonstationary heartbeat patterns is based on the sequential application of the wavelet transform for elimination of local polynomial nonstationary behavior and an analytic signal approach by use of the Hilbert transform (Cumulative Variation Amplitude Analysis). The effects of chronic high altitude hypoxia on the distributions and scaling functions of cardiac intervals over 24 hr epochs and 4 hr day/nighttime subepochs were determined from serial heartbeat interval time series of digitized 24 hr ambulatory ECGs recorded in 9 healthy subjects (mean age 34 yrs) at sea level and during a sojourn at high altitude (5,050 m) for 34 days (Ev-K2-CNR Pyramid Laboratory, Sagarmatha National Park, Nepal). The results suggest that there exists a hidden, potentially universal, common structure in the heterogeneous time series. A common scaling function with a stable Gamma distribution defines the probability density of the amplitudes of the fluctuations in the heartbeat interval time series of individual subjects. The appropriately rescaled distributions of normal subjects at sea level demonstrated stable Gamma scaling consistent with a single scaled plot (data collapse). Longitudinal assessment of the rescaled distributions of the 24 hr recordings of individual subjects showed that the stability of the distributions was unaffected by the subject’s exposure to a hypobaric (hypoxic) environment. 
The rescaled distributions of 4 hr subepochs showed similar scaling behavior with a stable Gamma distribution indicating that the common structure was unequivocally applicable to both day and night phases and, furthermore, did not undergo systematic changes in response to high altitude. In contrast, a single function stable over a wide range of time scales was not observed in patients with congestive heart failure or patients after cardiac transplantation. The functional form of the scaling in normal subjects would seem to be attributable to the underlying nonlinear dynamics of cardiac control. The results suggest that the observed Gamma scaling of the distributions in healthy subjects constitutes an intrinsic dynamical property of normal heart function that would not undergo early readjustment or late acclimatization to extrinsic environmental physiological stress, e.g., chronic hypoxia.

19.
Five pigeons were trained in a concurrent foraging procedure in which reinforcers were occasionally available after fixed times in two discriminated patches. In Part 1 of the experiment, the fixed times summed to 10 s, and were individually varied between 1 and 9 s over five conditions, with the probability of a reinforcer being delivered at the fixed times always .5. In Part 2, both fixed times were 5 s, and the probabilities of food delivery were varied over conditions, always summing to 1.0. In Parts 3 and 4, one fixed time was kept constant (Part 3, 3 s; Part 4, 7 s) while the other fixed time was varied from 1 s to 15 s. Median residence times in both patches increased with increases in the food-arrival times in either patch, but increased considerably more strongly in the patch in which the arrival time was increased. However, when arrival times were very different in the two patches, residence time in the longer arrival-time patch often decreased. Patch residence also increased with increasing probability of reinforcement, but again tended to fall when one probability was much larger than the other. A detailed analysis of residence times showed that these comprised two distributions, one around a shorter mode that remained constant with changes in arrival times, and one around a longer mode that monotonically increased with increasing arrival time. The frequency of shorter residence times appeared to be controlled by the probability of, and arrival time of, reinforcers in the alternative patch. The frequency of longer residence times was controlled directly by the arrival time of reinforcers in a patch, but not by the probability of reinforcers in a patch. The environmental variables that control both staying in a patch and exiting from a patch need to be understood in the study both of timing processes and of foraging.

20.
Atkinson, David & Peijnenburg, Jeanne. Synthese (1999) 118(3): 307-328
It is argued that probability should be defined implicitly by the distributions of possible measurement values characteristic of a theory. These distributions are tested by, but not defined in terms of, relative frequencies of occurrences of events of a specified kind. The adoption of an a priori probability in an empirical investigation constitutes part of the formulation of a theory. In particular, an assumption of equiprobability in a given situation is merely one hypothesis inter alia, which can be tested, like any other assumption. Probability in relation to some theories – for example quantum mechanics – need not satisfy the Kolmogorov axioms. To illustrate how two theories about the same system can generate quite different probability concepts, and not just different probabilistic predictions, a team game for three players is described. If only classical methods are allowed, a 75% success rate at best can be achieved. Nevertheless, a quantum strategy exists that gives a 100% probability of winning. This revised version was published online in June 2006 with corrections to the Cover Date.
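The abstract does not name the game, but the well-known three-player GHZ (Mermin) game has exactly these success rates: 75% classically, 100% with a shared entangled state. Assuming that game, the classical bound can be verified by brute force over all deterministic strategies:

```python
from itertools import product

# Question triples (r, s, t) satisfying the promise r XOR s XOR t = 0.
QUESTIONS = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

def classical_best():
    """Exhaustively search deterministic classical strategies: each player
    is a function from its question bit to an answer bit (4 functions per
    player, 64 joint strategies).  The team wins a round when
    a XOR b XOR c equals r OR s OR t."""
    strategies = list(product([0, 1], repeat=2))  # (f(0), f(1)) per player
    best = 0.0
    for fa, fb, fc in product(strategies, repeat=3):
        wins = sum(
            (fa[r] ^ fb[s] ^ fc[t]) == (r | s | t)
            for r, s, t in QUESTIONS
        )
        best = max(best, wins / len(QUESTIONS))
    return best
```

Randomized classical strategies cannot beat the best deterministic one, so the search establishes the 75% ceiling; the quantum strategy (measurements on a GHZ state) wins all four question triples, which is the contrast the article exploits.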
