Related Articles
20 related articles found.
1.
Five pigeons were trained on pairs of concurrent variable-interval schedules in a switching-key procedure. The arranged overall rate of reinforcement was constant in all conditions, and the reinforcer-magnitude ratios obtained from the two alternatives were varied over five levels. Each condition remained in effect for 65 sessions and the last 50 sessions of data from each condition were analyzed. At a molar level of analysis, preference was described well by a version of the generalized matching law, consistent with previous reports. More local analyses showed that recently obtained reinforcers had small measurable effects on current preference, with the most recently obtained reinforcer having a substantially larger effect. Larger reinforcers resulted in larger and longer preference pulses, and a small preference was maintained for the larger-magnitude alternative even after long inter-reinforcer intervals. These results are consistent with the notion that the variables controlling choice have both short- and long-term effects. Moreover, they suggest that control by reinforcer magnitude is exerted in a manner similar to control by reinforcer frequency. Lower sensitivities when reinforcer magnitude is varied are likely to be due to equal frequencies of different sized preference pulses, whereas higher sensitivities when reinforcer rates are varied might result from changes in the frequencies of different sized preference pulses.
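For reference, the generalized matching law invoked in this abstract is commonly written, for a manipulation of reinforcer magnitude, as follows; the notation here is generic and not necessarily that used by the authors:

    \log\frac{B_1}{B_2} = a \,\log\frac{M_1}{M_2} + \log c

where B_1/B_2 is the ratio of responses (or time) allocated to the two alternatives, M_1/M_2 is the obtained reinforcer-magnitude ratio, a is the sensitivity parameter referred to in the final sentences, and \log c is inherent bias toward one alternative.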

2.
Reinforcement magnitude and pausing on progressive-ratio schedules
Rats responded under progressive-ratio schedules for sweetened milk reinforcers; each session ended when responding ceased for 10 min. Experiment 1 varied the concentration of milk and the duration of postreinforcement timeouts. Postreinforcement pausing increased as a positively accelerated function of the size of the ratio, and the rate of increase was reduced as a function of concentration and by timeouts of 10 s or longer. Experiment 2 varied reinforcement magnitude within sessions (number of dipper operations per reinforcer) in conjunction with stimuli correlated with the upcoming magnitude. In the absence of discriminative stimuli, pausing was longer following a large reinforcer than following a small one. Pauses were reduced by a stimulus signaling a large upcoming reinforcer, particularly at the highest ratios, and the animals tended to quit responding when the past reinforcer was large and the stimulus signaled that the next one would be small. Results of both experiments revealed parallels between responding under progressive-ratio schedules and other schedules containing ratio contingencies. Relationships between pausing and magnitude suggest that ratio pausing is under the joint control of inhibitory properties of the past reinforcer and excitatory properties of stimuli correlated with the upcoming reinforcer, rather than under the exclusive control of either factor alone.

3.
Pigeons pecked a key under two-component multiple variable-ratio schedules that offered 8-s or 2-s access to grain. Postreinforcement pausing and the rates of responding following the pause (run rates) in each component were measured as a function of variable-ratio size and the size of the lowest ratio in the configuration of ratios that made up each schedule. In one group of subjects, variable-ratio size was varied while the size of the lowest ratio was held constant. In a second group, the size of the lowest ratio was varied while variable-ratio size was held constant. For all subjects, the mean duration of postreinforcement pausing increased in the 2-s component but not in the 8-s component. Postreinforcement pauses increased with increases in variable-ratio size (Group 1) and with increases in the lowest ratio (Group 2). In both groups, run rates were slightly higher in the 8-s component than in the 2-s component. Run rates decreased slightly as variable-ratio size increased, but were unaffected by increases in the size of the lowest ratio. These results suggest that variable-ratio size, the size of the lowest ratio, and reinforcer magnitude interact to determine the duration of postreinforcement pauses.

4.
Pigeons pecked a key under two-component multiple variable-ratio schedules that offered 8-s or 2-s access to grain. Phase 1 assessed the effects of differences in reinforcer magnitude on postreinforcement pausing, as a function of ratio size. In Phase 2, postreinforcement pausing and the first five interresponse times (IRTs) in each ratio were measured as a function of differences in reinforcer magnitude under equal variable-ratio schedules consisting of different configurations of individual ratios. Rates were also calculated exclusive of postreinforcement pause times in both phases. The results from Phase 1 showed that as ratio size increased, the differences in pausing educed by unequal reinforcer magnitudes also increased. The results of Phase 2 showed that the effects of reinforcer magnitude on pausing and IRT durations were a function of schedule configuration. Under one configuration, in which the smallest ratio was a fixed-ratio 1, pauses were unaffected by magnitude but the first five IRTs were affected. Under the other configuration, in which the smallest ratio was a fixed-ratio 7, pauses were affected by reinforcer magnitude but the first five IRTs were not. The effect of each configuration seemed to be determined by the value of the smallest individual ratio. Rates calculated exclusive of postreinforcement pause times were, in general, directly related to reinforcer magnitude, and the relation was shown to be a function of schedule configuration.

5.
Seven rats responding under fixed-ratio or variable-ratio schedules of food reinforcement had continuous access to a drinking tube inserted into the operant chamber. Under different conditions they could drink either tap water or one of two saccharin solutions. In a baseline condition, the drinking bottle was empty. Preratio pausing was observed with both schedules, more so with the fixed-ratio than the variable-ratio schedule, and increasing the concentration of the saccharin solution increased the duration of pausing. Comparisons with baseline performances revealed that the additional pausing was largely, but not entirely, spent drinking. The results support the view that pausing under ratio schedules is a consequence of competition between the scheduled reinforcer and alternative reinforcers that also are available within the experimental environment.

6.
We conducted three experiments to reproduce and extend Perone and Courtney's (1992) study of pausing at the beginning of fixed-ratio schedules. In a multiple schedule with unequal amounts of food across two components, they found that pigeons paused longest in the component associated with the smaller amount of food (the lean component), but only when it was preceded by the rich component. In our studies, adults with mild intellectual disabilities responded on a touch-sensitive computer monitor to produce money. In Experiment 1, the multiple-schedule components differed in both response requirement and reinforcer magnitude (i.e., the rich component required fewer responses and produced more money than the lean component). Effects shown with pigeons were reproduced in all 7 participants. In Experiment 2, we removed the stimuli that signaled the two schedule components, and participants' extended pausing was eliminated. In Experiment 3, to assess sensitivity to reinforcer magnitude versus fixed-ratio size, we presented conditions with equal ratio sizes but disparate magnitudes and conditions with equal magnitudes but disparate ratio sizes. Sensitivity to these manipulations was idiosyncratic. The present experiments obtained schedule control in verbally competent human participants and, despite procedural differences, we reproduced findings with animal participants. We showed that pausing is jointly determined by past conditions of reinforcement and stimuli correlated with upcoming conditions.

7.
Four pigeons had discrimination training that required the choice of a left side-key after completing a fixed-ratio 10 on the center key, and a right side-key choice after fixed-ratio 20. Correct choices were reinforced on various fixed-interval, fixed-ratio, random-interval, and random-ratio schedules. When performance was examined across successive 15-second intervals (fixed-interval and fixed-ratio schedules) accuracy was high in the first 15-second interval, decreased in one or several of the next 15-second intervals, and then increased again as reinforcement was approached. When performance was examined across correct trials on fixed-interval and fixed-ratio schedules, accuracy was lowest immediately after reinforcement, followed by a systematic increase in accuracy as the number of correct choices increased. These patterns were due primarily to errors on fixed-ratio 20 trials. Systematic accuracy patterns did not occur on random-interval or random-ratio schedules. The results indicate that when choice patterns differed on fixed-interval and fixed-ratio schedules, the differences were due to the method of data analysis.

8.
Pigeons pecked a key, producing food reinforcement on fixed-ratio (FR) schedules requiring 50, 100, or 150 responses. In each session, 30-second timeouts were inserted before a random half of the FR trials, whereas the other trials began immediately after reinforcement. In general, preratio pauses were shorter on trials preceded by timeouts. On these trials, the probability of a first response tended to be highest in the first 20 seconds of the trials, suggesting that the shorter pauses were the result of transient behavioral contrast. Direct observations and analyses of interresponse times (IRTs) after the preratio pause indicated that IRTs could be grouped into three categories: (1) IRTs of about .1 second, which were produced by small head movements in the vicinity of the key; (2) IRTs of about .3 second, which were produced by distinct pecking motions; and (3) IRTs greater than .5 second, which were accompanied by pausing or movements away from the key. At all ratio sizes, as a subject progressed through a trial, the probability of a long IRT decreased, whereas the probability of an intermediate IRT usually increased at first and then decreased. The probability of a short IRT increased monotonically across a trial. The results show that responding changes systematically as a subject progresses through a ratio on an FR schedule. Some characteristics of performance varied as functions of the absolute size of the response requirement, whereas others appeared to depend on the relative location within a ratio (i.e., the proportion of the ratio completed at a given moment).

9.
Hens responded under multiple fixed-ratio schedules with equal response requirements and either a 1-s or a 6-s reinforcer. Upcoming reinforcer size was indicated by key color. Components were presented in a quasirandom series so that all four component transitions occurred. Postreinforcement pauses were affected by both the preceding and the upcoming reinforcer size: pauses after a large reinforcer were longer when the upcoming reinforcer was small than when it was large, and pauses after a small reinforcer were longer when the upcoming reinforcer was small rather than large. Pauses increased with fixed-ratio size, and the effects of reinforcer size were larger the larger the ratio. When reinforcer size was not signaled (mixed fixed-ratio schedules), pauses were shorter after small than after large reinforcers. Signaling the upcoming reinforcer attenuated the effect of the previous reinforcer size on pause duration when small was followed by small and when either small or large was followed by large, but enhanced the effect when large was followed by small. There was no effect of reinforcer size on pause duration when single fixed-ratio schedules were arranged. The effects of reinforcer size on pauses depend on the size and range of the fixed ratios as well as the exact procedures used in the study.

10.
One assumption of the matching approach to choice is that different independent variables control choice independently of each other. We tested this assumption for reinforcer rate and magnitude in an extensive parametric experiment. Five pigeons responded for food reinforcement on switching-key concurrent variable-interval variable-interval schedules. Across conditions, the ratios of reinforcer rates and of reinforcer magnitudes on the two alternatives were both manipulated. Control by each independent variable, as measured by generalized-matching sensitivity, changed significantly with the ratio of the other independent variable. Analyses taking the model-comparison approach, which weighs improvement in goodness-of-fit against increasing number of free parameters, were inconclusive. These analyses compared a model assuming constant sensitivity to magnitude across all reinforcer-rate ratios with two alternative models. One of those alternatives allowed sensitivity to magnitude to vary freely across reinforcer-rate ratios, and was less efficient than the common-sensitivity model for all pigeons, according to the Schwarz-Bayes information criterion. The second alternative model constrained sensitivity to magnitude to be equal for pairs of reinforcer-rate ratios that deviated from unity by proportionately equal amounts but in opposite directions. This model was more efficient than the common-magnitude-sensitivity model for 2 of the pigeons, but not for the other 3. An analysis of variance, carried out independently of the generalized-matching analysis, also showed a significant interaction between the effects of reinforcer rate and reinforcer magnitude on choice. On balance, these results suggest that the assumption of independence inherent in the matching approach cannot be maintained. Relative reinforcer rates and magnitudes do not control choice independently.
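The independence assumption at issue here concerns the concatenated generalized matching law, in which rate and magnitude ratios are assumed to contribute additively with fixed sensitivities. A generic statement of that model (standard notation, not necessarily the authors' own parameterization) is:

    \log\frac{B_1}{B_2} = a_r \,\log\frac{R_1}{R_2} + a_m \,\log\frac{M_1}{M_2} + \log c

where R_1/R_2 and M_1/M_2 are the obtained reinforcer-rate and reinforcer-magnitude ratios, a_r and a_m are the corresponding sensitivities, and \log c is bias. Independence amounts to assuming that a_m does not change as the rate ratio is varied (and vice versa), which is the assumption the results above call into question.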

11.
Six pigeons were trained on concurrent variable-interval schedules with unequal reinforcer durations for the two responses. The schedules arranged on the two keys were kept equal while they were varied in absolute size. As the overall reinforcer rate was increased, both response-allocation and time-allocation measures of choice showed a trend toward indifference, and measures of sensitivity to reinforcer-duration ratios significantly decreased. Recent reports have shown that the generalized matching law cannot describe the changes in behavior allocation under constant delay-, duration-, or rate-ratios when changes are made in the absolute levels of each of these variables. The present results complement these findings by demonstrating that the concatenated generalized matching law cannot describe the interactions of two reinforcer variables on behavior allocation.

12.
Seven pigeons were studied in two experiments in which key pecks were reinforced under a second-order schedule wherein satisfaction of variable-interval schedule requirements produced food or a brief stimulus. In the second part of each session, responses produced only the brief stimulus according to a variable-interval schedule (food extinction). For the 4 pigeons in Experiment 1, the response key was red throughout the session. In separate phases, the brief stimulus was either paired with food, not paired with food, or not presented during extinction. d-Amphetamine (0.3 to 10.0 mg/kg) dose-dependently reduced food-maintained responding during the first part of the session and, at intermediate dosages, increased responding during the extinction portion of the session. The magnitude of these increases, however, did not consistently depend on whether the brief stimulus was paired, not paired, or not presented. It was also true that under nondrug conditions, response rates during extinction did not differ reliably depending on pairing operations for the brief stimulus. In Experiment 2, 3 different pigeons responded under a procedure wherein the key was red in the component with food presentations and blue in the extinction component (i.e., multiple schedule). Again, d-amphetamine produced dose-related decreases in responding during the first part of a session and increases in responding in the second part of the session. These increases, however, were related to the pairing operations; larger increases were observed when the brief stimulus was paired with food than when it was not or when it was not presented at all. Under nondrug conditions, the paired brief stimulus controlled higher response rates during extinction than did a nonpaired stimulus or no stimulus. These findings suggest that d-amphetamine can enhance the efficacy of conditioned reinforcers, and that this effect may be more robust if conditioned reinforcers occur in the context of a signaled period of extinction.

13.
14.
We investigated the effects that sequences of reinforcers obtained from the same response key have on local preference in concurrent variable-interval schedules with pigeons as subjects. With an overall reinforcer rate of one every 27 s, on average, reinforcers were scheduled dependently, and the probability that a reinforcer would be arranged on the same alternative as the previous reinforcer was manipulated. Throughout the experiment, the overall reinforcer ratio was 1:1, but across conditions we varied the average lengths of same-key reinforcer sequences by varying this conditional probability from 0 to 1. Thus, in some conditions, reinforcer locations changed frequently, whereas in others there tended to be very long sequences of same-key reinforcers. Although there was a general tendency to stay at the just-reinforced alternative, this tendency was considerably decreased in conditions where same-key reinforcer sequences were short. Some effects of reinforcers may thus be accounted for, at least in part, by their signaling of subsequent reinforcer locations.

15.
Six pigeons were trained in a procedure in which sessions included seven unsignaled components, each offering two pecking keys, and each providing a potentially different reinforcer ratio between the two keys. Across conditions, various combinations of reinforcer ratios and reinforcer-magnitude ratios were used to create unequal reinforcer distributions between the two alternatives when averaged across a session. The results extended previous research using the same basic procedure that had included only reinforcer distributions symmetrical around 1:1. Data analyses suggested that the variables controlling choice operated at a number of levels: First, individual reinforcers had local effects on choice; second, sequences of successive reinforcers obtained at the same alternative (continuations) had cumulative effects; and, third, when these sequences themselves occurred with greater frequency, their effects further cumulated. A reinforcer obtained at the other alternative following a sequence of continuations (a discontinuation) had a large effect and apparently reset choice to levels approximating the sessional reinforcer ratio.

16.
17.
Consideration of reinforcer magnitude may be important for maximizing the efficacy of treatment for problem behavior. Nonetheless, relatively little is known about children's preferences for different magnitudes of social reinforcement or the extent to which preference is related to differences in reinforcer efficacy. The purpose of the current study was to evaluate the relations among reinforcer magnitude, preference, and efficacy by drawing on the procedures and results of basic experimentation in this area. Three children who engaged in problem behavior that was maintained by social positive reinforcement (attention, access to tangible items) participated. Results indicated that preference for different magnitudes of social reinforcement may predict reinforcer efficacy and that magnitude effects may be mediated by the schedule requirement.

18.
The present study measured the effects of stimulus and reinforcer variations on pigeons' behavior in two different choice procedures. Two intensities of white light were presented as the stimuli on the main key in a switching-key concurrent schedule and as the sample stimuli in a signal-detection procedure. Under both procedures, the scheduled rate of reinforcement was varied across conditions to produce various ratios of obtained reinforcement. These ratios were obtained for seven pairs of light intensities. In the concurrent schedules, the effects of reinforcer-ratio variations were positively correlated with the physical disparity between the two light intensities. In the signal-detection procedure, changes in the reinforcer ratio produced greater effects on performance when stimulus disparity was very low or very high compared to those found at intermediate levels of stimulus disparity. This discrepancy creates a dilemma for existing behavioral models of signal-detection performance.

19.
Six pigeons were trained in experimental sessions that arranged six or seven components with various concurrent-schedule reinforcer ratios associated with each. The order of the components was determined randomly without replacement. Components lasted until the pigeons had received 10 reinforcers, and were separated by 10-s blackout periods. The component reinforcer ratios arranged in most conditions were 27:1, 9:1, 3:1, 1:1, 1:3, 1:9 and 1:27; in others, there were only six components, three of 27:1 and three of 1:27. In some conditions, each reinforcement ratio was signaled by a different red-yellow flash frequency, with the frequency perfectly correlated with the reinforcer ratio. Additionally, a changeover delay was arranged in some conditions, and no changeover delay in others. When component reinforcer ratios were signaled, sensitivity to reinforcement values increased from around 0.40 before the first reinforcer in a component to around 0.80 before the 10th reinforcer. When reinforcer ratios were not signaled, sensitivities typically increased from zero to around 0.40. Sensitivity to reinforcement was around 0.20 lower in no-changeover-delay conditions than in changeover-delay conditions, but increased in the former after exposure to changeover delays. Local analyses showed that preference was extreme towards the reinforced alternative for the first 25 s after reinforcement in changeover-delay conditions regardless of whether components were signaled or not. In no-changeover-delay conditions, preference following reinforcers was either absent, or, following exposure to changeover delays, small. Reinforcers have both local and long-term effects on preference. The local effects, but not the long-term effects, are strongly affected by the presence of a changeover delay. Stimulus control may be more closely associated with longer-term, more molar, reinforcer effects.

20.
Discrete-trial choice in pigeons: Effects of reinforcer magnitude
The preference of pigeons for large reinforcers that occasionally followed a response versus small reinforcers that invariably followed a response was studied in a discrete-trial situation. Two differently colored keys were associated with the two reinforcement alternatives, and preference was measured as the proportion of choice trials on which the key associated with uncertain reinforcement was pecked. A combination of choice and guidance trials ensured that received distributions of reinforcement equalled the scheduled distributions. For five of six subjects, preference for the uncertain reinforcer appeared to be a linear function of the magnitude of the certain reinforcer. In addition, there was greater preference for the response alternative associated with uncertain reinforcement than would be expected on the basis of net reinforcer value.
