Similar References
20 similar references found.
1.
Two experiments studied the phenomenon of procrastination, in which pigeons chose a larger, more delayed response requirement over a smaller, more immediate response requirement. The response requirements were fixed-interval schedules that did not lead to an immediate food reinforcer, but that interrupted a 55-s period in which food was delivered at random times. The experiments used an adjusting-delay procedure in which the delay to the start of one fixed-interval requirement was varied over trials to estimate an indifference point: a delay at which the two alternatives were chosen about equally often. Experiment 1 found that as the delay to a shorter fixed-interval requirement was increased, the adjusting delay to a longer fixed-interval requirement also increased, and the rate of increase depended on the duration of the longer fixed-interval requirement. Experiment 2 found a strong preference for a fixed delay of 10 s to the start of a fixed-interval requirement compared to a mixed delay of either 0 or 20 s. The results help to distinguish among different equations that might describe the decreasing effectiveness of a response requirement with increasing delay, and they suggest that delayed reinforcers and delayed response requirements have symmetrical but opposite effects on choice.
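The adjusting-delay logic can be sketched in a few lines of code. The sketch below is only an illustration: it assumes a hyperbolic discount of the response requirement's effectiveness (one of the candidate equations the study aims to distinguish), and every function name and parameter value is invented for the example rather than taken from the article.

```python
# Hedged sketch: titrate an adjusting delay toward an indifference point,
# assuming cost = FI / (1 + k * delay). All values are illustrative.
def discounted_cost(fi_duration, delay, k=0.2):
    """Hyperbolically discounted 'cost' of a delayed fixed-interval requirement."""
    return fi_duration / (1.0 + k * delay)

def titrate_indifference(standard_fi, standard_delay, adjusting_fi,
                         adjusting_delay=10.0, step=1.0, trials=500, k=0.2):
    """Nudge the adjusting delay after each simulated choice; the final value
    approximates the delay at which the two options are chosen equally often."""
    for _ in range(trials):
        choose_adjusting = (discounted_cost(adjusting_fi, adjusting_delay, k)
                            < discounted_cost(standard_fi, standard_delay, k))
        # Choosing an option makes it less attractive on the next trial,
        # which drives the adjusting delay toward indifference.
        adjusting_delay += -step if choose_adjusting else step
        adjusting_delay = max(0.0, adjusting_delay)
    return adjusting_delay

# With a 5-s standard FI at no delay and a 15-s adjusting FI, the sketch
# settles near a 10-s adjusting delay (where the two discounted costs match).
print(titrate_indifference(standard_fi=5.0, standard_delay=0.0, adjusting_fi=15.0))
```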

2.
Pigeons trained on cyclic-interval schedules adjust their postfood pause from interval to interval within each experimental session. But on regular fixed-interval schedules, many sessions at a given parameter value are usually necessary before the typical fixed-interval "scallop" appears. In the first case, temporal control appears to act from one interfood interval to the next; in the second, it appears to act over hundreds of interfood intervals. The present experiments look at the intermediate case: daily variation in schedule parameters. In Experiments 1 and 2 we show that pauses proportional to interfood interval develop on short-valued response-initiated-delay schedules when parameters are changed daily, that additional experience under this regimen leads to little further improvement, and that pauses usually change as soon as the schedule parameter is changed. Experiment 3 demonstrates identical waiting behavior on fixed-interval and response-initiated-delay schedules when the food delays are short (less than 20 s) and conditions are changed daily. In Experiment 4 we show that daily intercalation prevents temporal control when interfood intervals are longer (25 to 60 s). The results of Experiment 5 suggest that downshifts in interfood interval produce more rapid waiting-time adjustments than upshifts. These and other results suggest that the effects of short interfood intervals are more persistent than those of long intervals.

3.
Key pecking of 4 pigeons was maintained under a multiple variable-interval 20-s variable-interval 120-s schedule of food reinforcement. When rates of key pecking were stable, a 5-s unsignaled, nonresetting delay to reinforcement separated the first peck after an interval elapsed from reinforcement in both components. Rates of pecking decreased substantially in both components. When rates were stable, the situation was changed such that the peck that began the 5-s delay also changed the color of the keylight for 0.5 s (i.e., the delay was briefly signaled). Rates increased to near-immediate reinforcement levels. In subsequent conditions, delays of 10 and 20 s, still briefly signaled, were tested. Although rates of key pecking during the component with the variable-interval 120-s schedule did not change appreciably across conditions, rates during the variable-interval 20-s component decreased greatly in 1 pigeon at the 10-s delay and decreased in all pigeons at the 20-s delay. In a control condition, the variable-interval 20-s schedule with 20-s delays was changed to a variable-interval 35-s schedule with 5-s delays, thus equating nominal rates of reinforcement. Rates of pecking increased to baseline levels. Rates of pecking, then, depended on the value of the briefly signaled delay relative to the programmed interfood times, rather than on the absolute delay value. These results are discussed in terms of similar findings in the literature on conditioned reinforcement, delayed matching to sample, and classical conditioning.

4.
Although it has repeatedly been demonstrated that pigeons, as well as other species, will often choose a variable schedule of reinforcement over an equivalent (or even richer) fixed schedule, the exact nature of that controlling relation has yet to be fully assessed. In this study pigeons were given repeated choices between concurrently available fixed-ratio and variable-ratio schedules. The fixed-ratio requirement (30 responses) was constant throughout the experiment, whereas the distribution of individual ratios making up the variable-ratio schedule changed across phases: The smallest and largest of these components were varied gradually, with the mean variable-ratio requirement constant at 60 responses. The birds' choices of the variable-ratio schedule tracked the size of the smallest variable-ratio component. A minimum variable-ratio component at or near 1 produced strong preference for the variable-ratio schedule, whereas increases in the minimum variable-ratio component resulted in reduced preference for the variable-ratio schedule. The birds' behavior was qualitatively consistent with Mazur's (1984) hyperbolic model of delayed reinforcement and could be described as approximate maximizing with respect to reinforcement value.
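Because the abstract names Mazur's (1984) hyperbolic model, a small numerical illustration may help. The sketch below assumes each ratio requirement can stand in for a delay to food, uses an arbitrary k, and treats the variable-ratio schedule as two equiprobable components; all of these are simplifying assumptions made only to show why a near-1 minimum component inflates the variable option's value.

```python
# V = mean over components of A / (1 + k * D), with ratio size standing in for delay D.
def hyperbolic_value(components, amount=1.0, k=0.2):
    """Average hyperbolic value across equiprobable schedule components."""
    return sum(amount / (1.0 + k * d) for d in components) / len(components)

print(hyperbolic_value([30]))        # fixed-ratio 30
print(hyperbolic_value([1, 119]))    # variable-ratio 60 with a component near 1: much higher
print(hyperbolic_value([50, 70]))    # variable-ratio 60 with the minimum raised: advantage gone
```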

5.
In Experiment 1 with rats, a left lever press led to a 5-s delay and then a possible reinforcer. A right lever press led to an adjusting delay and then a certain reinforcer. This delay was adjusted over trials to estimate an indifference point, or a delay at which the two alternatives were chosen about equally often. Indifference points increased as the probability of reinforcement for the left lever decreased. In some conditions with a 20% chance of food, a light above the left lever was lit during the 5-s delay on all trials, but in other conditions, the light was only lit on those trials that ended with food. Unlike previous results with pigeons, the presence or absence of the delay light on no-food trials had no effect on the rats' indifference points. In other conditions, the rats showed less preference for the 20% alternative when the time between trials was longer. In Experiment 2 with rats, fixed-interval schedules were used instead of simple delays, and the presence or absence of the fixed-interval requirement on no-food trials had no effect on the indifference points. In Experiment 3 with rats and Experiment 4 with pigeons, the animals chose between a fixed-ratio 8 schedule that led to food on 33% of the trials and an adjusting-ratio schedule with food on 100% of the trials. Surprisingly, the rats showed less preference for the 33% alternative in conditions in which the ratio requirement was omitted on no-food trials. For the pigeons, the presence or absence of the ratio requirement on no-food trials had little effect. The results suggest that there may be differences between rats and pigeons in how they respond in choice situations involving delayed and probabilistic reinforcers.

6.
Nonstable concurrent choice in pigeons   (cited 10 times: 9 self-citations, 1 by others)
Six pigeons were trained on concurrent variable-interval schedules in which the arranged reinforcer ratios changed from session to session according to a 31-step pseudorandom binary sequence. This procedure allows a quantitative analysis of the degree to which performance in an experimental session is affected by conditions in previous sessions. Two experiments were carried out. In each, the size of the reinforcer ratios arranged between the two concurrent schedules was varied between 31-step conditions. In Experiment 1, the concurrent schedules were arranged independently, and in Experiment 2 they were arranged nonindependently. An extended form of the generalized matching law described the relative contribution of past and present events to present-session behavior. Total performance in sessions was mostly determined by the reinforcer ratio in that session and partially by reinforcers that had been obtained in previous sessions. However, the initial exposure to the random sequence produced a lower sensitivity to current-session reinforcers but no difference in overall sensitivity to reinforcement. There was no evidence that the size of the reinforcer ratios available on the concurrent schedules affected either overall sensitivity to reinforcement or the sensitivity to reinforcement in the current session. There was also no evidence of any different performance between independent and nonindependent scheduling. Because of these invariances, this experiment validates the use of the pseudorandom sequence for the fast determination of sensitivity to reinforcement.
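The extended matching analysis described here can be written compactly. The sketch below shows the assumed form, log(B1/B2) = sum over lags j of a_j * log(R1/R2) at lag j, plus log c, with made-up sensitivities and reinforcer ratios; the article estimates such coefficients from the pseudorandom-sequence data rather than assuming them.

```python
import math

def log_behavior_ratio(log_reinforcer_ratios, sensitivities, log_bias=0.0):
    """Extended generalized matching: weighted sum of current and lagged
    log reinforcer ratios (lag 0 first) plus a bias term."""
    return log_bias + sum(a * x for a, x in zip(sensitivities, log_reinforcer_ratios))

# Hypothetical session history: reinforcer ratios of 4:1, 1:4, 4:1 (most recent first).
lagged_log_ratios = [math.log10(4), math.log10(1 / 4), math.log10(4)]
sensitivities = [0.6, 0.15, 0.05]  # current session dominates; earlier sessions contribute a little
print(log_behavior_ratio(lagged_log_ratios, sensitivities))
```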

7.
In concurrent-chains schedules, pigeons prefer terminal links that provide two keys correlated with reinforcers (free choice) over those that provide only one key (forced choice), terminal-link reinforcement rates being equal. With same-size keys, free choice provides a larger area available for pecking. Preferences were examined using terminal links that differed in key number only (one or two) or key size only (small and medium or medium and large), or that equated the area of the two free-choice keys with that of the forced-choice key. Medium (standard) keys were typically preferred to small keys, but indifference was typically obtained between medium and large keys. The size preference usually overrode free-choice preference with one medium key pitted against two small keys, but free-choice preference was reliably observed with one large key pitted against two medium keys. In other words, preferences were a joint function of key number and key area, implying that free-choice preference is not reducible to preference for larger key areas. Free-choice preference requires separate keys rather than larger areas; the relevant behavioral units are the discriminated operants correlated with each terminal-link key rather than classes defined by topographical features such as area or perimeter.

8.
9.
Pigeons were exposed to self-control procedures that involved illumination of light-emitting diodes (LEDs) as a form of token reinforcement. In a discrete-trials arrangement, subjects chose between one and three LEDs; each LED was exchangeable for 2-s access to food during distinct posttrial exchange periods. In Experiment 1, subjects generally preferred the immediate presentation of a single LED over the delayed presentation of three LEDs, but differences in the delay to the exchange period between the two options prevented a clear assessment of the relative influence of LED delay and exchange-period delay as determinants of choice. In Experiment 2, in which delays to the exchange period from either alternative were equal in most conditions, all subjects preferred the delayed three LEDs more often than in Experiment 1. In Experiment 3, subjects preferred the option that resulted in a greater amount of food more often if the choices also produced LEDs than if they did not. In Experiment 4, preference for the delayed three LEDs was obtained when delays to the exchange period were equal, but reversed in favor of an immediate single LED when the latter choice also resulted in quicker access to exchange periods. The overall pattern of results suggests that (a) delay to the exchange period is a more critical determinant of choice than is delay to token presentation; (b) tokens may function as conditioned reinforcers, although their discriminative properties may be responsible for the self-control that occurs under token reinforcer arrangements; and (c) previously reported differences in the self-control choices of humans and pigeons may have resulted at least in part from the procedural conventions of using token reinforcers with human subjects and food reinforcers with pigeon subjects.

10.
Effects of delayed conditioned reinforcement in chain schedules.   (cited 3 times: 3 self-citations, 0 by others)
The contingency between responding and stimulus change on a chain variable-interval 33-s, variable-interval 33-s, variable-interval 33-s schedule was weakened by interposing 3-s delays between either the first and second or the second and third links. No stimulus change signaled the delay interval and responses could occur during it, so the obtained delays were often shorter than the scheduled delay. When the delay occurred after the initial link, initial-link response rates decreased by an average of 77% with no systematic change in response rates in the second or third links. Response rates in the second link decreased an average of 59% when the delay followed that link, again with little effect on response rates in the first or third links. Because the effect of delaying stimulus change was comparable to the effect of delaying primary reinforcement in a simple variable-interval schedule, and the effect of the unsignaled delay was specific to the link in which the delay occurred, the results provide strong evidence for the concept of conditioned reinforcement.

11.
In Experiment 1, pigeons' pecks on a green key led to a 5-s delay with green houselights, and then food was delivered on 20% (or, in other conditions, 50%) of the trials. Pecks on a red key led to an adjusting delay with red houselights, and then food was delivered on every trial. The adjusting delay was used to estimate indifference points: delays at which the two alternatives were chosen about equally often. Varying the presence or absence of green houselights during the delays that preceded possible food deliveries had large effects on choice. In contrast, varying the presence of the green or red houselights in the intertrial intervals had no effects on choice. In Experiment 2, pecks on the green key led to delays of either 5 s or 30 s with green houselights, and then food was delivered on 20% of the trials. Varying the duration of the green houselights on nonreinforced trials had no effect on choice. The results suggest that the green houselights served as a conditioned reinforcer at some times but not at others, depending on whether or not there was a possibility that a primary reinforcer might be delivered. Given this interpretation of what constitutes a conditioned reinforcer, most of the results were consistent with the view that the strength of a conditioned reinforcer is inversely related to its duration.

12.
Four pigeons were exposed to second-order schedules of token reinforcement, with stimulus lights serving as token reinforcers. Tokens were earned according to a fixed-ratio (token-production) schedule, with the opportunity to exchange tokens for food (exchange period) occurring after a fixed number had been produced (exchange-production ratio). The token-production and exchange-production ratios were manipulated systematically across conditions. Response rates varied inversely with the token-production ratio at each exchange-production ratio. Response rates also varied inversely with the exchange-production ratio at each token-production ratio, particularly at the higher token-production ratios. At higher token-production and exchange-production ratios, response rates increased in token-production segments closer to exchange periods and food. Some conditions were conducted in a closed economy, in which the pigeons earned all their daily ration of food within the session. Relative to comparable open-economy conditions, response rates in the closed economy were less affected by changes in token-production ratio, resulting in higher levels of food intake and body weight. Some of the results are consistent with the economic concept of unit price, a cost-benefit ratio comprised of responses per unit of food delivery, but most are well accounted for by a consideration of the number of responses required to produce exchange periods, without regard to the amount of reinforcement available during those exchange periods.
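The unit-price contrast drawn in the final sentence is simple arithmetic, sketched below with illustrative numbers (not the article's parameters): unit price divides total responses by the food obtained at exchange, whereas the alternative account looks only at responses per exchange period.

```python
def responses_per_exchange(token_production_ratio, exchange_production_ratio):
    """Responses emitted before an exchange period becomes available."""
    return token_production_ratio * exchange_production_ratio

def unit_price(token_production_ratio, exchange_production_ratio, food_per_exchange):
    """Cost-benefit ratio: responses per unit of food delivered at exchange."""
    return responses_per_exchange(token_production_ratio, exchange_production_ratio) / food_per_exchange

# Hypothetical values: 50 responses per token, 4 tokens per exchange, 4 food deliveries per exchange.
print(responses_per_exchange(50, 4))           # 200 responses per exchange period
print(unit_price(50, 4, food_per_exchange=4))  # 50 responses per food delivery
```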

13.
Dynamics of waiting in pigeons   (cited 2 times: 2 self-citations, 0 by others)
Two experiments used response-initiated delay schedules to test the idea that when food reinforcement is available at regular intervals, the time an animal waits before its first operant response (waiting time) is proportional to the immediately preceding interfood interval (linear waiting; Wynne & Staddon, 1988). In Experiment 1 the interfood intervals varied from cycle to cycle according to one of four sinusoidal sequences with different amounts of added noise. Waiting times tracked the input cycle in a way that showed they were affected by interfood intervals earlier than the immediately preceding one. In Experiment 2 different patterns of long and short interfood intervals were presented, and the results implied that waiting times are disproportionately influenced by the shortest of recent interfood intervals. A model based on this idea is shown to account for a wide range of results on the dynamics of timing behavior.
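The contrast between linear waiting and the shortest-interval account can be made concrete. The sketch below is an assumption-laden illustration: the waiting fraction, the window of "recent" intervals, and the rule that only the minimum matters are all stand-ins for the model actually fitted in the article.

```python
def linear_waiting(prev_interfood_interval_s, fraction=0.25):
    """Linear waiting: wait a fixed fraction of the immediately preceding interval."""
    return fraction * prev_interfood_interval_s

def shortest_recent_waiting(recent_interfood_intervals_s, fraction=0.25):
    """Alternative: waiting time dominated by the shortest of the recent intervals."""
    return fraction * min(recent_interfood_intervals_s)

history = [60.0, 60.0, 15.0, 60.0]       # one short interfood interval amid long ones
print(linear_waiting(history[-1]))        # ignores the earlier 15-s interval
print(shortest_recent_waiting(history))   # pulled down by the 15-s interval
```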

14.
Pigeons chose between two schedules of food presentation, a fixed-interval schedule and a progressive-interval schedule that began at 0 s and increased by 20 s with each food delivery provided by that schedule. Choosing one schedule disabled the alternate schedule and stimuli until the requirements of the chosen schedule were satisfied, at which point both schedules were again made available. Fixed-interval duration remained constant within individual sessions but varied across conditions. Under reset conditions, completing the fixed-interval schedule not only produced food but also reset the progressive interval to its minimum. Blocks of sessions under the reset procedure were interspersed with sessions under a no-reset procedure, in which the progressive schedule value increased independent of fixed-interval choices. Median points of switching from the progressive to the fixed schedule varied systematically with fixed-interval value, and were consistently lower during reset than during no-reset conditions. Under the latter, each subject's choices of the progressive-interval schedule persisted beyond the point at which its requirements equaled those of the fixed-interval schedule at all but the highest fixed-interval value. Under the reset procedure, switching occurred at or prior to that equality point. These results qualitatively confirm molar analyses of schedule preference and some versions of optimality theory, but they are more adequately characterized by a model of schedule preference based on the cumulated values of multiple reinforcers, weighted in inverse proportion to the delay between the choice and each successive reinforcer.
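The model credited at the end of this abstract can be illustrated with a short calculation: each option's value is the cumulated, delay-weighted value of several upcoming reinforcers. The three-reinforcer horizon, the weighting constant, and the specific delays below are assumptions chosen only for illustration.

```python
def cumulated_value(delays_to_reinforcers_s, k=0.1):
    """Sum of weights 1 / (1 + k * delay), one term per upcoming reinforcer."""
    return sum(1.0 / (1.0 + k * d) for d in delays_to_reinforcers_s)

# Progressive option with a current 80-s requirement that steps up 20 s per food,
# versus a fixed-interval 120-s option, each evaluated over the next three reinforcers.
progressive = [80.0, 80.0 + 100.0, 80.0 + 100.0 + 120.0]
fixed = [120.0, 240.0, 360.0]
print(cumulated_value(progressive), cumulated_value(fixed))
```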

15.
In three experiments, pigeons were used to examine the independent effects of two normally confounded delays to reinforcement associated with changing between concurrently available variable-interval schedules of reinforcement. In Experiments 1 and 2, combinations of changeover-delay durations and fixed-interval travel requirements were arranged in a changeover-key procedure. The delay from a changeover-produced stimulus change to a reinforcer was varied while the delay between the last response on one alternative and a reinforcer on the other (the total obtained delay) was held constant. Changeover rates decreased as a negative power function of the total obtained delay. The delay from a changeover-produced stimulus change to a reinforcer had a small and inconsistent effect on changeover rates. In Experiment 3, changeover delays and fixed-interval travel requirements were arranged independently. Changeover rates decreased as a negative power function of the total obtained delay despite variations in the delay from a change in stimulus conditions to a reinforcer. Response rates during the period immediately following a changeover, however, were higher near the end of the delay from a change in stimulus conditions to a reinforcer. The results of these experiments suggest that the effects of changeover delays and travel requirements primarily result from changes in the delay between a response at one alternative and a reinforcer at the other, but the pattern of responding immediately after a changeover depends on the delay from a changeover-produced change in stimulus conditions to a reinforcer.
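The negative power function reported for changeover rate is easy to state explicitly; the coefficient and exponent below are invented for the example, not fitted values from the experiments.

```python
def changeover_rate(total_obtained_delay_s, scale=10.0, exponent=0.8):
    """Changeover rate proportional to (total obtained delay) raised to a negative power."""
    return scale * total_obtained_delay_s ** (-exponent)

for delay in (1.0, 2.0, 4.0, 8.0, 16.0):
    print(delay, round(changeover_rate(delay), 2))
```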

16.
17.
Choice, relative reinforcer duration, and the changeover ratio   (cited 4 times: 4 self-citations, 0 by others)
Relative reinforcer duration was varied in concurrent schedules with a fixed-ratio four changeover requirement. The schedule in effect after each reinforcer was randomly chosen. For all three pigeons, relative response rates overmatched relative reinforcer durations. Time allocation was less extreme and, on the average, matched relative reinforcer duration. In a subsequent manipulation, the level of preference was shown to depend on the size of the changeover requirement. These results are similar to those from related unequal reinforcement-frequency procedures.

18.
Signal functions in delayed reinforcement   (cited 4 times: 4 self-citations, 0 by others)
Three experiments were conducted with pigeons to examine the role of the signal in delay-of-reinforcement procedures. In the first, a blackout accompanying a period of nonreinforcement increased key-peck response rates maintained by immediate reinforcement. The effects of dissociating the blackout from the delay interval were examined in the second experiment. In three conditions, blackouts and unsignaled delays were negatively correlated or occurred randomly with respect to one another. A signaled delay and an unsignaled delay that omitted the blackouts were studied in two other conditions. All delay-of-reinforcement conditions generally produced response rates lower than those produced by immediate reinforcement. Signaled delays maintained higher response rates than did any of the various unsignaled-delay conditions, with or without dissociated blackouts. The effects of these latter conditions did not differ systematically from one another. The final experiment showed that response rates varied as a function of the frequency with which a blackout accompanied delay intervals. By eliminating a number of methodological difficulties present in previous delay-of-reinforcement experiments, these results suggest the importance of the signal in maintaining responding during delay-of-reinforcement procedures and, conversely, the importance of the delay interval in decreasing responding.

19.
The effects of delayed reinforcement on free-operant responding   (cited 1 time: 1 self-citation, 0 by others)
In previous studies of delayed reinforcement, response rate has been found to vary inversely with the response-reinforcer interval. However, in all of these studies the independent variable, response-reinforcer time, was confounded with the number of reinforcers presented in a fixed period of time (reinforcer frequency). In the present study, the frequency of available reinforcers was held constant, while temporal separation between response and reinforcer was independently manipulated. A repeating time cycle, T, was divided into two alternating time periods, tD and tΔ. The first response in tD was reinforced at the end of the prevailing T cycle and extinction prevailed in tΔ. Two placements for tD were defined, an early tD placement in which tD precedes tΔ and a late tD placement in which tD follows tΔ. The duration of the early and late tD was systematically decreased from 30 seconds (i.e., tD = T) to 0.1 second. Manipulation of tD placement and duration controlled the temporal separation between response and reinforcement, but it did not affect the frequency of programmed reinforcers, which was 1/T. The results show that early and late tD placements of equal duration have similar overall effects upon response rate, reinforcer frequency, responses per reinforcer, and obtained response-reinforcer temporal separation. A stepwise regression analysis using log response rate as the dependent variable showed that the obtained delay was a significant first-step variable for six of eight subjects, with obtained reinforcer frequency significant for the remaining two subjects.
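The cycle arrangement described here can be summarized numerically. The sketch below assumes a 30-s cycle and simply reports, for each tD duration and placement, the range of possible response-reinforcer delays and the programmed reinforcer frequency 1/T; it is a description of the procedure's arithmetic, not of the obtained data.

```python
T = 30.0  # cycle duration in seconds (illustrative)

def delay_range(t_d, placement):
    """(min, max) delay from a reinforced response in tD to food at the end of the cycle."""
    if placement == "early":      # tD precedes tΔ: tD occupies the start of the cycle
        return (T - t_d, T)
    if placement == "late":       # tD follows tΔ: tD occupies the end of the cycle
        return (0.0, t_d)
    raise ValueError("placement must be 'early' or 'late'")

for t_d in (30.0, 10.0, 1.0, 0.1):
    print(t_d, "early:", delay_range(t_d, "early"), "late:", delay_range(t_d, "late"),
          "programmed reinforcers per second:", 1 / T)
```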

20.
In a discrete-trials procedure with pigeons, a response on a green key led to a 4-s delay (during which green houselights were lit) and then a reinforcer might or might not be delivered. A response on a red key led to a delay of adjustable duration (during which red houselights were lit) and then a certain reinforcer. The delay was adjusted so as to estimate an indifference point: a duration for which the two alternatives were equally preferred. Once the green key was chosen, a subject had to continue to respond on the green key until a reinforcer was delivered. Each response on the green key, plus the 4-s delay that followed every response, was called one "link" of the green-key schedule. Subjects showed much greater preference for the green key when the number of links before reinforcement was variable (averaging four) than when it was fixed (always exactly four). These findings are consistent with the view that probabilistic reinforcers are analogous to reinforcers delivered after variable delays. When successive links were separated by 4-s or 8-s "interlink intervals" with white houselights, preference for the probabilistic alternative decreased somewhat for 2 subjects but was unaffected for the other 2 subjects. When the interlink intervals had the same green houselights that were present during the 4-s delays, preference for the green key decreased substantially for all subjects. These results provided mixed support for the view that preference for a probabilistic reinforcer is inversely related to the duration of conditioned reinforcers that precede the delivery of food.
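The analogy between probabilistic reinforcement and variable delays can be put in numbers. The sketch below assumes a constant probability of food per link (making the number of links geometric with mean 1/p), a 4-s link, and a hyperbolic value function with an arbitrary k; it shows why averaging over the variable delays yields a higher value than the fixed mean delay.

```python
def value_probabilistic(p, link_s, k=0.2, max_links=200):
    """Average hyperbolic value over a geometric distribution of links to food."""
    return sum(p * (1 - p) ** (n - 1) / (1.0 + k * n * link_s)
               for n in range(1, max_links + 1))

def value_fixed(delay_s, k=0.2):
    return 1.0 / (1.0 + k * delay_s)

p, link_s = 0.25, 4.0
mean_delay = link_s / p                 # 16 s to food on average
print(value_probabilistic(p, link_s))   # higher: occasional short delays are weighted heavily
print(value_fixed(mean_delay))
```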
