Similar Articles (20 results)
1.
Two experiments measured pigeons' choices between probabilistic reinforcers and certain but delayed reinforcers. In Experiment 1, a peck on a red key led to a 5-s delay and then a possible reinforcer (with a probability of .2). A peck on a green key led to a certain reinforcer after an adjusting delay. This delay was adjusted over trials so as to estimate an indifference point, or a duration at which the two alternatives were chosen about equally often. In all conditions, red houselights were present during the 5-s delay on reinforced trials with the probabilistic alternative, but the houselight colors on nonreinforced trials differed across conditions. Subjects showed a stronger preference for the probabilistic alternative when the houselights were a different color (white or blue) during the delay on nonreinforced trials than when they were red on both reinforced and nonreinforced trials. These results supported the hypothesis that the value or effectiveness of a probabilistic reinforcer is inversely related to the cumulative time per reinforcer spent in the presence of stimuli associated with the probabilistic alternative. Experiment 2 tested some quantitative versions of this hypothesis by varying the delay for the probabilistic alternative (either 0 s or 2 s) and the probability of reinforcement (from .1 to 1.0). The results were best described by an equation that took into account both the cumulative durations of stimuli associated with the probabilistic reinforcer and the variability in these durations from one reinforcer to the next.
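A minimal way to formalize the cumulative-time hypothesis is sketched below. The symbols (V, A, K, T) are assumed here for illustration; this is not necessarily the exact equation fitted in Experiment 2.

```latex
% Sketch: value V of the probabilistic alternative (reinforcer amount A, discounting parameter K)
% declines with T, the cumulative time per reinforcer spent in stimuli associated with that alternative.
V = \frac{A}{1 + K\,T}
% Example: with a 5-s signaled delay and p = .2, T = 5/.2 = 25 s if the red houselights appear on
% every trial, but only 5 s if they appear solely on reinforced trials -- consistent with the stronger
% preference observed when nonreinforced trials had a different houselight color.
```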

2.
An adjusting‐delay procedure was used to study the choices of pigeons and rats when both delay and amount of reinforcement were varied. In different conditions, the choice alternatives included one versus two reinforcers, one versus three reinforcers, and three versus two reinforcers. The delay to one alternative (the standard alternative) was kept constant in a condition, and the delay to the other (the adjusting alternative) was increased or decreased many times a session so as to estimate an indifference point—a delay at which the two alternatives were chosen about equally often. Indifference functions were constructed by plotting the adjusting delay as a function of the standard delay for each pair of reinforcer amounts. The experiments were designed to test the prediction of a hyperbolic decay equation that the slopes of the indifference functions should increase as the ratio of the two reinforcer amounts increased. Consistent with the hyperbolic equation, the slopes of the indifference functions depended on the ratios of the two reinforcer amounts for both pigeons and rats. These results were not compatible with an exponential decay equation, which predicts slopes of 1 regardless of the reinforcer amounts. Combined with other data, these findings provide further evidence that delay discounting is well described by a hyperbolic equation for both species, but not by an exponential equation. Quantitative differences in the y‐intercepts of the indifference functions from the two species suggested that the rate at which reinforcer strength decreases with increasing delay may be four or five times slower for rats than for pigeons.
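The slope prediction tested here follows directly from the two decay equations. The derivation below uses the standard forms V = A/(1 + KD) for hyperbolic decay and V = Ae^{-KD} for exponential decay, with symbols assumed for illustration.

```latex
% Hyperbolic decay: at indifference between the adjusting alternative (amount A_a, delay D_a)
% and the standard alternative (amount A_s, delay D_s),
\frac{A_a}{1 + K D_a} = \frac{A_s}{1 + K D_s}
\quad\Longrightarrow\quad
D_a = \frac{A_a}{A_s}\,D_s + \frac{A_a/A_s - 1}{K}
% The slope of the indifference function equals the amount ratio A_a/A_s, and a smaller K
% (slower discounting, as suggested here for rats) yields a larger y-intercept.

% Exponential decay: at indifference,
A_a e^{-K D_a} = A_s e^{-K D_s}
\quad\Longrightarrow\quad
D_a = D_s + \frac{\ln(A_a/A_s)}{K}
% The slope is 1 regardless of the reinforcer amounts, contrary to the obtained functions.
```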

3.
In Experiment 1 with rats, a left lever press led to a 5-s delay and then a possible reinforcer. A right lever press led to an adjusting delay and then a certain reinforcer. This delay was adjusted over trials to estimate an indifference point, or a delay at which the two alternatives were chosen about equally often. Indifference points increased as the probability of reinforcement for the left lever decreased. In some conditions with a 20% chance of food, a light above the left lever was lit during the 5-s delay on all trials, but in other conditions, the light was only lit on those trials that ended with food. Unlike previous results with pigeons, the presence or absence of the delay light on no-food trials had no effect on the rats' indifference points. In other conditions, the rats showed less preference for the 20% alternative when the time between trials was longer. In Experiment 2 with rats, fixed-interval schedules were used instead of simple delays, and the presence or absence of the fixed-interval requirement on no-food trials had no effect on the indifference points. In Experiment 3 with rats and Experiment 4 with pigeons, the animals chose between a fixed-ratio 8 schedule that led to food on 33% of the trials and an adjusting-ratio schedule with food on 100% of the trials. Surprisingly, the rats showed less preference for the 33% alternative in conditions in which the ratio requirement was omitted on no-food trials. For the pigeons, the presence or absence of the ratio requirement on no-food trials had little effect. The results suggest that there may be differences between rats and pigeons in how they respond in choice situations involving delayed and probabilistic reinforcers.

4.
Theories of probabilistic reinforcement.
In three experiments, pigeons chose between two alternatives that differed in the probability of reinforcement and the delay to reinforcement. A peck at a red key led to a delay of 5 s and then a possible reinforcer. A peck at a green key led to an adjusting delay and then a certain reinforcer. This delay was adjusted over trials so as to estimate an indifference point, or a duration at which the two alternatives were chosen about equally often. In Experiments 1 and 2, the intertrial interval was varied across conditions, and these variations had no systematic effects on choice. In Experiment 3, the stimuli that followed a choice of the red key differed across conditions. In some conditions, a red houselight was presented for 5 s after each choice of the red key. In other conditions, the red houselight was present on reinforced trials but not on nonreinforced trials. Subjects exhibited greater preference for the red key in the latter case. The results were used to evaluate four different theories of probabilistic reinforcement. The results were most consistent with the view that the value or effectiveness of a probabilistic reinforcer is determined by the total time per reinforcer spent in the presence of stimuli associated with the probabilistic alternative. According to this view, probabilistic reinforcers are analogous to reinforcers that are delivered after variable delays.

5.
In a discrete-trials procedure with pigeons, a response on a green key led to a 4-s delay (during which green houselights were lit) and then a reinforcer might or might not be delivered. A response on a red key led to a delay of adjustable duration (during which red houselights were lit) and then a certain reinforcer. The delay was adjusted so as to estimate an indifference point--a duration for which the two alternatives were equally preferred. Once the green key was chosen, a subject had to continue to respond on the green key until a reinforcer was delivered. Each response on the green key, plus the 4-s delay that followed every response, was called one "link" of the green-key schedule. Subjects showed much greater preference for the green key when the number of links before reinforcement was variable (averaging four) than when it was fixed (always exactly four). These findings are consistent with the view that probabilistic reinforcers are analogous to reinforcers delivered after variable delays. When successive links were separated by 4-s or 8-s "interlink intervals" with white houselights, preference for the probabilistic alternative decreased somewhat for 2 subjects but was unaffected for the other 2 subjects. When the interlink intervals had the same green houselights that were present during the 4-s delays, preference for the green key decreased substantially for all subjects. These results provided mixed support for the view that preference for a probabilistic reinforcer is inversely related to the duration of conditioned reinforcers that precede the delivery of food.

6.
An adjusting-amount procedure was used to measure discounting of reinforcer value by delay. Eight rats chose between a varying amount of immediate water and a fixed amount of water given after a delay. The amount of immediate water was systematically adjusted as a function of the rats' previous choices. This procedure was used to determine the indifference point at which each rat chose the immediate amount and the delayed amount with equal frequency. The amount of immediate water at this indifference point was used to estimate the value of the delayed amount of water. In Experiment 1, the effects of daily changes in the delay to the fixed reinforcer (100 microliters of water delivered after 0, 2, 4, 8, or 16 s) were tested. Under these conditions, the rats reached indifference points within the first 30 trials of each 60-trial session. In Experiment 2, the effects of water deprivation level on discounting of value by delay were assessed. Altering water deprivation level affected the speed of responding but did not affect delay discounting. In Experiment 3, the effects of varying the magnitude of the delayed water (100, 150, and 200 microliters) were tested. There was some tendency for the discounting function to be steeper for larger than for smaller reinforcers, although this difference did not reach statistical significance. In all three experiments, the obtained discount functions were well described by a hyperbolic function. These experiments demonstrate that the adjusting-amount procedure provides a useful tool for measuring the discounting of reinforcer value by delay.
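The adjusting-amount logic and the hyperbolic fit can be illustrated with a short simulation. This is a hedged sketch, not the authors' procedure or code: the starting amount, step size, and choice rule are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def hyperbolic(delay, k):
    """Hyperbolic discounting: relative value of a reinforcer at the given delay."""
    return 1.0 / (1.0 + k * delay)

def adjusting_amount(choose_immediate, start=50.0, step=5.0, n_trials=60):
    """Titrate the immediate amount toward the indifference point.

    choose_immediate(amount) returns True when the subject picks the immediate option.
    The immediate amount is lowered after an 'immediate' choice and raised after a
    'delayed' choice, so it converges on the amount judged equal to the delayed reward.
    """
    amount = start
    for _ in range(n_trials):
        amount = amount - step if choose_immediate(amount) else amount + step
    return amount

# Simulated subject: discounts a 100-microliter delayed reward hyperbolically with k = 0.1.
true_k, delayed_amount = 0.1, 100.0
delays = np.array([0.0, 2.0, 4.0, 8.0, 16.0])
indifference = [
    adjusting_amount(lambda amt, d=d: amt > delayed_amount * hyperbolic(d, true_k))
    for d in delays
]

# Fit the hyperbolic discounting function to the obtained indifference points.
k_hat, _ = curve_fit(lambda d, k: delayed_amount * hyperbolic(d, k), delays, indifference, p0=[0.05])
print("estimated k:", round(float(k_hat[0]), 3))
```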

7.
This experiment tested for transitivity in pigeons' choices between variable-time (VT) and fixed-time (FT) schedules. In a discrete-trials procedure, a subject chose between two alternatives by making a single key peck. Each choice was between a "standard alternative," which was the same schedule throughout a condition, and an "adjusting alternative," in which the delay to reinforcement was systematically increased or decreased many times a session. These adjustments enabled an approximate indifference point to be identified--the value of the adjusting delay at which the subject chose each alternative about equally often. Each test of transitivity involved four conditions. In one, the standard alternative was a variable-time schedule with a 2-s reinforcer, and the adjusting alternative also delivered a 2-s reinforcer. A second condition was similar except that the adjusting alternative delivered a 5-s reinforcer. The indifference point from each of these conditions was then converted to a fixed-time schedule for subsequent comparisons in the third and fourth conditions, respectively. Each of these last two conditions compared one of the fixed-time schedules (based upon the previous conditions and including their different reinforcer durations) with an adjusting schedule that delivered the alternative reinforcer duration, to determine whether the obtained indifference points would be those predicted from the prior alternative-duration comparisons with the VT schedule. There was little evidence for intransitivity of choice: Averaged across subjects and replications, the obtained indifference points deviated from perfect transitivity by less than 8%, and these deviations were not statistically significant. These results contrast with those of Navarick and Fantino (1972), who found frequent violations of transitivity between periodic and aperiodic schedules using a concurrent-chains procedure with variable-interval schedules in the initial links.

8.
This experiment measured pigeons' choices between delayed reinforcers and fixed-ratio schedules in which a force of approximately 0.48 N was needed to operate the response key. In ratio-delay conditions, subjects chose between a fixed-ratio schedule and an adjusting delay. The delay was increased or decreased several times a session in order to estimate an indifference point--a delay duration at which the two alternatives were chosen about equally often. Each ratio-delay condition was followed by a delay-delay condition in which subjects chose between the adjusting delay and a variable-time schedule, with the components of this schedule selected to match the ratio completion times of the preceding ratio-delay condition. The adjusting delays at the indifference point were longer when the alternative was a fixed-ratio schedule than when it was a matched variable-time schedule, which indicated a preference for the matched variable-time schedules over the fixed-ratio schedules. This preference increased in a nonlinear manner with increasing ratio size. This nonlinearity was inconsistent with a theory that states that indifference points for both time and ratio schedules can be predicted by multiplying the choice response-reinforcer intervals of the two types of schedules by different multiplicative constants. Two other theories, which predict nonlinear increases in preference for the matched variable-time schedules, are discussed.

9.
Token reinforcement, choice, and self-control in pigeons.
Pigeons were exposed to self-control procedures that involved illumination of light-emitting diodes (LEDs) as a form of token reinforcement. In a discrete-trials arrangement, subjects chose between one and three LEDs; each LED was exchangeable for 2-s access to food during distinct posttrial exchange periods. In Experiment 1, subjects generally preferred the immediate presentation of a single LED over the delayed presentation of three LEDs, but differences in the delay to the exchange period between the two options prevented a clear assessment of the relative influence of LED delay and exchange-period delay as determinants of choice. In Experiment 2, in which delays to the exchange period from either alternative were equal in most conditions, all subjects preferred the delayed three LEDs more often than in Experiment 1. In Experiment 3, subjects preferred the option that resulted in a greater amount of food more often if the choices also produced LEDs than if they did not. In Experiment 4, preference for the delayed three LEDs was obtained when delays to the exchange period were equal, but reversed in favor of an immediate single LED when the latter choice also resulted in quicker access to exchange periods. The overall pattern of results suggests that (a) delay to the exchange period is a more critical determinant of choice than is delay to token presentation; (b) tokens may function as conditioned reinforcers, although their discriminative properties may be responsible for the self-control that occurs under token reinforcer arrangements; and (c) previously reported differences in the self-control choices of humans and pigeons may have resulted at least in part from the procedural conventions of using token reinforcers with human subjects and food reinforcers with pigeon subjects.

10.
Effects of intertrial reinforcers on self-control choice.
In three experiments, pigeons chose between a small amount of food delivered after a short delay and a larger amount delivered after a longer delay. A discrete-trial adjusting-delay procedure was used to estimate indifference points--pairs of delay-amount combinations that were chosen about equally often. In Experiment 1, when additional reinforcers were available during intertrial intervals on a variable-interval schedule, preference for the smaller, more immediate reinforcer increased. Experiment 2 found that this shift in preference occurred partly because the variable-interval schedule started sooner after the smaller, more immediate reinforcer, but there was still a small shift in preference when the durations and temporal locations of the variable-interval schedules were identical for both alternatives. Experiment 3 found greater increases in preference for the smaller, more immediate reinforcer with a variable-interval 15-s schedule than with a variable-interval 90-s schedule. The results were generally consistent with a model that states that the impact of any event that follows a choice response declines according to a hyperbolic function with increasing time since the moment of choice.
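The model mentioned in the final sentence can be sketched as follows (assumed notation; the article's exact formulation may differ): every event that follows a choice, including intertrial reinforcers, contributes a hyperbolically discounted amount to that choice's value.

```latex
% Value of a choice alternative: the impact of each reinforcer of amount A_i delivered t_i seconds
% after the moment of choice declines hyperbolically with t_i, and the impacts sum:
V_{\text{choice}} \;=\; \sum_i \frac{A_i}{1 + K\,t_i}
% Intertrial reinforcers count toward this sum, so a VI 15-s schedule that starts sooner after the
% smaller, more immediate reinforcer adds more to that alternative's value than a VI 90-s schedule,
% consistent with the larger preference shifts found in Experiment 3.
```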

11.
Rats were exposed to concurrent-chains schedules in which a single variable-interval schedule arranged entry into one of two terminal-link delay periods (fixed-interval schedules). The shorter delay ended with the delivery of a single food pellet; the longer delay ended with a larger number of food pellets (two under some conditions and six under others). In Experiment 1, the terminal-link delays were selected so that under all conditions the ratio of delays would exactly equal the ratio of the number of pellets. But the absolute duration of the delays differed across conditions. In one condition, for example, rats chose between one pellet delayed 5 s and six pellets delayed 30 s; in another condition rats chose between one pellet delayed 10 s and six pellets delayed 60 s. The generalized matching law predicts indifference between the two alternatives, assuming that the sensitivity parameters for amount and delay of reinforcement are equal. The rats' choices were, in fact, close to indifference except when the choice was between one pellet delayed 5 s and six pellets delayed 30 s. That deviation from indifference suggests that the sensitivities to amount and delay differ from each other depending on the durations of the delays. In Experiment 2, rats chose between one pellet following a 5-s delay and six pellets following a delay that was systematically increased over sessions to find a point of indifference. Indifference was achieved when the delay to the six pellets was approximately 55 s. These results are consistent with the possibility that the relative sensitivities to amount and delay differ as a function of the delays.
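The indifference prediction cited above follows from a standard form of the generalized matching law when the sensitivity exponents for amount and delay are equal; the symbols below are assumed for illustration.

```latex
% Generalized matching for the two terminal links (amounts A_1, A_2; delays D_1, D_2;
% sensitivity exponents s_A for amount and s_D for delay, with immediacy = 1/delay):
\frac{B_1}{B_2} \;=\; \left(\frac{A_1}{A_2}\right)^{s_A} \left(\frac{D_2}{D_1}\right)^{s_D}
% If s_A = s_D and the delay ratio equals the amount ratio (e.g., 1 pellet at 5 s vs. 6 pellets at
% 30 s), the two factors cancel and B_1/B_2 = 1, i.e., indifference. Systematic deviations from
% indifference at particular absolute delays therefore imply that s_A and s_D are not equal there.
```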

12.
To study how effort affects reward value, we replicated Fortes, Vasconcelos and Machado's (2015) study using an adjusting‐delay task. Nine pigeons chose between a standard alternative that gave access to 4 s of food, after a 10 s delay, and an adjusting‐delay alternative that gave access to 12 s of food after a delay that changed dynamically with the pigeons' choices, decreasing when they preferred the standard alternative, and increasing when they preferred the adjusting alternative. The delay value at which preference stabilized defined the indifference point, a measure of reward value. To manipulate effort across phases, we varied the response rate required during the delay of the standard alternative. Results showed that a) the indifference point increased in the higher‐response‐rate phases, suggesting that reward value decreased with effort, and b) in the higher‐response‐rate phases, response rate in the standard alternative was linearly related to the indifference point. We advance several conceptions of how effort may change perceived delay or amount and thereby affect reward value.

13.
In an adjusting-delay choice procedure, pigeons could peck on either a red key or a green key. A peck on the red key always led to a delay associated with red houselights and then food. The delay was adjusted over trials to estimate an indifference point--a delay at which the two keys were chosen about equally often. In some conditions, a peck on the green key led to food on all trials after delays of either 10 s or 30 s, and green houselights were lit during the delays. In other conditions, food was presented on only half of the green-key trials. If the green houselights continued to occur on both reinforcement and nonreinforcement trials, preference for the green key always decreased. Preference for the green key also decreased if half of the trials had 30-s houselights followed by food and the other half had no green houselights and no food. However, preference for the green key actually increased if half of the trials had 10-s green houselights followed by food and the other half had no green houselights followed by no food. The latter condition therefore demonstrated a case in which preference for an alternative increased when food was removed from half of the trials. The results suggest that the red and green houselights served as conditioned reinforcers. A hyperbolic decay model (Mazur, 1989) provided good predictions for all conditions by assuming that the strength of a conditioned reinforcer is inversely related to the total time spent in its presence before food is delivered.
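One reading of the account described above can be written as follows; the notation is assumed for illustration and is not quoted from the article or from Mazur (1989).

```latex
% Sketch: the value of the green key depends on T, the total time spent in the green houselights
% (the conditioned reinforcer) per food delivery, discounted hyperbolically:
V \;=\; \frac{A}{1 + K\,T}
% Example: with 10-s delays and food on half the trials, T is about 20 s (two 10-s presentations per
% food) if the houselights occur on every trial, but only 10 s if they occur solely on food trials --
% predicting weaker preference when the houselights also occurred on nonreinforced trials, as observed.
```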

14.
The ability to compute probability, previously shown in nonverbal infants, apes, and monkeys, was examined in three experiments with pigeons. After responding to individually presented keys in an operant chamber that delivered reinforcement with varying probabilities, pigeons chose between these keys on probe trials. Pigeons strongly preferred a 75% reinforced key over a 25% reinforced key, even when the total number of reinforcers obtained on each key was equated. When both keys delivered 50% reinforcement, pigeons showed indifference between them, even though three times more reinforcers were obtained on one key than on the other. It is suggested that computation of probability may be common to many classes of animals and may be driven by the need to forage successfully for nutritional food items, mates, and areas with a low density of predators.

15.
Parallel experiments with rats and pigeons examined whether the size of a pre-trial ratio requirement would affect choices in a self-control situation. In different conditions, either 1 response or 40 responses were required before each trial. In the first half of each experiment, an adjusting-ratio schedule was used, in which subjects could choose a fixed-ratio schedule leading to a small reinforcer, or an adjusting-ratio schedule leading to a larger reinforcer. The size of the adjusting ratio requirement was increased and decreased over trials based on the subject's responses, in order to estimate an indifference point--a ratio at which the two alternatives were chosen about equally often. The second half of each experiment used an adjusting-delay procedure--fixed and adjusting delays to the small and large reinforcers were used instead of ratio requirements. In some conditions, particularly with the reinforcer delays, the rats had consistently longer adjusting delays with the larger pre-trial ratios, reflecting a greater tendency to choose the larger, delayed reinforcer when more responding was required to reach the choice point. No consistent effects of the pre-trial ratio were found for the pigeons in any of the conditions. These results may indicate that rats are more sensitive to the long-term reinforcement rates of the two alternatives, or they may result from a shallower temporal discounting rate for rats than for pigeons, a difference that has been observed in previous studies.

16.
Humans discount larger delayed rewards less steeply than smaller rewards, whereas no such magnitude effect has been observed in rats (and pigeons). It remains possible that rats' discounting is sensitive to differences in the quality of the delayed reinforcer even though it is not sensitive to amount. To evaluate this possibility, Experiment 1 examined discounting of qualitatively different food reinforcers: highly preferred versus nonpreferred food pellets. Similarly, Experiment 2 examined discounting of highly preferred versus nonpreferred liquid reinforcers. In both experiments, an adjusting-amount procedure was used to determine the amount of immediate reinforcer that was judged to be of equal subjective value to the delayed reinforcer. The amount and quality of the delayed reinforcer were varied across conditions. Discounting was well described by a hyperbolic function, but no systematic effects of the quantity or the quality of the delayed reinforcer were observed.

17.
Pigeons' discounting of probabilistic and delayed food reinforcers was studied using adjusting-amount procedures. In the probability discounting conditions, pigeons chose between an adjusting number of food pellets contingent on a single key peck and a larger, fixed number of pellets contingent on completion of a variable-ratio schedule. In the delay discounting conditions, pigeons chose between an adjusting number of pellets delivered immediately and a larger, fixed number of pellets delivered after a delay. Probability discounting (i.e., subjective value as a function of the odds against reinforcement) was as well described by a hyperboloid function as delay discounting was (i.e., subjective value as a function of the time until reinforcement). As in humans, the exponents of the hyperboloid function when it was fitted to the probability discounting data were lower than the exponents of the hyperboloid function when it was fitted to the delay discounting data. The subjective values of probabilistic reinforcers were strongly correlated with predictions based on simply substituting the average delay to their receipt in each probabilistic reinforcement condition into the hyperboloid discounting function. However, the subjective values were systematically underestimated using this approach. Using the discounting function proposed by Mazur (1989), which takes into account the variability in the delay to the probabilistic reinforcers, the accuracy with which their subjective values could be predicted was increased. Taken together, the present findings are consistent with Rachlin's (Rachlin, 1990; Rachlin, Logue, Gibbon, & Frankel, 1986) hypothesis that choice involving repeated gambles may be interpreted in terms of the delays to the probabilistic reinforcers.
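The discounting functions compared in this study have the following standard hyperboloid forms, and the delay-based account can be written in the same notation (symbols assumed for illustration, not quoted from the article).

```latex
% Delay discounting (hyperboloid):       V = A / (1 + k D)^s
% Probability discounting (hyperboloid): V = A / (1 + h \Theta)^s, where \Theta = (1 - p)/p is the
%                                        odds against reinforcement.
% Substituting the average delay to the probabilistic reinforcer into the delay function tends to
% underestimate its value, because the hyperbola weights short delays disproportionately. A form in
% the spirit of Mazur (1989) instead averages the hyperbola over the distribution of delays d_i that
% occur with probabilities q_i:
V \;=\; \sum_i q_i\,\frac{A}{1 + k\,d_i}
```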

18.
Two experiments studied the phenomenon of procrastination, in which pigeons chose a larger, more delayed response requirement over a smaller, more immediate response requirement. The response requirements were fixed-interval schedules that did not lead to an immediate food reinforcer, but that interrupted a 55-s period in which food was delivered at random times. The experiments used an adjusting-delay procedure in which the delay to the start of one fixed-interval requirement was varied over trials to estimate an indifference point--a delay at which the two alternatives were chosen about equally often. Experiment 1 found that as the delay to a shorter fixed-interval requirement was increased, the adjusting delay to a longer fixed-interval requirement also increased, and the rate of increase depended on the duration of the longer fixed-interval requirement. Experiment 2 found a strong preference for a fixed delay of 10 s to the start of a fixed-interval requirement compared to a mixed delay of either 0 or 20 s. The results help to distinguish among different equations that might describe the decreasing effectiveness of a response requirement with increasing delay, and they suggest that delayed reinforcers and delayed response requirements have symmetrical but opposite effects on choice.

19.
Pigeon and human subjects were given repeated choices between variable and adjusting delays to token reinforcement that titrated in relation to a subject's recent choice patterns. Indifference curves were generated under two different procedures: immediate exchange, in which a token earned during each trial was exchanged immediately for access to the terminal reinforcer (food for pigeons, video clips for humans), and delayed exchange, in which tokens accumulated and were exchanged after 11 trials. The former was designed as an analogue of procedures typically used with nonhuman subjects, the latter as an analogue to procedures typically used with human participants. Under both procedure types, different variable‐delay schedules were manipulated systematically across conditions in ways that altered the reinforcer immediacy of the risky option. Under immediate‐exchange conditions, both humans and pigeons consistently preferred the variable delay, and indifference points were generally ordered in relation to relative reinforcer immediacies. Such risk sensitivity was greatly reduced under delayed‐exchange conditions. Choice and trial‐initiation response latencies varied directly with indifference points, suggesting that local analyses may provide useful ancillary measures of reinforcer value. On the whole, the results indicate that modifying procedural features brings choices of pigeons and humans into better accord, and that human—nonhuman differences on risky choice procedures reported in the literature may be at least partly a product of procedural differences.

20.
Influences of delay and rate of reinforcement on discrete-trial choice
An adjusting procedure was used to measure pigeons' preferences among alternatives that differed in the duration of a delay before reinforcement and of an intertrial interval (ITI) after reinforcement. In most conditions, a peck at a red key led to a fixed delay, followed by reinforcement, a fixed ITI, and then the beginning of the next trial. A peck at a green key led to an adjustable delay, reinforcement, and then the next trial began without an ITI. The purpose of the adjusting delay was to estimate an indifference point, or a delay that made a subject approximately indifferent between the two alternatives. As the ITI for the red key increased from 0 s to 60 s, the green-key delay at the indifference point increased systematically but only slightly. The fact that there was some increase showed that pigeons' choices were controlled by more than simply the delay to the next reinforcer. One interpretation of these results is that besides delay of reinforcement, rate of reinforcement also influenced choice. However, an analysis that ignored reinforcement rate, but considered the delays between a choice response and the reinforcers on subsequent trials, was able to account for most of the obtained increases in green-key delays. It was concluded that in this type of discrete-trial situation, rate of reinforcement exerts little control over choice behavior, and perhaps none at all.
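The delay-based analysis described above can be sketched as follows, with assumed notation; the article's exact formulation may differ.

```latex
% Value of a choice response as the discounted sum of the reinforcer on the current trial (delay t_1)
% and reinforcers on subsequent trials (delays t_2, t_3, ... measured from that choice response):
V \;=\; \sum_{n} \frac{A}{1 + K\,t_n}
% Lengthening the red-key ITI pushes later reinforcers further from a red-key choice, slightly
% lowering that key's value; restoring indifference then requires a somewhat longer green-key delay,
% which can account for most of the observed increase without invoking overall reinforcement rate.
```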
