Similar documents (20 results)
1.
Effects of intertrial reinforcers on self-control choice.
In three experiments, pigeons chose between a small amount of food delivered after a short delay and a larger amount delivered after a longer delay. A discrete-trial adjusting-delay procedure was used to estimate indifference points--pairs of delay-amount combinations that were chosen about equally often. In Experiment 1, when additional reinforcers were available during intertrial intervals on a variable-interval schedule, preference for the smaller, more immediate reinforcer increased. Experiment 2 found that this shift in preference occurred partly because the variable-interval schedule started sooner after the smaller, more immediate reinforcer, but there was still a small shift in preference when the durations and temporal locations of the variable-interval schedules were identical for both alternatives. Experiment 3 found greater increases in preference for the smaller, more immediate reinforcer with a variable-interval 15-s schedule than with a variable-interval 90-s schedule. The results were generally consistent with a model that states that the impact of any event that follows a choice response declines according to a hyperbolic function with increasing time since the moment of choice.
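For reference, the hyperbolic-decay model alluded to here is conventionally written as follows (a standard statement of the equation; the symbols are the usual ones and are not quoted from this abstract):

```latex
% Hyperbolic decay: value V of a reinforcer of amount A delivered D seconds
% after the choice response, with K a free discounting-rate parameter
% fitted per subject.
V = \frac{A}{1 + K D}
```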

2.
Human subjects were exposed to a concurrent-chains schedule in which reinforcer amounts, delays, or both were varied in the terminal links, and consummatory responses were required to receive points that were later exchangeable for money. Two independent variable-interval 30-s schedules were in effect during the initial links, and delay periods were defined by fixed-time schedules. In Experiment 1, subjects were exposed to three different pairs of reinforcer amounts and delays, and sensitivity to reinforcer amount and delay was determined based on the generalized matching law. The relative responding (choice) of most subjects was more sensitive to reinforcer amount than to reinforcer delay. In Experiment 2, subjects chose between immediate smaller reinforcers and delayed larger reinforcers in five conditions with and without timeout periods that followed a shorter delay, in which reinforcer amounts and delays were combined to make different predictions based on local reinforcement density (i.e., points per delay) or overall reinforcement density (i.e., points per total time). In most conditions, subjects' choices were qualitatively in accord with the predictions from the overall reinforcement density calculated by the ratio of reinforcer amount and total time. Therefore, the overall reinforcement density appears to influence the preference of humans in the present self-control choice situation.
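To make the contrast between the two density measures concrete, here is a small illustration; the point values, delays, and timeout below are hypothetical, not the parameters used in the experiment:

```python
# Hypothetical illustration of local vs. overall reinforcement density.
# Option "small": 10 points after a 5-s delay; option "large": 30 points
# after a 20-s delay; a 15-s timeout after the shorter delay equates the
# total trial time of the two options at 20 s. All numbers are made up.
options = {
    "small": {"points": 10, "delay": 5, "total_time": 20},
    "large": {"points": 30, "delay": 20, "total_time": 20},
}
for name, opt in options.items():
    local = opt["points"] / opt["delay"]          # points per delay
    overall = opt["points"] / opt["total_time"]   # points per total time
    print(f"{name}: local = {local:.2f}, overall = {overall:.2f} points/s")
# Local density favors the small option (2.00 vs 1.50 points/s), while
# overall density favors the large option (1.50 vs 0.50 points/s), so the
# two measures make opposite predictions for the same pair of options.
```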

3.
Sensitivity to reinforcer duration in a self-control procedure
In a concurrent-chains procedure, pigeons' responses on left and right keys were followed by reinforcers of different durations at different delays following the choice responses. Three pairs of reinforcer delays were arranged in each session, and reinforcer durations were varied over conditions. In Experiment 1 reinforcer delays were unequal, and in Experiment 2 reinforcer delays were equal. In Experiment 1 preference reversal was demonstrated in that an immediate short reinforcer was chosen more frequently than a longer reinforcer delayed 6 s from the choice, whereas the longer reinforcer was chosen more frequently when delays to both reinforcers were lengthened. In both experiments, choice responding was more sensitive to variations in reinforcer duration at overall longer reinforcer delays than at overall shorter reinforcer delays, independently of whether fixed-interval or variable-interval schedules were arranged in the choice phase. We concluded that preference reversal results from a change in sensitivity of choice responding to ratios of reinforcer duration as the delays to both reinforcers are lengthened.
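The preference reversal described here falls directly out of hyperbolic discounting. The sketch below is a hypothetical demonstration only; the amounts, delays, and discounting rate are illustrative and are not the values used in the experiment:

```python
# Hypothetical demonstration of preference reversal under hyperbolic decay
# (V = A / (1 + K*D)); amounts, delays, and K are illustrative only.
def value(amount, delay, k=1.0):
    return amount / (1 + k * delay)

small_amt, large_amt = 2.0, 6.0   # seconds of food access
for added in (0, 10):             # lengthen both delays by the same amount
    d_small, d_large = 0 + added, 6 + added
    v_small = value(small_amt, d_small)
    v_large = value(large_amt, d_large)
    prefer = "small/immediate" if v_small > v_large else "large/delayed"
    print(f"delays {d_small:>2} s vs {d_large:>2} s -> prefer {prefer}")
# With delays of 0 s vs 6 s the immediate short reinforcer has the higher
# value; adding 10 s to both delays reverses the ordering in favor of the
# longer reinforcer, mirroring the reversal reported in Experiment 1.
```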

4.
Previous quantitative models of choice in a self-control paradigm (choice between a larger, more-delayed reinforcer and a smaller, less-delayed reinforcer) have not described individual differences. Two experiments are reported that provide additional quantitative data on experience-based differences in choice between reinforcers of varying sizes and delays. In Experiment 1, seven pigeons in a self-control paradigm were exposed to a fading procedure that increased choices of the larger, more-delayed reinforcer through gradually decreasing the delay to the smaller of two equally delayed reinforcers. Three control subjects, exposed to each of the small-reinforcer delays to which the experimental subjects were exposed, but for fewer sessions, demonstrated that lengthy exposure to each of the conditions in the fading procedure may be necessary in order for the increase to occur. In Experiment 2, pigeons with and without fading-procedure exposure chose between reinforcers of varying sizes and delays scheduled according to a concurrent variable-interval variable-interval schedule. In both experiments, pigeons with fading-procedure exposure were more sensitive to variations in reinforcer size than reinforcer delay when compared with pigeons without this exposure. The data were described by the generalized matching law when the relative sizes of its exponents, representing subjects' relative sensitivity to reinforcer size and delay, were grouped according to subjects' experience.

5.
We investigated the effects that sequences of reinforcers obtained from the same response key have on local preference in concurrent variable-interval schedules with pigeons as subjects. With an overall reinforcer rate of one every 27 s, on average, reinforcers were scheduled dependently, and the probability that a reinforcer would be arranged on the same alternative as the previous reinforcer was manipulated. Throughout the experiment, the overall reinforcer ratio was 1:1, but across conditions we varied the average lengths of same-key reinforcer sequences by varying this conditional probability from 0 to 1. Thus, in some conditions, reinforcer locations changed frequently, whereas in others there tended to be very long sequences of same-key reinforcers. Although there was a general tendency to stay at the just-reinforced alternative, this tendency was considerably decreased in conditions where same-key reinforcer sequences were short. Some effects of reinforcers may thus be accounted for, at least in part, by their signaling of subsequent reinforcer locations.
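A minimal sketch of the dependent scheduling rule described above is shown below. The function name and structure are illustrative, not the authors' actual control code, and the sketch omits whatever constraints the published procedure used to hold the session-wide reinforcer ratio at exactly 1:1:

```python
import random

# Assign each reinforcer to the same key as the previous one with
# probability p_same, otherwise to the other key.
def assign_reinforcer_keys(n_reinforcers, p_same, seed=None):
    rng = random.Random(seed)
    keys = [rng.choice(["left", "right"])]
    for _ in range(n_reinforcers - 1):
        if rng.random() < p_same:
            keys.append(keys[-1])  # stay on the just-reinforced key
        else:
            keys.append("left" if keys[-1] == "right" else "right")
    return keys

# p_same = 0 strictly alternates keys (short sequences); values near 1
# produce very long same-key sequences, the manipulation varied across
# conditions in the experiment.
print(assign_reinforcer_keys(10, p_same=0.8, seed=1))
```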

6.
Two experiments with human subjects investigated the effects of rate of reinforcement and reinforcer magnitude upon choice. In Experiment 1, each of five subjects responded on four concurrent variable-interval schedules. In contrast to previous studies using non-human organisms, relative response rate did not closely match relative rate of reinforcement. Discrepancies ranged from 0.03 to 0.43 (mean equal to 0.19). Similar discrepancies were found between relative amount of time spent responding on each schedule and the corresponding relative rates of reinforcement. In Experiment 2, in which reinforcer magnitude was varied for each of five subjects, similar discrepancies, ranging from 0.05 to 0.50 (mean equal to 0.21), were found between relative response rate and relative proportion of reinforcers received. In both experiments, changeover rates were lower on the long-interval concurrent schedules than on the short-interval ones. The results suggest that simple application of previous generalizations regarding the effects of reinforcement rate and reinforcer magnitude on choice for variable-interval schedules does not accurately describe human behavior in a simple laboratory situation.
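The matching relation against which these discrepancies were measured is conventionally written as follows; the numerical example in the comment is mine, chosen only to show how a discrepancy of the reported size arises:

```latex
% Strict matching for two concurrent variable-interval schedules:
% relative response rate equals relative reinforcement rate.
\frac{B_1}{B_1 + B_2} = \frac{R_1}{R_1 + R_2}
% A reported discrepancy is the absolute difference between the two sides;
% e.g., a relative response rate of 0.70 against a relative reinforcement
% rate of 0.51 would give a discrepancy of 0.19, the mean found in
% Experiment 1.
```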

7.
Token reinforcement, choice, and self-control in pigeons.
Pigeons were exposed to self-control procedures that involved illumination of light-emitting diodes (LEDs) as a form of token reinforcement. In a discrete-trials arrangement, subjects chose between one and three LEDs; each LED was exchangeable for 2-s access to food during distinct posttrial exchange periods. In Experiment 1, subjects generally preferred the immediate presentation of a single LED over the delayed presentation of three LEDs, but differences in the delay to the exchange period between the two options prevented a clear assessment of the relative influence of LED delay and exchange-period delay as determinants of choice. In Experiment 2, in which delays to the exchange period from either alternative were equal in most conditions, all subjects preferred the delayed three LEDs more often than in Experiment 1. In Experiment 3, subjects preferred the option that resulted in a greater amount of food more often if the choices also produced LEDs than if they did not. In Experiment 4, preference for the delayed three LEDs was obtained when delays to the exchange period were equal, but reversed in favor of an immediate single LED when the latter choice also resulted in quicker access to exchange periods. The overall pattern of results suggests that (a) delay to the exchange period is a more critical determinant of choice than is delay to token presentation; (b) tokens may function as conditioned reinforcers, although their discriminative properties may be responsible for the self-control that occurs under token reinforcer arrangements; and (c) previously reported differences in the self-control choices of humans and pigeons may have resulted at least in part from the procedural conventions of using token reinforcers with human subjects and food reinforcers with pigeon subjects.

8.
An adjusting-delay procedure was used to study the choices of pigeons and rats when both delay and amount of reinforcement were varied. In different conditions, the choice alternatives included one versus two reinforcers, one versus three reinforcers, and three versus two reinforcers. The delay to one alternative (the standard alternative) was kept constant in a condition, and the delay to the other (the adjusting alternative) was increased or decreased many times a session so as to estimate an indifference point—a delay at which the two alternatives were chosen about equally often. Indifference functions were constructed by plotting the adjusting delay as a function of the standard delay for each pair of reinforcer amounts. The experiments were designed to test the prediction of a hyperbolic decay equation that the slopes of the indifference functions should increase as the ratio of the two reinforcer amounts increased. Consistent with the hyperbolic equation, the slopes of the indifference functions depended on the ratios of the two reinforcer amounts for both pigeons and rats. These results were not compatible with an exponential decay equation, which predicts slopes of 1 regardless of the reinforcer amounts. Combined with other data, these findings provide further evidence that delay discounting is well described by a hyperbolic equation for both species, but not by an exponential equation. Quantitative differences in the y-intercepts of the indifference functions from the two species suggested that the rate at which reinforcer strength decreases with increasing delay may be four or five times slower for rats than for pigeons.
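The slope predictions being tested follow directly from the two decay equations. The derivation below uses the conventional notation rather than symbols quoted from the article:

```latex
% Hyperbolic decay: V = A / (1 + K D). At indifference, the adjusting
% alternative (amount A_a, delay D_a) and the standard alternative
% (A_s, D_s) have equal value:
\frac{A_a}{1 + K D_a} = \frac{A_s}{1 + K D_s}
\quad\Rightarrow\quad
D_a = \frac{A_a}{A_s} D_s + \frac{A_a/A_s - 1}{K},
% so the slope of the indifference function equals the amount ratio
% A_a / A_s and grows with that ratio.
% Exponential decay: V = A e^{-K D} gives instead
D_a = D_s + \frac{\ln(A_a / A_s)}{K},
% a slope of 1 regardless of the amounts, which is the prediction the
% data contradicted.
```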

9.
Eight pigeons pecked keys under multiple variable-interval two-minute variable-interval two-minute schedules. In Experiment 1, the reinforcers were 2-, 4-, or 8-s access to a food magazine. In Experiments 2 and 3, the reinforcers were grains that had been determined to be most-, moderately-, or non-preferred. Both positive and negative behavioral contrast occurred when the reinforcers in one component were held constant and the duration or type of reinforcer obtained in the other component varied. Undermatching occurred when the relative rate of responding during a component was plotted as a function of the relative duration of the reinforcers in that component.

10.
Rats obtained food-pellet reinforcers by nose poking a lighted key. Experiment 1 examined resistance to extinction following single-schedule training with different variable-interval schedules, ranging from a mean interval of 16 min to 0.25 min. That is, for each schedule, the rats received 20 consecutive daily baseline sessions and then a session of extinction (i.e., no reinforcers). Resistance to extinction (decline in response rate relative to baseline) was negatively related to the rate of reinforcers obtained during baseline, a relation analogous to the partial-reinforcement-extinction effect. A positive relation between these variables emerged, however, when the unit of extinction was taken as the mean interreinforcer interval that had been in effect during training (i.e., as an omitted reinforcer during extinction). In a second experiment, rats received blocks of training sessions, all with the same variable-interval schedule but with a reinforcer of four pellets for some blocks and one pellet for others. Resistance to extinction was greater following training with the larger (four pellets) than with the smaller (one pellet) reinforcer. Taken together, these results support the principle that greater reinforcement during training (e.g., higher rate or larger amount) engenders greater resistance to extinction even when the different conditions of reinforcement are varied between blocks of sessions.
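To illustrate the rescaling of the extinction unit described above (the extinction-session length assumed here is illustrative, not taken from the article):

```python
# Hypothetical illustration of rescaling extinction by omitted reinforcers.
# Two baseline schedules from the range studied: VI 16-min and VI 0.25-min.
session_min = 60                      # assumed session length (illustrative)
for mean_interval in (16.0, 0.25):    # minutes per reinforcer at baseline
    omitted = session_min / mean_interval
    print(f"VI {mean_interval}-min: ~{omitted:.0f} reinforcers omitted "
          f"in a {session_min}-min extinction session")
# The lean schedule (VI 16-min) omits only about 4 expected reinforcers
# where the rich schedule (VI 0.25-min) omits about 240, so responding that
# looks more persistent per minute on the lean schedule can look less
# persistent per omitted reinforcer, which is the reversal reported above.
```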

11.
Six pigeons were trained to respond on two keys, each of which provided reinforcers on an arithmetic variable-interval schedule. These concurrent schedules ran nonindependently with a 2-s changeover delay. Six sets of conditions were conducted. Within each set of conditions the ratio of reinforcers available on the two alternatives was varied, but the arranged overall reinforcer rate remained constant. Each set of conditions used a different overall reinforcer rate, ranging from 0.22 reinforcers per minute to 10 reinforcers per minute. The generalized matching law fit the data from each set of conditions, but sensitivity to reinforcer frequency (a) decreased as the overall reinforcer rate decreased for analyses based on both time allocation and response allocation. Overall response rates did not vary with changes in relative reinforcer rate, but decreased with decreases in overall reinforcer rate. Changeover rates varied as a function of both relative and overall reinforcer rates. However, as explanations based on changeover rate seem unable to deal with the changes in generalized matching sensitivity, discrimination accounts of choice may offer a more promising interpretation.
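The sensitivity parameter a referred to here is the exponent of the generalized matching law, conventionally written in log-ratio form (standard notation, not a formula quoted from the article):

```latex
% Generalized matching law (response-allocation form):
%   B_1, B_2 = response rates on the two keys;
%   R_1, R_2 = obtained reinforcer rates;
%   a = sensitivity to reinforcer frequency; log c = bias.
\log\frac{B_1}{B_2} = a \,\log\frac{R_1}{R_2} + \log c
```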

12.
Humans discount larger delayed rewards less steeply than smaller rewards, whereas no such magnitude effect has been observed in rats (and pigeons). It remains possible that rats' discounting is sensitive to differences in the quality of the delayed reinforcer even though it is not sensitive to amount. To evaluate this possibility, Experiment 1 examined discounting of qualitatively different food reinforcers: highly preferred versus nonpreferred food pellets. Similarly, Experiment 2 examined discounting of highly preferred versus nonpreferred liquid reinforcers. In both experiments, an adjusting-amount procedure was used to determine the amount of immediate reinforcer that was judged to be of equal subjective value to the delayed reinforcer. The amount and quality of the delayed reinforcer were varied across conditions. Discounting was well described by a hyperbolic function, but no systematic effects of the quantity or the quality of the delayed reinforcer were observed.

13.
Two experiments measured pigeons' choices between probabilistic reinforcers and certain but delayed reinforcers. In Experiment 1, a peck on a red key led to a 5-s delay and then a possible reinforcer (with a probability of .2). A peck on a green key led to a certain reinforcer after an adjusting delay. This delay was adjusted over trials so as to estimate an indifference point, or a duration at which the two alternatives were chosen about equally often. In all conditions, red houselights were present during the 5-s delay on reinforced trials with the probabilistic alternative, but the houselight colors on nonreinforced trials differed across conditions. Subjects showed a stronger preference for the probabilistic alternative when the houselights were a different color (white or blue) during the delay on nonreinforced trials than when they were red on both reinforced and nonreinforced trials. These results supported the hypothesis that the value or effectiveness of a probabilistic reinforcer is inversely related to the cumulative time per reinforcer spent in the presence of stimuli associated with the probabilistic alternative. Experiment 2 tested some quantitative versions of this hypothesis by varying the delay for the probabilistic alternative (either 0 s or 2 s) and the probability of reinforcement (from .1 to 1.0). The results were best described by an equation that took into account both the cumulative durations of stimuli associated with the probabilistic reinforcer and the variability in these durations from one reinforcer to the next.
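A rough worked example of the cumulative-time hypothesis, using the probability and delay given for Experiment 1 (the bookkeeping below is my illustration, not a calculation reported in the article):

```latex
% With p = .2 and a 5-s signaled delay, if the red houselights appear only
% on reinforced trials, the expected red-stimulus time per reinforcer is
T_{\text{red}} = 5\ \text{s},
% whereas if they appear on every trial, reinforced or not, it is
T_{\text{red}} = \frac{5\ \text{s}}{.2} = 25\ \text{s}.
% On the hypothesis that value is inversely related to this cumulative
% time, the probabilistic alternative should be worth less in the second
% condition, consistent with the preference difference reported above.
```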

14.
Theories of probabilistic reinforcement.
In three experiments, pigeons chose between two alternatives that differed in the probability of reinforcement and the delay to reinforcement. A peck at a red key led to a delay of 5 s and then a possible reinforcer. A peck at a green key led to an adjusting delay and then a certain reinforcer. This delay was adjusted over trials so as to estimate an indifference point, or a duration at which the two alternatives were chosen about equally often. In Experiments 1 and 2, the intertrial interval was varied across conditions, and these variations had no systematic effects on choice. In Experiment 3, the stimuli that followed a choice of the red key differed across conditions. In some conditions, a red houselight was presented for 5 s after each choice of the red key. In other conditions, the red houselight was present on reinforced trials but not on nonreinforced trials. Subjects exhibited greater preference for the red key in the latter case. The results were used to evaluate four different theories of probabilistic reinforcement. The results were most consistent with the view that the value or effectiveness of a probabilistic reinforcer is determined by the total time per reinforcer spent in the presence of stimuli associated with the probabilistic alternative. According to this view, probabilistic reinforcers are analogous to reinforcers that are delivered after variable delays.

15.
16.
Six pigeons were trained with a chain variable-interval variable-interval schedule on the left key and with reinforcers available on the right key on a single variable-interval schedule arranged concurrently with both links of the chain. All three schedules were separately and systematically varied over a wide range of mean intervals. During these manipulations, the obtained reinforcer rates on constant arranged schedules also frequently changed systematically. Increasing reinforcer rates in Link 2 of the chain increased response rates in both links and decreased response rates in the variable-interval schedule concurrently available with Link 2. Increasing Link-1 reinforcer rates increased Link-1 response rates and decreased Link-2 response rates. Increasing reinforcer rates on the right-key schedule decreased response rates in Link 1 of the chain but did not affect the rate in Link 2. The results extend and amplify previous analyses of chain-schedule performance and help define the effects that a quantitative model must describe. However, the complexity of the results, and the fact that constant arranged reinforcer schedules did not necessarily lead to constant obtained reinforcer rates, precluded a quantitative analysis.

17.
An adjusting-amount procedure was used to measure discounting of reinforcer value by delay. Eight rats chose between a varying amount of immediate water and a fixed amount of water given after a delay. The amount of immediate water was systematically adjusted as a function of the rats' previous choices. This procedure was used to determine the indifference point at which each rat chose the immediate amount and the delayed amount with equal frequency. The amount of immediate water at this indifference point was used to estimate the value of the delayed amount of water. In Experiment 1, the effects of daily changes in the delay to the fixed reinforcer (100 microliters of water delivered after 0, 2, 4, 8, or 16 s) were tested. Under these conditions, the rats reached indifference points within the first 30 trials of each 60-trial session. In Experiment 2, the effects of water deprivation level on discounting of value by delay were assessed. Altering water deprivation level affected the speed of responding but did not affect delay discounting. In Experiment 3, the effects of varying the magnitude of the delayed water (100, 150, and 200 microliters) were tested. There was some tendency for the discounting function to be steeper for larger than for smaller reinforcers, although this difference did not reach statistical significance. In all three experiments, the obtained discount functions were well described by a hyperbolic function. These experiments demonstrate that the adjusting-amount procedure provides a useful tool for measuring the discounting of reinforcer value by delay.
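A minimal sketch of how an adjusting-amount titration of this kind can be run. The abstract does not specify the authors' adjustment rule, so the step size, trial count, and function names below are illustrative assumptions:

```python
# Minimal adjusting-amount titration. choose() stands in for a subject's
# choice on one trial and would be replaced by real behavior; the step size
# and trial count are illustrative, not the published parameters.
def titrate_immediate_amount(choose, delayed_ul=100, start_ul=50,
                             step_ul=5, n_trials=60):
    immediate_ul = start_ul
    for _ in range(n_trials):
        picked_immediate = choose(immediate_ul, delayed_ul)
        # If the immediate amount was chosen, make it smaller next trial;
        # if the delayed amount was chosen, make the immediate amount
        # larger. The amount around which choices stabilize near 50/50
        # estimates the indifference point (the delayed reinforcer's value).
        immediate_ul += -step_ul if picked_immediate else step_ul
        immediate_ul = max(0, min(delayed_ul, immediate_ul))
    return immediate_ul

# Example with a simulated subject that prefers whichever option has the
# larger hyperbolic value (K = 0.1, delay = 8 s for the delayed option):
print(titrate_immediate_amount(lambda imm, dly: imm > dly / (1 + 0.1 * 8)))
```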

18.
We conducted two studies extending basic matching research on self-control and impulsivity to the investigation of choices of students diagnosed as seriously emotionally disturbed. In Study 1 we examined the interaction between unequal rates of reinforcement and equal versus unequal delays to reinforcer access on performance of concurrently available sets of math problems. The results of a reversal design showed that when delays to reinforcer access were the same for both response alternatives, the time allocated to each was approximately proportional to obtained reinforcement. When the delays to reinforcer access differed between the response alternatives, there was a bias toward the response alternative and schedule with the shorter delays, suggesting impulsivity (i.e., immediate reinforcer access overrode the effects of rate of reinforcement). In Study 2 we examined the interactive effects of reinforcer rate, quality, and delay. Conditions involving delayed access to the high-quality reinforcers on the rich schedule (with immediate access to low-quality reinforcers earned on the lean schedule) were alternated with immediate access to low-quality reinforcers on the rich schedule (with delayed access to high-quality reinforcers on the lean schedule) using a reversal design. With 1 student, reinforcer quality overrode the effects of both reinforcer rate and delay to reinforcer access. The other student tended to respond exclusively to the alternative associated with immediate access to reinforcers. The studies demonstrate a methodology based on matching theory for determining influential dimensions of reinforcers governing individuals' choices.

19.
Our research addressed the question of whether sensitivity to relative reinforcer magnitude in concurrent chains depends on the distribution of reinforcer delays when the terminal-link schedules are equal. In Experiment 1, 12 pigeons responded in a two-component procedure. In both components, the initial links were concurrent variable-interval 40-s variable-interval 40-s, and the terminal links were both 20-s interval schedules in which responses were reinforced by either 4 s of grain in one or 2 s of grain in the other. The only difference between the components was whether the terminal-link schedules were fixed interval or variable interval. For all subjects, the relative rate of responding in the initial links for the terminal link that produced the 4-s reinforcer was greater when the terminal links were fixed-interval schedules than when they were variable-interval schedules. This result is contrary to the prediction of Grace's (1994) contextual choice model, but is consistent with both Mazur's (2001) hyperbolic value-added model and Killeen's (1985) incentive theory. In Experiment 2, 4 pigeons responded in a concurrent-chains procedure in which 4-s or 2-s reinforcers were provided independently of responding according to equal fixed-time or mixed-time schedules. Preference for the 4-s reinforcer increased as the variability of the intervals comprising the mixed-time schedules was decreased. Generalized-matching sensitivity of initial-link response allocation to relative reinforcer magnitude was proportional to the geometric mean of the terminal-link delays.

20.
Students with learning difficulties participated in two studies that analyzed the effects of problem difficulty and reinforcer quality upon time allocated to two sets of arithmetic problems reinforced according to a concurrent variable-interval 30-s variable-interval 120-s schedule. In Study 1, high- and low-difficulty arithmetic problems were systematically combined with rich and lean concurrent schedules (nickels used as reinforcers) across conditions using a single-subject design. The pairing of the high-difficulty problems with the richer schedule failed to offset time allocated to that alternative. Study 2 investigated the interactive effects of problem difficulty and reinforcer quality (nickels vs. program money) upon time allocation to arithmetic problems maintained by the concurrent schedules of reinforcement. Unlike problem difficulty, the pairing of the lesser quality reinforcer (program money) with the richer schedule reduced the time allocated to that alternative. The magnitude of this effect was greatest when combined with the low-difficulty problems. These studies have important implications for a matching law analysis of asymmetrical reinforcement variables that influence time allocation.
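For context, the baseline allocation against which these asymmetries were evaluated follows from matching to the scheduled reinforcer rates (a standard calculation, not one quoted from the article):

```latex
% Concurrent VI 30-s VI 120-s schedules arrange roughly 2 and 0.5
% reinforcers per minute, so strict matching of time allocation predicts
\frac{T_{\text{rich}}}{T_{\text{rich}} + T_{\text{lean}}}
  = \frac{2}{2 + 0.5} = 0.8,
% i.e., about 80% of time on the richer alternative before problem
% difficulty or reinforcer quality shifts allocation away from it.
```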
