Similar articles
20 similar articles retrieved
1.
Studies have demonstrated that rats will increase their operant rate of response for a low-valued reinforcer if a high-valued reinforcer will be available later in the session. Research on this positive induction effect suggests that at least three factors account for its appearance: premature responding for the not-yet-available high-valued reinforcer, an increase in the reinforcing value of the low-valued reinforcer, and responding controlled (e.g., elicited) by the response manipulandum. The present experiment tested whether the size of induction could be systematically altered by varying these factors. Twenty-four rats responded in sessions in which 1% sucrose or a food pellet served as the reinforcer in the first or second half of the session. In some sessions, the same operant response was required in both halves of the session; in others, different responses were required. Half of the rats received both reinforcers in a single food trough, while the other half received the reinforcers of the two session halves in different food troughs. Results demonstrated that a large positive induction effect was observed when all of the above factors were present to contribute to the effect (i.e., high-valued reinforcer upcoming, earned by making the same response, delivered to the same food trough). A small, but significant, induction effect remained when all three were absent (i.e., high-valued reinforcer delivered first, earned by making a different response, delivered to a different food trough). The results support the idea that these three factors are the main contributors to the appearance of this positive induction effect. However, at least one additional factor must also contribute.

2.
Sensitivity to reinforcer duration in a self-control procedure (total citations: 2; self-citations: 2; other citations: 0)
In a concurrent-chains procedure, pigeons' responses on left and right keys were followed by reinforcers of different durations at different delays following the choice responses. Three pairs of reinforcer delays were arranged in each session, and reinforcer durations were varied over conditions. In Experiment 1 reinforcer delays were unequal, and in Experiment 2 reinforcer delays were equal. In Experiment 1 preference reversal was demonstrated in that an immediate short reinforcer was chosen more frequently than a longer reinforcer delayed 6 s from the choice, whereas the longer reinforcer was chosen more frequently when delays to both reinforcers were lengthened. In both experiments, choice responding was more sensitive to variations in reinforcer duration at overall longer reinforcer delays than at overall shorter reinforcer delays, independently of whether fixed-interval or variable-interval schedules were arranged in the choice phase. We concluded that preference reversal results from a change in sensitivity of choice responding to ratios of reinforcer duration as the delays to both reinforcers are lengthened.
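The preference reversal described above is the pattern a hyperbolic delay-discounting account predicts. As a minimal sketch, assuming a value function of the form V = A/(1 + kD) with arbitrary illustrative parameters (not values estimated in this study):

```python
# Illustrative sketch of preference reversal under hyperbolic discounting.
# V = A / (1 + k * D): A = reinforcer duration (amount), D = delay (s),
# k = discounting-rate parameter. All numbers are hypothetical.

def value(amount, delay, k=1.0):
    """Discounted value of a reinforcer of the given amount at the given delay."""
    return amount / (1.0 + k * delay)

SHORT, LONG = 2.0, 6.0            # reinforcer durations (s), hypothetical
for added_delay in (0.0, 10.0):   # lengthen both delays by the same amount
    v_short = value(SHORT, 0.0 + added_delay)  # short reinforcer, otherwise immediate
    v_long = value(LONG, 6.0 + added_delay)    # long reinforcer, delayed 6 s more
    choice = "short-immediate" if v_short > v_long else "long-delayed"
    print(f"common delay added: {added_delay:4.0f} s -> prefer {choice}")
# With no added delay the short reinforcer wins; adding the same delay to both
# alternatives reverses preference toward the longer reinforcer.
```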

3.
In the present experiment, the authors investigated the idea that within-session changes in operant response rates occur because subjects sensitize and then habituate to the reinforcer. If that is true, then altering an aspect of the reinforcer within the session should alter the observed within-session responding. The authors tested that idea by having rats press a lever for 2 food-pellet reinforcers delivered by a variable-interval 120-s schedule during 60-min baseline sessions. In treatment conditions, the magnitude of the reinforcer was halved (1 pellet) or doubled (4 pellets) 10, 20, 30, 40, or 50 min into the session. That magnitude of reinforcement then remained in effect for the rest of the session. Altering reinforcer magnitude altered the rates of responding within the session in a fashion consistent with the habituation explanation; that is, response rates increased, relative to baseline, when the magnitude of reinforcement was increased, and decreased when the magnitude was decreased. Those results were seemingly inconsistent with the competing idea that within-session decreases in response rates are produced by satiation.

4.
An adjusting-amount procedure was used to measure discounting of reinforcer value by delay. Eight rats chose between a varying amount of immediate water and a fixed amount of water given after a delay. The amount of immediate water was systematically adjusted as a function of the rats' previous choices. This procedure was used to determine the indifference point at which each rat chose the immediate amount and the delayed amount with equal frequency. The amount of immediate water at this indifference point was used to estimate the value of the delayed amount of water. In Experiment 1, the effects of daily changes in the delay to the fixed reinforcer (100 microliters of water delivered after 0, 2, 4, 8, or 16 s) were tested. Under these conditions, the rats reached indifference points within the first 30 trials of each 60-trial session. In Experiment 2, the effects of water deprivation level on discounting of value by delay were assessed. Altering water deprivation level affected the speed of responding but did not affect delay discounting. In Experiment 3, the effects of varying the magnitude of the delayed water (100, 150, and 200 microliters) were tested. There was some tendency for the discounting function to be steeper for larger than for smaller reinforcers, although this difference did not reach statistical significance. In all three experiments, the obtained discount functions were well described by a hyperbolic function. These experiments demonstrate that the adjusting-amount procedure provides a useful tool for measuring the discounting of reinforcer value by delay.
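As a rough sketch of how an adjusting-amount titration of this kind can work, assuming a simple up-down rule with a hypothetical step size (the published procedure may differ in detail):

```python
# Sketch of an adjusting-amount ("titration") rule and of the hyperbolic
# function used to describe the resulting indifference points. The step size,
# starting amount, and k are hypothetical.

def next_immediate_amount(immediate_ul, chose_immediate, step_ul=5):
    """Adjust the immediate-water amount (microliters) for the next trial."""
    if chose_immediate:
        return max(0, immediate_ul - step_ul)  # immediate preferred: make it smaller
    return immediate_ul + step_ul              # delayed preferred: make immediate larger

def hyperbolic_value(amount_ul, delay_s, k):
    """V = A / (1 + k * D): the discounting function fitted to indifference points."""
    return amount_ul / (1.0 + k * delay_s)

# The immediate amount at which up and down adjustments balance estimates the
# present value of the delayed 100-microliter reinforcer at that delay.
```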

5.
Many drugs of abuse produce changes in impulsive choice, that is, choice for a smaller-sooner reinforcer over a larger-later reinforcer. Because the alternatives differ in both delay and amount, it is not clear whether these drug effects are due to the differences in reinforcer delay or amount. To isolate the effects of delay, we used a titrating delay procedure. In phase 1, 9 rats made discrete choices between variable delays (1 or 19 s, equal probability of each) and a delay to a single food pellet. The computer titrated the delay to a single food pellet until the rats were indifferent between the two options. This indifference delay was used as the starting value for the titrating delay for all future sessions. We next evaluated the acute effects of nicotine (subcutaneous 1.0, 0.3, 0.1, and 0.03 mg/kg) on choice. If nicotine increases delay discounting, it should have increased preference for the variable delay. Instead, nicotine had very little effect on choice. In a second phase, the titrated-delay alternative produced three food pellets instead of one, while the variable-delay (1 s or 19 s) alternative continued to produce a single pellet. Under this procedure, nicotine increased preference for the one-pellet alternative. Nicotine-induced changes in impulsive choice are therefore likely due to differences in reinforcer amount rather than differences in reinforcer delay. In addition, it may be necessary to include an amount sensitivity parameter in any mathematical model of choice when the alternatives differ in reinforcer amount.
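For intuition about why a variable 1-s/19-s delay can be preferred to its 10-s arithmetic mean, here is an illustrative calculation under a hyperbolic value function V = A/(1 + kD); the k value is hypothetical and not taken from this study:

```python
# Illustrative only: under V = A / (1 + k * D), the variable-delay option is
# worth the equal-weighted mean of its two outcomes. K is hypothetical.

K = 0.2

def v(amount, delay):
    return amount / (1.0 + K * delay)

v_variable = 0.5 * v(1, 1) + 0.5 * v(1, 19)   # one pellet after 1 s or 19 s
# Fixed delay d at which a single pellet matches the variable option:
# 1 / (1 + K * d) = v_variable  =>  d = (1 / v_variable - 1) / K
d_indifference = (1.0 / v_variable - 1.0) / K
print(round(d_indifference, 1))  # about 4.6 s, well below the 10-s mean delay
```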

6.
Rats pressed keys or levers for water reinforcers delivered by several multiple variable-interval schedules. The programmed rate of reinforcement varied from 15 to 240 reinforcers per hour in different conditions. Responding usually increased and then decreased within experimental sessions. As for food reinforcers, the within-session changes in both lever and key pressing were smaller, peaked later, and were more symmetrical around the middle of the session for lower than for higher rates of reinforcement. When schedules provided high rates of reinforcement, some quantitative differences appeared in the within-session changes for lever and key pressing and for food and water. These results imply that basically similar factors produce within-session changes in responding for lever and key pressing and for food and water. The nature of the reinforcer and the choice of response can also influence the quantitative properties of within-session changes at high rates of reinforcement. Finally, the results show that the application of Herrnstein's (1970) equation to rates of responding averaged over the session requires careful consideration.
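For reference, Herrnstein's (1970) single-schedule hyperbola has the form shown below; the parameter values in this sketch are purely illustrative:

```python
# Herrnstein's (1970) single-schedule hyperbola: R = k * r / (r + r_e), where
# R = response rate, r = obtained reinforcement rate, k = asymptotic response
# rate, and r_e = rate of background reinforcement. Parameters are illustrative.

def herrnstein(r, k=100.0, r_e=20.0):
    """Predicted response rate for a reinforcement rate r (reinforcers per hour)."""
    return k * r / (r + r_e)

# The caution in the abstract: fitting this equation to a response rate averaged
# over a whole session can be misleading when responding rises and then falls
# systematically within the session.
```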

7.
Human subjects were exposed to a concurrent-chains schedule in which reinforcer amounts, delays, or both were varied in the terminal links, and consummatory responses were required to receive points that were later exchangeable for money. Two independent variable-interval 30-s schedules were in effect during the initial links, and delay periods were defined by fixed-time schedules. In Experiment 1, subjects were exposed to three different pairs of reinforcer amounts and delays, and sensitivity to reinforcer amount and delay was determined based on the generalized matching law. The relative responding (choice) of most subjects was more sensitive to reinforcer amount than to reinforcer delay. In Experiment 2, subjects chose between immediate smaller reinforcers and delayed larger reinforcers in five conditions with and without timeout periods that followed a shorter delay, in which reinforcer amounts and delays were combined to make different predictions based on local reinforcement density (i.e., points per delay) or overall reinforcement density (i.e., points per total time). In most conditions, subjects' choices were qualitatively in accord with the predictions from the overall reinforcement density calculated by the ratio of reinforcer amount and total time. Therefore, the overall reinforcement density appears to influence the preference of humans in the present self-control choice situation.
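A hedged sketch of the concatenated generalized matching law commonly used for analyses like Experiment 1 (the exact formulation fitted in the study may differ):

```python
# Concatenated generalized matching law, with delay entered as immediacy
# (so the delay ratio is inverted). Sensitivities s_amount and s_delay, and
# bias b, are free parameters; this is a sketch, not the study's exact model.

import math

def log_choice_ratio(a1, a2, d1, d2, s_amount, s_delay, bias=1.0):
    """log10(B1/B2) predicted from amounts (a), delays (d), sensitivities, and bias."""
    return (s_amount * math.log10(a1 / a2)
            + s_delay * math.log10(d2 / d1)
            + math.log10(bias))

# "More sensitive to amount than to delay" corresponds to s_amount > s_delay.
```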

8.
9.
Pigeons' discounting of probabilistic and delayed food reinforcers was studied using adjusting-amount procedures. In the probability discounting conditions, pigeons chose between an adjusting number of food pellets contingent on a single key peck and a larger, fixed number of pellets contingent on completion of a variable-ratio schedule. In the delay discounting conditions, pigeons chose between an adjusting number of pellets delivered immediately and a larger, fixed number of pellets delivered after a delay. Probability discounting (i.e., subjective value as a function of the odds against reinforcement) was as well described by a hyperboloid function as delay discounting was (i.e., subjective value as a function of the time until reinforcement). As in humans, the exponents of the hyperboloid function when it was fitted to the probability discounting data were lower than the exponents of the hyperboloid function when it was fitted to the delay discounting data. The subjective values of probabilistic reinforcers were strongly correlated with predictions based on simply substituting the average delay to their receipt in each probabilistic reinforcement condition into the hyperboloid discounting function. However, the subjective values were systematically underestimated using this approach. Using the discounting function proposed by Mazur (1989), which takes into account the variability in the delay to the probabilistic reinforcers, the accuracy with which their subjective values could be predicted was increased. Taken together, the present findings are consistent with Rachlin's (Rachlin, 1990; Rachlin, Logue, Gibbon, & Frankel, 1986) hypothesis that choice involving repeated gambles may be interpreted in terms of the delays to the probabilistic reinforcers.
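The hyperboloid discounting functions referred to here are typically written as follows; this sketch uses placeholder parameters rather than the estimates reported in the study:

```python
# Hyperboloid discounting functions of the form used in such analyses;
# parameter values are placeholders.

def delay_discounted_value(amount, delay, k, s):
    """Subjective value of a reward received after `delay`."""
    return amount / (1.0 + k * delay) ** s

def probability_discounted_value(amount, p, h, s):
    """Subjective value of a reward delivered with probability p."""
    odds_against = (1.0 - p) / p
    return amount / (1.0 + h * odds_against) ** s

# The "exponents" compared in the abstract are the s parameters of these
# two equations, fitted separately to the delay and probability data.
```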

10.
Three rats earned their daily food ration by responding during individual trials either on a lever that delivered one food pellet immediately or on a second lever that delivered three pellets after a delay that was continuously adjusted to ensure substantial responding to both alternatives. Choice of the delayed reinforcer increased when the number of trials per session was reduced. This result suggests that models seeking closure on choice effects must include a parameter reflecting how preference changes with sessionwide income. Moreover, models positing that reinforcer probability and immediacy (1/delay) function equivalently in choice are called into question by the finding that probability and immediacy produce opposing effects when income level is changed.

11.
Choice between single and multiple delayed reinforcers. (total citations: 7; self-citations: 5; other citations: 2)
Pigeons chose between alternatives that differed in the number of reinforcers and in the delay to each reinforcer. A peck on a red key produced the same consequences on every trial within a condition, but between conditions the number of reinforcers varied from one to three and the reinforcer delays varied between 5 s and 30 s. A peck on a green key produced a delay of adjustable duration and then a single reinforcer. The green-key delay was increased or decreased many times per session, depending on a subject's previous choices, which permitted estimation of an indifference point, or a delay at which a subject chose each alternative about equally often. The indifference points decreased systematically with more red-key reinforcers and with shorter red-key delays. The results did not support the suggestion of Moore (1979) that multiple delayed reinforcers have no effect on preference unless they are closely grouped. The results were well described in quantitative detail by a simple model stating that each of a series of reinforcers increases preference, but that a reinforcer's effect is inversely related to its delay. The success of this model, which considers only delay of reinforcement, suggested that the overall rate of reinforcement for each alternative had no effect on choice between those alternatives.
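A minimal sketch of the additive model described in the final sentences, assuming equal-sized reinforcers, hyperbolic decay with delay, and a hypothetical decay parameter K:

```python
# Sketch of the additive model described above: each reinforcer in a series
# adds value, and each contribution declines hyperbolically with its own
# delay. The decay parameter K and the delays are hypothetical.

def series_value(delays_s, K=0.2):
    """Total value of a series of equal reinforcers at the given delays (s)."""
    return sum(1.0 / (1.0 + K * d) for d in delays_s)

red_key = series_value([5, 10, 15])      # e.g., three reinforcers, 5 s apart
green_key = series_value([8.0])          # single reinforcer after the adjusting delay
# The adjusting procedure moves the green-key delay until the two values match;
# that delay is the indifference point against which the model is evaluated.
```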

12.
In a discrete-trials procedure with pigeons, a response on a green key led to a 4-s delay (during which green houselights were lit) and then a reinforcer might or might not be delivered. A response on a red key led to a delay of adjustable duration (during which red houselights were lit) and then a certain reinforcer. The delay was adjusted so as to estimate an indifference point--a duration for which the two alternatives were equally preferred. Once the green key was chosen, a subject had to continue to respond on the green key until a reinforcer was delivered. Each response on the green key, plus the 4-s delay that followed every response, was called one "link" of the green-key schedule. Subjects showed much greater preference for the green key when the number of links before reinforcement was variable (averaging four) than when it was fixed (always exactly four). These findings are consistent with the view that probabilistic reinforcers are analogous to reinforcers delivered after variable delays. When successive links were separated by 4-s or 8-s "interlink intervals" with white houselights, preference for the probabilistic alternative decreased somewhat for 2 subjects but was unaffected for the other 2 subjects. When the interlink intervals had the same green houselights that were present during the 4-s delays, preference for the green key decreased substantially for all subjects. These results provided mixed support for the view that preference for a probabilistic reinforcer is inversely related to the duration of conditioned reinforcers that precede the delivery of food.

13.
Operant responding often changes systematically within experimental sessions. McSweeney, Hinson, and Cannon (1996) argued that sensitization and habituation produce within-session changes in responding. The present study tested two predictions of the sensitization–habituation explanation. In two experiments, rats pressed a lever for reinforcers delivered by a multiple variable-interval 15-s variable-interval 15-s schedule. In Experiment 1, the variety of reinforcers delivered during the session was manipulated by varying the percentage of programmed reinforcers replaced with qualitatively different reinforcers from 0 to 75%, in five different conditions. In Experiment 2, the intensity of the reinforcer was manipulated by varying the concentration of sucrose in the sucrose and water solution used as the reinforcer from 0 to 30%, in five different conditions. Increasing the variety or the intensity of the reinforcers slowed the within-session decrease in responding. The results are consistent with the predictions of a sensitization–habituation explanation of within-session changes in responding.

14.
Five rats and 4 pigeons responded for food delivered by several concurrent variable-interval schedules. The sum of the rates of reinforcement programmed for the two components varied from 15 to 480 reinforcers per hour in different conditions. Rates of responding usually changed within the experimental session in a similar manner for the two components of each concurrent schedule. The within-session changes were similar to previously reported changes during simple schedules that provided rates of reinforcement equal to the sum of all reinforcers obtained from the concurrent schedules. The number of changeovers also changed within sessions in a manner similar to the changes in instrumental responding. These results suggest that changeovers are governed by the same variables that govern instrumental responding. They also suggest that the within-session change in responding during each component of a concurrent schedule is determined by approximately the sum of the reinforcers obtained from both components when both components provide the same type of reinforcer.

15.
Results of previous research on the effects of noncontingent reinforcement (NCR) have been inconsistent when magnitude of reinforcement was manipulated. We attempted to clarify the influence of NCR magnitude by including additional controls. In Study 1, we examined the effects of reinforcer consumption time by comparing the same magnitude of NCR when session time was and was not corrected to account for reinforcer consumption. Lower response rates were observed when session time was not corrected, indicating that reinforcer consumption can suppress response rates. In Study 2, we first selected varying reinforcer magnitudes (small, medium, and large) on the basis of corrected response rates observed during a contingent reinforcement condition and then compared the effects of these magnitudes during NCR. One participant exhibited lower response rates when large-magnitude reinforcers were delivered; the other ceased responding altogether even when small-magnitude reinforcers were delivered. We also compared the effects of the same NCR magnitude (medium) during 10-min and 30-min sessions. Lower response rates were observed during 30-min sessions, indicating that the number of reinforcers consumed across a session can have the same effect as the number consumed per reinforcer delivery. These findings indicate that, even when response rate is corrected to account for reinforcer consumption, larger magnitudes of NCR (defined on either a per-delivery or per-session basis) result in lower response rates than do smaller magnitudes.
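One plausible way to compute the consumption-time correction described in Study 1 (an assumption for illustration; the study's exact correction is not specified in this abstract):

```python
# One plausible form of the consumption-time correction (illustrative only).

def corrected_response_rate(responses, session_s, reinforcers, consumption_s_each):
    """Responses per minute with reinforcer-consumption time excluded."""
    available_s = session_s - reinforcers * consumption_s_each
    return responses / (available_s / 60.0)

def uncorrected_response_rate(responses, session_s):
    return responses / (session_s / 60.0)

# Comparing the two rates separates genuine response suppression from the time
# simply lost to consuming the reinforcers.
```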

16.
An adjusting-delay procedure was used to study the choices of pigeons and rats when both delay and amount of reinforcement were varied. In different conditions, the choice alternatives included one versus two reinforcers, one versus three reinforcers, and three versus two reinforcers. The delay to one alternative (the standard alternative) was kept constant in a condition, and the delay to the other (the adjusting alternative) was increased or decreased many times a session so as to estimate an indifference point, a delay at which the two alternatives were chosen about equally often. Indifference functions were constructed by plotting the adjusting delay as a function of the standard delay for each pair of reinforcer amounts. The experiments were designed to test the prediction of a hyperbolic decay equation that the slopes of the indifference functions should increase as the ratio of the two reinforcer amounts increased. Consistent with the hyperbolic equation, the slopes of the indifference functions depended on the ratios of the two reinforcer amounts for both pigeons and rats. These results were not compatible with an exponential decay equation, which predicts slopes of 1 regardless of the reinforcer amounts. Combined with other data, these findings provide further evidence that delay discounting is well described by a hyperbolic equation for both species, but not by an exponential equation. Quantitative differences in the y-intercepts of the indifference functions from the two species suggested that the rate at which reinforcer strength decreases with increasing delay may be four or five times slower for rats than for pigeons.
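The slope predictions tested here follow from straightforward algebra on the two decay equations; the derivation below is a restatement for reference, using V = A/(1 + Kd) for hyperbolic decay and V = Ae^{-Kd} for exponential decay:

```latex
% A = reinforcer amount, d = delay, K = decay-rate parameter;
% subscript a = adjusting alternative, s = standard alternative.
\begin{align*}
\text{Hyperbolic: }\quad & \frac{A_a}{1 + K d_a} = \frac{A_s}{1 + K d_s}
  \;\Longrightarrow\; d_a = \frac{A_a}{A_s}\,d_s + \frac{A_a/A_s - 1}{K}
  \quad\text{(slope } = A_a/A_s\text{)} \\[4pt]
\text{Exponential: }\quad & A_a e^{-K d_a} = A_s e^{-K d_s}
  \;\Longrightarrow\; d_a = d_s + \frac{\ln(A_a/A_s)}{K}
  \quad\text{(slope } = 1\text{)}
\end{align*}
```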

17.
In a discrete-trial procedure, pigeons could choose between 2-s and 6-s access to grain by making a single key peck. In Phase 1, the pigeons obtained both reinforcers by responding on fixed-ratio schedules. In Phase 2, they received both reinforcers after simple delays, arranged by fixed-time schedules, during which no responses were required. In Phase 3, the 2-s reinforcer was available through a fixed-time schedule and the 6-s reinforcer was available through a fixed-ratio schedule. In all conditions, the size of the delay or ratio leading to the 6-s reinforcer was systematically increased or decreased several times each session, permitting estimation of an "indifference point," the schedule size at which a subject chose each alternative equally often. By varying the size of the schedule for the 2-s reinforcer across conditions, several such indifference points were obtained from both fixed-time conditions and fixed-ratio conditions. The resulting "indifference curves" from fixed-time conditions and from fixed-ratio conditions were similar in shape, and they suggested that a hyperbolic equation describes the relation between ratio size and reinforcement value as well as the relation between reinforcer delay and its reinforcement value. The results from Phase 3 showed that subjects chose fixed-time schedules over fixed-ratio schedules that generated the same average times between a choice response and reinforcement.

18.
Previous research has demonstrated that factors such as reinforcer frequency, amount, and delay have similar effects on resistance to change and preference. In the present study, 4 boys with autism made choices between a constant reinforcer (one that was the same food item every trial) and a varied food reinforcer (one that varied randomly between three possible food items). For all 4 boys, varied reinforcers were preferred over constant reinforcers, and they maintained higher response rates than constant reinforcers. In addition, when a distraction (a video clip) was introduced, responding maintained by varied reinforcers was more resistant to distraction than responding maintained by constant reinforcers. Thus, the present experiment extended the generality of the relation between preference and resistance to change to variation in reinforcer quality.

19.
Three severely mentally retarded adolescents were studied under discrete-trial procedures in which a choice was arranged between edible reinforcers that differed in magnitude and, in some conditions, delay. In the absence of delays the larger reinforcer was consistently chosen. Under conditions in which the smaller reinforcer was not delayed, increasing the delay to delivery of the larger reinforcer decreased the percentage of trials in which that reinforcer was chosen. All subjects directed the majority of choice responses to the smaller reinforcer when the larger reinforcer was sufficiently delayed, although the value at which this occurred differed across subjects. Under conditions in which the larger reinforcer initially was sufficiently delayed to result in preference for the smaller one, progressively increasing the delay to both reinforcers in 5-s increments increased the percentage of trials on which the larger reinforcer was chosen. At sufficiently long delays, 2 of the subjects consistently chose the larger, but more delayed, reinforcer, and the 3rd subject chose that reinforcer on half of the trials. These results are consistent with the findings of prior studies in which adult humans responded to terminate noise and pigeons responded to produce food.

20.
Previous experiments have shown that unsignaled delayed reinforcement decreases response rates and resistance to change. However, the effects of different delays to reinforcement on underlying response structure have not been investigated in conjunction with tests of resistance to change. In the present experiment, pigeons responded on a three-component multiple variable-interval schedule for food presented immediately, following brief (0.5 s), or following long (3 s) unsignaled delays of reinforcement. Baseline response rates were lowest in the component with the longest delay; they were about equal with immediate and briefly delayed reinforcers. Resistance to disruption by presession feeding, response-independent food during the intercomponent interval, and extinction was slightly but consistently lower as delays increased. Because log survivor functions of interresponse times (IRTs) deviated from simple modes of bout initiations and within-bout responding, an IRT-cutoff method was used to examine underlying response structure. These analyses suggested that baseline rates of initiating bouts of responding decreased as scheduled delays increased, and within-bout response rates tended to be lower in the component with immediate reinforcers. The number of responses per bout was not reliably affected by reinforcer delay, but tended to be highest with brief delays when total response rates were higher in that component. Consistent with previous findings, resistance to change of overall response rate was highly correlated with resistance to change of bout-initiation rates but not with within-bout responding. These results suggest that unsignaled delays to reinforcement affect resistance to change through changes in the probability of initiating a response bout rather than through changes in the underlying response structure.
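A sketch of the IRT-cutoff style of analysis mentioned above, assuming the usual convention that IRTs longer than a cutoff mark bout initiations; the cutoff value and function names here are hypothetical:

```python
# Sketch of an IRT-cutoff analysis: IRTs above a cutoff are treated as bout
# initiations, shorter IRTs as within-bout responses. The cutoff is hypothetical.

import math

def log_survivor(irts):
    """(t, log10 proportion of IRTs longer than t) pairs for a log-survivor plot."""
    irts = sorted(irts)
    n = len(irts)
    return [(t, math.log10((n - i - 1) / n)) for i, t in enumerate(irts[:-1])]

def split_by_cutoff(irts, cutoff_s=1.0):
    """Separate IRTs into bout-initiation IRTs (> cutoff) and within-bout IRTs."""
    initiations = [t for t in irts if t > cutoff_s]
    within_bout = [t for t in irts if t <= cutoff_s]
    return initiations, within_bout

# Bout-initiation rate and within-bout rate can then be tracked separately
# across disruptors, as in the resistance-to-change comparisons above.
```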
