Similar Articles
20 similar articles found.
1.
We investigated the duration of lever pressing by rats when the delivery of appetitive reinforcers was contingent upon response duration. In the first experiment, response durations increased when duration requirements were imposed, and they decreased when duration requirements were removed. This effect occurred whether reinforcers were immediate or delayed by 8 s. In order to maintain the integrity of the delay intervals, reinforcer delivery was dependent upon both lever depression and release. In a second experiment, lever depression only and a response duration of at least 4 s were required for reinforcer delivery. Compared to immediate reinforcement conditions, delayed reinforcers increased both variability and the length of the maximum response durations. In a third experiment, immediate reinforcers were delivered contingent upon lever depression and release under a variety of duration requirements. Median lever‐press durations tracked the contingencies rapidly. Across all three experiments, rats emitted numerous response durations that were too short to satisfy the reinforcer requirements, and bimodal distributions similar to those produced by differential reinforcement of low rate schedules were evident for most rats. In many respects, response duration responds to reinforcement parameters in a fashion similar to rate of discrete responding, but an examination of this continuous dimension of behavior may provide additional information about environment–behavior relationships.
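Purely as an illustration of the duration contingency described above, here is a minimal sketch (a hypothetical helper, not the authors' apparatus code) of how a single lever hold could be scored against a duration requirement, with the reinforcer timed from lever release so the delay interval stays intact; the 4-s requirement and 8-s delay come from the abstract, everything else is assumed:

```python
# Hypothetical sketch: score one lever hold against a duration requirement and,
# if it qualifies, schedule the (possibly delayed) reinforcer from lever release.
def evaluate_hold(press_time, release_time, required_duration=4.0, delay=8.0):
    """Return the time at which the reinforcer is delivered, or None if the
    hold was too short to satisfy the requirement."""
    held = release_time - press_time
    if held < required_duration:
        return None                    # hold too short: no reinforcer
    return release_time + delay        # delay timed from lever release

# Example: a 5.2-s hold beginning at t = 10 s earns food at t = 23.2 s.
print(evaluate_hold(10.0, 15.2))
```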

2.
3.
Recent studies have demonstrated that the expectation of reward delivery has an inverse relationship with operant behavioral variation (e.g., Stahlman, Roberts, & Blaisdell, 2010). Research thus far has largely focused on one aspect of reinforcement – the likelihood of food delivery. In two experiments with pigeons, we examined the effect of two other aspects of reinforcement: the magnitude of the reward and the temporal delay between the operant response and outcome delivery. In the first experiment, we found that a large reward magnitude resulted in reduced spatiotemporal variation in pigeons’ pecking behavior. In the second experiment, we found that a 4-s delay between response-dependent trial termination and reward delivery increased variation in behavior. These results indicate that multiple dimensions of the reinforcer modulate operant response variation.

4.
Key pecking of three pigeons was maintained in separate components of a multiple schedule by either immediate reinforcement (i.e., tandem variable-time fixed-interval schedule) or unsignalled delayed reinforcement (i.e., tandem variable-interval fixed-time schedule). The relative rate of food delivery was equal across components, and the absolute rate differed across conditions. Immediate reinforcement always generated higher response rates than did unsignalled delayed reinforcement. Then, variable-time schedules of food delivery replaced the contingencies just described such that food was delivered at the same rate but independently of responding. In most cases, response rates decreased to near-zero levels. In addition, response persistence was not systematically different between multiple-schedule components across pigeons. The implications of the results for the concepts of response strength and the response-reinforcer relation are noted.
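As a rough illustration of why the two tandem schedules above differ in response-reinforcer contiguity, here is an assumed sketch in which the tandem variable-time fixed-interval schedule delivers food at the moment of the response that completes the final interval, whereas the tandem variable-interval fixed-time schedule delivers food a fixed time after the last required response; the exponential interval samples and parameter names are placeholders, not the study's programming:

```python
import random

def tandem_vt_fi(mean_vt, fi, response_times):
    """Immediate reinforcement: a variable time elapses with no response
    requirement, then the first response after the fixed interval produces
    food at once (food is contiguous with that response)."""
    start_fi = random.expovariate(1.0 / mean_vt)   # VT component ends on its own
    for t in sorted(response_times):
        if t >= start_fi + fi:
            return t                               # food delivered now

def tandem_vi_ft(mean_vi, ft, response_times):
    """Unsignalled delayed reinforcement: the first response after the variable
    interval starts a fixed-time clock; food arrives ft s later regardless of
    any further responding."""
    vi = random.expovariate(1.0 / mean_vi)
    for t in sorted(response_times):
        if t >= vi:
            return t + ft                          # food follows by a fixed delay
```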

5.
Two experiments studied responding in the rat when the first bar press after a variable period of time produced a cue light that remained on for either 10, 30, or 100 sec and terminated with the delivery of food. In Experiment I, response rate decreased and time to the first response after reinforcement increased as the delay of reinforcement increased. Similar results were obtained whether the delay consisted of retracting the lever during the delay, a fixed delay with no scheduled consequence for responding, or a delay in which every response restarted the delay interval. In Experiment II, fixed-delay and fixed-interval schedules of the same duration during the delay period had no differential effect on either response rate or time to the first response after reinforcement, but differentially controlled responding during the delay periods.

6.
Three experiments were conducted with rats in which responses on one lever (labeled the functional lever) produced reinforcers after an unsignaled delay period that reset with each response during the delay. Responses on a second, nonfunctional, lever did not initiate delays, but, in the first and third experiments, such responses during the last 10 s of a delay did postpone food delivery another 10 s. In the first experiment, the location of the two levers was reversed several times. Responding generally was higher on the functional lever, though the magnitude of the difference diminished with successive reversals. In the second experiment, once a delay was initiated by a response on the functional lever, in different conditions responses on the nonfunctional lever either had no effect or postponed food delivery by 30 s. The latter contingency typically lowered response rates on the nonfunctional lever. In the first two experiments, both the functional and nonfunctional levers were identical except for their location; in the third experiment, initially, a vertically mounted, pole-push lever defined the functional response and a horizontally mounted lever defined the nonfunctional response. Higher response rates occurred on the functional lever. These results taken together suggest that responding generally tracked the response-reinforcer contingency. The results further show how nonfunctional-operanda responses are controlled by a prior history of direct reinforcement of such responses, by the temporal delay between such responses and food delivery, and by simple generalization between the two operanda.

7.
The present study examined the acquisition of lever pressing in rats under three procedures in which food delivery was delayed by 4, 8, and 16 seconds relative to the response. Under the nonresetting delay procedure, food followed the response selected for reinforcement after a specified interval elapsed; responses during this interval had no programmed effect. Under the resetting procedure, the response selected for reinforcement initiated an interval to food delivery that was reset by each subsequent response. Under the stacked delay procedure, every response programmed delivery of food t seconds after its occurrence. Two control groups were studied, one that received food immediately after each lever press and another that never received food. With the exception of the group that did not receive food, responding was established with every procedure at every delay value without autoshaping or shaping. Although responding was established under the resetting delay procedure, response rates were generally not as high as under the other two procedures. These findings support the results of other recent investigations in demonstrating that a response not previously reinforced can be brought to strength by delayed reinforcement in the absence of explicit training.
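The three delay arrangements lend themselves to a compact sketch; the following assumed, illustrative functions return when food would be delivered given the press times in a session, with t the programmed delay in seconds:

```python
def nonresetting(presses, t):
    """The press selected for reinforcement starts the delay; presses during
    the interval have no programmed effect. Returns the first food time."""
    return presses[0] + t if presses else None

def resetting(presses, t):
    """Each press during the delay restarts the interval, so food arrives t s
    after the last press in a pause of at least t s. Returns the first food time."""
    food = None
    for p in presses:
        if food is None or p < food:
            food = p + t               # start or reset the delay
        else:
            break                      # food was already delivered
    return food

def stacked(presses, t):
    """Every press schedules its own food delivery t s after it occurs."""
    return [p + t for p in presses]

presses = [0.0, 2.0, 3.0, 20.0]
print(nonresetting(presses, 4), resetting(presses, 4), stacked(presses, 4))
# 4.0  7.0  [4.0, 6.0, 7.0, 24.0]
```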

8.
Participants chose between reinforcement schedules differing in delay of reinforcement (interval between a choice response and onset of a video game) and/or amount of reinforcement (duration of access to the game). Experiment 1 showed that immediate reinforcement was preferred to delayed reinforcement with amount of reinforcement held constant, and a large reinforcer was preferred to a small reinforcer when both were obtainable immediately. Imposing a delay before the large reinforcer produced a preference for the immediate, small reinforcer in 40% of participants. This suggested a limited degree of “impulsivity.” In Experiment 2, unequal delays were extended by equal intervals, the amounts being kept equal. Preference for the shorter delay decreased, an effect that presumably makes possible the “preference reversal” phenomenon in studies of self-control. Overall, the results demonstrate that video game playing can produce useful, systematic data when used as a positive reinforcer for choice behavior in humans.

9.
The present experiment sought to provide unequivocal evidence of instrumental learning under omission training. Hungry rats received free food reinforcement while spontaneously running in a wheel. For an omission group, running postponed or cancelled reinforcers in the presence of a discriminative stimulus (SD) requiring subjects to reduce responding to earn food. Background food presentations were then yoked to reinforcement delivered in the presence of the discriminative stimulus. For a control group, which received the same stimulus presentations, reinforcement delivery was yoked to the experimental group at all times. The procedure allowed both within- and between-subject comparisons between omission and response-independent schedules. The response-reinforcer delay under the omission contingency was adjusted so as to equate reinforcement frequency in the presence and absence of the SD. As the SD was not correlated differentially with reinforcement and the running response did not involve approach or withdrawal to the site of food delivery, the successful discrimination performance observed in this experiment cannot be accounted for by appeal to implicit classical conditioning. Instead, it is suggested that decreased running in the presence of the discriminative stimulus was based on the animals' veridical representation of the negative contingency between the response and reinforcement.

10.
Briefly delayed reinforcement: An interresponse time analysis
Key-peck responding of pigeons was compared under VI or DRL schedules arranging immediate reinforcement and briefly (.5 sec) delayed reinforcement. Delays were either signaled by a blackout in the chamber, unsignaled, or unsignaled with an additional requirement that responding not occur during the .5 sec interval immediately preceding reinforcement (response delay). Relative to the immediate reinforcement condition, response rates increased during the unsignaled delay, decreased during the signaled delay, and were inconsistent during the response delay condition. An analysis of interresponse times (IRTs) under the different conditions revealed a substantial increase in the frequency of short (0 to .5 sec) IRTs during the unsignaled condition and generally during the response delay conditions compared to that during the immediate reinforcement baseline. Signaled delays decreased the frequency of short (0 to .5 sec) IRTs relative to the immediate reinforcement condition. The results suggest that brief unsignaled delays and, in many instances, response delays increase the frequency of short IRTs by eliminating constraints on responding.
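A simple way to express the IRT measure emphasized above is the proportion of interresponse times at or below the 0.5-s delay value; the following sketch (assumed data format, illustrative only) computes it from a list of peck times:

```python
def short_irt_proportion(peck_times, cutoff=0.5):
    """Proportion of interresponse times no longer than `cutoff` seconds."""
    irts = [b - a for a, b in zip(peck_times, peck_times[1:])]
    if not irts:
        return 0.0
    return sum(irt <= cutoff for irt in irts) / len(irts)

# A burst of closely spaced pecks inflates the short-IRT proportion.
print(short_irt_proportion([0.0, 0.3, 0.6, 2.0, 2.2, 5.0]))   # 0.6
```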

11.
In two experiments, animals were initially exposed to response-dependent schedules of food before exposure to response-independent reinforcement matched for overall rate and temporal distribution of reinforcers to the preceding condition. In Experiment I, response decrements during the response-independent phase were smaller after delayed reinforcement training than after a comparable immediate reinforcement schedule, for both doves and rats. In Experiment II variable-interval and variable-ratio schedules, both with either immediate or delayed reinforcement, were used with rats. Both the delayed reinforcement schedules produced resistance to subsequent response-independent reinforcement, but response decrements were larger after either of the immediate reinforcement conditions. It was concluded that the critical factor in response maintenance under response-independent reinforcement was the type of response-reinforcer contiguities permitted under the response-dependent schedule rather than perception of response-reinforcer “contingencies”. If the response-dependent schedule was arranged so that behaviours other than a designated operant (key pecking or lever pressing) could be contiguous with food, responding was maintained well under response-independent schedules.

12.
"Turning back the clock" on serial-stimulus sign tracking.   总被引:1,自引:1,他引:0       下载免费PDF全文
Two experiments examined the effects of a negative (setback) response contingency on key pecking engendered by a changing light-intensity stimulus clock (ramp stimulus) signaling fixed-time 30-s food deliveries. The response contingency specified that responses would immediately decrease the light-intensity value, and, because food was delivered only after the highest intensity value was presented, would delay food delivery by 1 s for each response. The first experiment examined the acquisition and maintenance of responding for a group trained with the contingency in effect and for a group trained on a response-independent schedule with the ramp stimulus prior to introduction of the contingency. The first group acquired low rates of key pecking, and, after considerable exposure to the contingency, those rates were reduced to low levels. The rates of responding for the second group were reduced very rapidly (within four to five trials) after introduction of the setback contingency. For both groups, rates of responding increased for all but 1 bird when the contingency was removed. A second experiment compared the separate effects of each part of the response contingency. One group was exposed only to the stimulus setback (stimulus only), and a second group was exposed only to the delay of the reinforcer (delay only). The stimulus-only group's rates of responding were immediately reduced to moderate levels, but for most of the birds, these rates recovered quickly when the contingency was removed. The delay-only group's rates decreased after several trials to very low levels, and recovery of responding took several sessions once the contingency was removed. The results suggest that (a) sign-tracking behavior elicited by an added clock stimulus may be reduced rapidly and persistently when a setback contingency is imposed, and (b) the success of the contingency is due both to response-dependent stimulus change and to response-dependent alterations in the frequency of food delivery. The operation of the contingency is compared with the effects of secondary reinforcement and punishment procedures.
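To make the setback contingency concrete, here is an assumed sketch in which the clock stimulus steps up one intensity value per second across the 30-s fixed-time interval, each peck steps it back one value, and food comes only when the top value is reached, so every peck adds roughly 1 s to the trial; the one-step-per-second resolution is an assumption for illustration:

```python
def trial_duration(peck_seconds, steps=30):
    """Seconds until food, given the times (s) at which setback pecks occur."""
    level, t = 0, 0
    while level < steps:
        t += 1                                                     # a second elapses
        level += 1                                                 # clock steps up
        level -= sum(1 for p in peck_seconds if t - 1 <= p < t)    # setbacks
        level = max(level, 0)
    return t

print(trial_duration([]))        # 30: no pecks, food at the end of the ramp
print(trial_duration([5, 12]))   # 32: each peck delays food by about 1 s
```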

13.
In Experiment 1, three pigeons' key pecking was maintained under a variable-interval 60-s schedule of food reinforcement. A 1-s unsignaled nonresetting delay to reinforcement was then added. Rates decreased and stabilized at values below those observed under immediate-reinforcement conditions. A brief stimulus change (key lit red for 0.5 s) was then arranged to follow immediately the peck that began the delay. Response rates quickly returned to baseline levels. Subsequently, rates near baseline levels were maintained with briefly signaled delays of 3 and 9 s. When a 27-s briefly signaled delay was instituted, response rates decreased to low levels. In Experiment 2, four pigeons' responding was first maintained under a multiple variable-interval 60-s (green key) variable-interval 60-s (red key) schedule. Response rates in both components fell to low levels when a 3-s unsignaled delay was added. In the first component delays were then briefly signaled in the same manner as Experiment 1, and in the second component they were signaled with a change in key color that remained until food was delivered. Response rates increased to near baseline levels in both components, and remained near baseline when the delays in both components were lengthened to 9 s. When delays were lengthened to 27 s, response rates fell to low levels in the briefly signaled delay component for three of four pigeons while remaining at or near baseline in the completely signaled delay component. In Experiment 3, low response rates under a 9-s unsignaled delay to reinforcement (tandem variable-interval 60 s fixed-time 9 s) increased when the delay was briefly signaled. The role of the brief stimulus as conditioned reinforcement may be a function of its temporal relation to food, and thus may be related to the eliciting function of the stimulus.

14.
Previous experiments have shown that unsignaled delayed reinforcement decreases response rates and resistance to change. However, the effects of different delays to reinforcement on underlying response structure have not been investigated in conjunction with tests of resistance to change. In the present experiment, pigeons responded on a three-component multiple variable-interval schedule for food presented immediately, following brief (0.5 s), or following long (3 s) unsignaled delays of reinforcement. Baseline response rates were lowest in the component with the longest delay; they were about equal with immediate and briefly delayed reinforcers. Resistance to disruption by presession feeding, response-independent food during the intercomponent interval, and extinction was slightly but consistently lower as delays increased. Because log survivor functions of interresponse times (IRTs) deviated from simple modes of bout initiations and within-bout responding, an IRT-cutoff method was used to examine underlying response structure. These analyses suggested that baseline rates of initiating bouts of responding decreased as scheduled delays increased, and within-bout response rates tended to be lower in the component with immediate reinforcers. The number of responses per bout was not reliably affected by reinforcer delay, but tended to be highest with brief delays when total response rates were higher in that component. Consistent with previous findings, resistance to change of overall response rate was highly correlated with resistance to change of bout-initiation rates but not with within-bout responding. These results suggest that unsignaled delays to reinforcement affect resistance to change through changes in the probability of initiating a response bout rather than through changes in the underlying response structure.
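A stripped-down version of the IRT-cutoff analysis mentioned above might look like the following; the 1-s cutoff and the rate definitions are assumptions for illustration, not the authors' exact method. IRTs longer than the cutoff are treated as bout initiations, shorter ones as within-bout responses, and the two classes yield separate rate estimates plus responses per bout:

```python
def bout_analysis(irts, cutoff=1.0):
    """Split IRTs at `cutoff` (s) into bout-initiation and within-bout classes
    and return (bout-initiation rate, within-bout rate, responses per bout)."""
    initiations = [irt for irt in irts if irt > cutoff]
    within = [irt for irt in irts if irt <= cutoff]
    total_time = sum(irts)
    init_rate = len(initiations) / total_time if total_time else 0.0
    within_rate = len(within) / sum(within) if within else 0.0
    per_bout = 1 + len(within) / len(initiations) if initiations else float("nan")
    return init_rate, within_rate, per_bout

# Example: long pauses separate bursts of short IRTs; roughly (0.09, 4.0, 3.5).
print(bout_analysis([0.2, 0.3, 8.0, 0.25, 0.2, 12.0, 0.3]))
```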

15.
In Experiment 1, rats leverpressed for food reinforcement on either a variable ratio (VR) 30 schedule or a variable interval (VI) 15-s schedule. One group in each condition received a signal filling a 500-ms delay of reinforcement. This treatment enhanced rates on the VR schedule, and attenuated rates on the VI schedule, relative to the rate seen in an unsignaled control condition. In Experiment 2 there was no delay of reinforcement and the signal and food were presented simultaneously. Attenuated rates of responding were observed on VI schedules with a range of mean interval values (15 to 300 s). Experiment 3 used a range of VR schedules (10 to 150) with simultaneous presentations of signal and food. A signal-induced enhancement of response rate was found at all VR values. In Experiment 4, a signal elevated response rates on a tandem VI VR schedule, but depressed rates on a tandem VR VI schedule, compared to control conditions receiving unsignaled delayed reinforcement. These results are taken to show that the effect of a signal accompanying reinforcement depends upon the nature of the behavior that is reinforced during exposure to a given schedule.

16.
In Experiment 1, delayed reward generated low response rates relative to immediate reward delivered with the same frequency. Lister rats exposed to delayed reward subsequently responded at a higher rate in extinction if they received nonreinforced exposure to the conditioning context after instrumental training and prior to test, compared with animals that received home cage exposure. In Experiment 2, a signaled delay of reinforcement resulted in higher rates than an unsignaled delay. Nonreinforced exposure to the conditioning context elevated response rate for subjects in the unsignaled condition relative to a home cage group, but had no effect on response rates for subjects that had received the signaled delay. In Experiment 3, following an unsignaled reinforcement delay, groups receiving either no event or signaled food in the context responded faster in extinction than groups receiving no context exposure or unsignaled food.

17.
In a baseline condition, pigeons chose between an alternative that always provided food following a 30-s delay (100% reinforcement) and an alternative that provided food half of the time and blackout half of the time following 30-s delays (50% reinforcement). The different outcomes were signaled by different-colored keylights. On average, each alternative was chosen approximately equally often, replicating the finding of suboptimal choice in probabilistic reinforcement procedures. The efficacy of the delay stimuli (keylights) as conditioned reinforcers was assessed in other conditions by interposing a 5-s gap (keylights darkened) between the choice response and one or more of the delay stimuli. The strength of conditioned reinforcement was measured by the decrease in choice of an alternative when the alternative contained a gap. Preference for the 50% alternative decreased in conditions in which the gap preceded either all delay stimuli, both delay stimuli for the 50% alternative, or the food stimulus for the 50% alternative, but preference was not consistently affected in conditions in which the gap preceded only the 100% delay stimulus or the blackout stimulus for the 50% alternative. These results support the notion that conditioned reinforcement underlies the finding of suboptimal preference in probabilistic reinforcement procedures, and that the signal for food on the 50% reinforcement alternative functions as a stronger conditioned reinforcer than the signal for food on the 100% reinforcement alternative. In addition, the results fail to provide evidence that the signal for blackout functions as a conditioned punisher.

18.
Four experiments examined the effects of delays to reinforcement on key peck sequences of pigeons maintained under multiple schedules of contingencies that produced variable or repetitive behavior. In Experiments 1, 2, and 4, in the repeat component only the sequence right-right-left-left earned food, and in the vary component four-response sequences different from the previous 10 earned food. Experiments 1 and 2 examined the effects of nonresetting and resetting delays to reinforcement, respectively. In Experiment 3, in the repeat component sequences had to be the same as one of the previous three, whereas in the vary component sequences had to be different from each of the previous three for food. Experiment 4 compared postreinforcer delays to prereinforcement delays. With immediate reinforcement, sequences occurred at a similar rate in the two components, but were less variable in the repeat component. Delays to reinforcement decreased the rate of sequences similarly in both components, but affected variability differently. Variability increased in the repeat component, but was unaffected in the vary component. These effects occurred regardless of the manner in which the delay to reinforcement was programmed or the contingency used to generate repetitive behavior. Furthermore, the effects were unique to prereinforcement delays.
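The two sequence contingencies reduce to simple checks on the just-completed four-response sequence; a hedged sketch follows (hypothetical bookkeeping, with R/L strings standing for right and left pecks):

```python
def repeat_met(sequence):
    """Repeat component: only right-right-left-left earns food."""
    return sequence == "RRLL"

def vary_met(sequence, recent, lag=10):
    """Vary component: the sequence must differ from each of the previous `lag`."""
    return sequence not in recent[-lag:]

recent = ["RRLL", "LLRR", "RLRL"]          # previously emitted sequences
print(repeat_met("RRLL"), vary_met("RRLR", recent), vary_met("RLRL", recent))
# True True False
```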

19.
Three experiments were conducted to examine pigeons' postponement of signaled extinction periods (timeouts) from a schedule of food reinforcement when such responding neither decreased overall timeout frequency nor increased the overall frequency of food reinforcement. A discrete-trial procedure was used in which a response during the first 5 s of a trial postponed an otherwise immediate 60-s timeout to a later part of that same trial but had no effect on whether the timeout occurred. During time-in periods, responses on a second key produced food according to a random-interval 20-s schedule. In Experiment 1, the response-timeout interval was 45 s under postponement conditions and 0 s under extinction conditions (responses were ineffective in postponing timeouts). The percentage of trials with a response was consistently high when the timeout-postponement contingency was in effect and decreased to low levels when it was discontinued under extinction conditions. In Experiment 2, the response-timeout interval was also 45 s but postponement responses increased the duration of the timeout, which varied from 60 s to 105 s across conditions. Postponement responding was maintained, generally at high levels, at all timeout durations, despite sometimes large decreases in the overall frequency of food reinforcement. In Experiment 3, timeout duration was held constant at 60 s while the response-timeout interval was varied systematically across conditions from 0 s to 45 s. Postponement responding was maintained under all conditions in which the response-timeout interval exceeded 0 s (the timeout interval in the absence of a response). In some conditions of Experiment 3, which were designed to control for the immediacy of food reinforcement and food-correlated (time-in) stimuli, responding postponed timeout but the timeout was delayed whether a response occurred or not. Responding was maintained for 2 of 3 subjects, suggesting that behavior was negatively reinforced by timeout postponement rather than positively reinforced by the more immediate presentation of food or food-correlated (time-in) stimuli.
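The postponement contingency of Experiment 1 can be stated in a few lines; in this assumed sketch, a peck on the postponement key within the first 5 s of a trial moves the otherwise immediate 60-s timeout to begin a fixed interval after that peck, and the timeout occurs on the trial either way:

```python
def timeout_onset(postpone_peck, response_timeout_interval=45.0):
    """Return when the 60-s timeout begins (s from trial onset); `postpone_peck`
    is the time of the first postponement-key peck, or None if none occurred."""
    if postpone_peck is not None and postpone_peck <= 5.0:
        return postpone_peck + response_timeout_interval
    return 0.0                     # no effective peck: timeout starts at once

print(timeout_onset(None))   # 0.0  -> immediate timeout
print(timeout_onset(2.0))    # 47.0 -> timeout postponed, but still delivered
```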

20.
This study examined the use of a progressive‐delay schedule of reinforcement to increase self‐control and decrease disruptive behavior in children with autism. When initially given the choice between an immediate smaller reinforcer and a larger delayed reinforcer, all participants chose the smaller reinforcer. When access to the larger reinforcer required either no activity or engaging in a concurrent task during the delay, all participants demonstrated both self‐control and preference for a response requirement. Disruptive behavior decreased during delays that required a concurrent task compared to sessions without an activity requirement.
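A progressive-delay arrangement simply lengthens the delay to the larger reinforcer in small steps over sessions; the step size and number of sessions in this sketch are illustrative assumptions, not the study's values:

```python
def progressive_delays(start=0.0, step=5.0, sessions=10):
    """Delay (s) to the larger reinforcer scheduled for each successive session."""
    return [start + step * i for i in range(sessions)]

print(progressive_delays())   # [0.0, 5.0, 10.0, ..., 45.0]
```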
