Similar Articles
A total of 20 similar articles were found (search time: 31 ms).
1.
In three experiments we investigated the effect on the performance of thirsty rats of varying the instrumental contingency between lever pressing and the delivery of a saccharin reinforcer. In Experiment 1, the subjects performed more slowly in a non-contingent condition, in which the momentary probability of reinforcement was unaffected by whether or not the animals pressed, than in a contingent condition in which the reinforcer was never presented except following a lever press. This was true of performance under both random ratio and interval schedules in which the function determining the probability of reinforcement following a lever press remained the same across the contingent and non-contingent conditions. Experiment 2 demonstrated that instrumental performance was less affected when the contingency was degraded by the introduction of free reinforcers if these reinforcers were signalled. In Experiment 3, lever pressing was reinstated to some degree after non-contingent training by giving non-reinforced exposure to the operant chamber in the absence of the lever. These results suggest that free reinforcers depress instrumental behaviour through a performance mechanism engaged by their ability to support conditioning of the contextual cues.
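
The contingent/non-contingent manipulation described here is often summarized by the response-reinforcer contingency, ΔP = P(reinforcer | response) − P(reinforcer | no response). The sketch below is only illustrative; the probability values are placeholders, not the parameters used in these experiments.

```python
def delta_p(p_reinf_given_resp: float, p_reinf_given_no_resp: float) -> float:
    """Response-reinforcer contingency: Delta-P = P(O|R) - P(O|~R)."""
    return p_reinf_given_resp - p_reinf_given_no_resp

# Contingent condition: reinforcers are delivered only after a lever press.
print(delta_p(0.05, 0.0))   # 0.05 -> positive contingency
# Non-contingent condition: same momentary probability whether or not the rat presses.
print(delta_p(0.05, 0.05))  # 0.0  -> fully degraded contingency
```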

2.
Five pigeons were trained in a concurrent foraging procedure in which reinforcers were occasionally available after fixed times in two discriminated patches. In Part 1 of the experiment, the fixed times summed to 10 s, and were individually varied between 1 and 9 s over five conditions, with the probability of a reinforcer being delivered at the fixed times always .5. In Part 2, both fixed times were 5 s, and the probabilities of food delivery were varied over conditions, always summing to 1.0. In Parts 3 and 4, one fixed time was kept constant (Part 3, 3 s; Part 4, 7 s) while the other fixed time was varied from 1 s to 15 s. Median residence times in both patches increased with increases in the food-arrival times in either patch, but increased considerably more strongly in the patch in which the arrival time was increased. However, when arrival times were very different in the two patches, residence time in the longer arrival-time patch often decreased. Patch residence also increased with increasing probability of reinforcement, but again tended to fall when one probability was much larger than the other. A detailed analysis of residence times showed that these comprised two distributions, one around a shorter mode that remained constant with changes in arrival times, and one around a longer mode that monotonically increased with increasing arrival time. The frequency of shorter residence times appeared to be controlled by the probability of, and arrival time of, reinforcers in the alternative patch. The frequency of longer residence times was controlled directly by the arrival time of reinforcers in a patch, but not by the probability of reinforcers in a patch. The environmental variables that control both staying in a patch and exiting from a patch need to be understood in the study both of timing processes and of foraging.

3.
Resurgence is defined as an increase in the frequency of a previously reinforced target response when an alternative source of reinforcement is suspended. Despite an extensive body of research examining factors that affect resurgence, the effects of alternative-reinforcer magnitude have not been examined. Thus, the present experiments aimed to fill this gap in the literature. In Experiment 1, rats pressed levers for single-pellet reinforcers during Phase 1. In Phase 2, target-lever pressing was extinguished, and alternative-lever pressing produced either five-pellet, one-pellet, or no alternative reinforcement. In Phase 3, alternative reinforcement was suspended to test for resurgence. Five-pellet alternative reinforcement produced faster elimination and greater resurgence of target-lever pressing than one-pellet alternative reinforcement. In Experiment 2, effects of decreasing alternative-reinforcer magnitude on resurgence were examined. Rats pressed levers and pulled chains for six-pellet reinforcers during Phases 1 and 2, respectively. In Phase 3, alternative reinforcement was decreased to three pellets for one group, one pellet for a second group, and suspended altogether for a third group. Shifting from six-pellet to one-pellet alternative reinforcement produced as much resurgence as suspending alternative reinforcement altogether, while shifting from six pellets to three pellets did not produce resurgence. These results suggest that alternative-reinforcer magnitude has effects on elimination and resurgence of target behavior that are similar to those of alternative-reinforcer rate. Thus, both suppression of target behavior during alternative reinforcement and resurgence when conditions of alternative reinforcement are altered may be related to variables that affect the value of the alternative-reinforcement source.

4.
Rats pressed keys or levers for water reinforcers delivered by several multiple variable-interval schedules. The programmed rate of reinforcement varied from 15 to 240 reinforcers per hour in different conditions. Responding usually increased and then decreased within experimental sessions. As for food reinforcers, the within-session changes in both lever and key pressing were smaller, peaked later, and were more symmetrical around the middle of the session for lower than for higher rates of reinforcement. When schedules provided high rates of reinforcement, some quantitative differences appeared in the within-session changes for lever and key pressing and for food and water. These results imply that basically similar factors produce within-session changes in responding for lever and key pressing and for food and water. The nature of the reinforcer and the choice of response can also influence the quantitative properties of within-session changes at high rates of reinforcement. Finally, the results show that the application of Herrnstein's (1970) equation to rates of responding averaged over the session requires careful consideration.
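
Herrnstein's (1970) equation, whose application to session-averaged rates the abstract cautions about, relates response rate B to obtained reinforcement rate r as B = k·r/(r + r_e). A minimal sketch follows; the parameter values k and r_e are placeholders, not estimates from this study.

```python
def herrnstein_rate(r: float, k: float = 60.0, r_e: float = 30.0) -> float:
    """Herrnstein (1970) hyperbola: B = k * r / (r + r_e).

    r   -- obtained reinforcement rate (reinforcers/hr)
    k   -- asymptotic response rate (responses/min); placeholder value
    r_e -- reinforcement from extraneous sources (reinforcers/hr); placeholder value
    """
    return k * r / (r + r_e)

# Predicted rates across the programmed range used in the study (15-240/hr).
for r in (15, 60, 240):
    print(r, round(herrnstein_rate(r), 1))
```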

5.
Animals accumulate reinforcers when they forgo the opportunity to consume available food in favor of acquiring additional food for later consumption. Laboratory research has shown that reinforcer accumulation is facilitated when an interval (either spatial or temporal) separates earning from consuming reinforcers. However, there has been no systematic investigation of the interval separating consuming reinforcers from earning additional reinforcers. This oversight is problematic because this second interval is an integral part of much of the previous research on reinforcer accumulation. The purpose of the current study was to determine the independent contributions of these two temporal intervals on reinforcer accumulation in rats. Each left lever press earned a single food pellet; delivery of the accumulated pellet(s) occurred upon a right lever press. Conditions varied based on the presence of either an intertrial interval (ITI) that separated pellet delivery from the further opportunity to accumulate more pellets, or a delay-to-reinforcement that separated the right lever press from the delivery of the accumulated pellet(s). Delay and ITI values of 0, 5, 10 and 20 s were investigated. The delay-to-reinforcement conditions produced greater accumulation relative to the ITI conditions, despite accumulation increasing the density of reinforcement more substantially in the ITI conditions. This finding suggests that the temporal separation between reinforcer accumulation and subsequent delivery and consumption was a more critical variable in controlling reinforcer accumulation.

6.
Three pigeons pecked keys for food reinforcers in a laboratory analogue of foraging in patches. Half the patches contained food (were prey patches). In prey patches, pecks to one key occasionally produced a reinforcer, followed by a fixed travel time and then the start of a new patch. Pecks to another key were exit responses, and immediately produced travel time and then a new patch. Travel time was varied from 0.25 to 16 s at each of three session durations: 1, 4, and 23.5 hr. This part of the experiment arranged a closed economy, in that the only source of food was reinforcers obtained in prey patches. In another part, food deprivation was manipulated by varying postsession feeding so as to maintain the subjects' body weights at percentages ranging from 85% to 95% of their ad lib weights, in 1-hr sessions with a travel time of 12 s. This was an open economy. Patch residence time, defined as the time between the start of a patch and an exit response, increased with increasing travel time, and consistently exceeded times predicted by an optimal foraging model, supporting previously published results. However, residence times also increased with increasing session duration and, in longer sessions, consistently exceeded previously reported residence times in comparable open-economy conditions. Residence times were not systematically affected by deprivation levels. In sum, the results show that the long residence times obtained in long closed-economy sessions should probably be attributed to session duration rather than to economy or deprivation. This conclusion is hard to reconcile with previous interpretations of longer-than-optimal residence times but is consistent with, in economic terms, a predicted shift in consumption towards a preferred commodity when income is increased.

7.
We investigated the duration of lever pressing by rats when the delivery of appetitive reinforcers was contingent upon response duration. In the first experiment, response durations increased when duration requirements were imposed, and they decreased when duration requirements were removed. This effect occurred whether reinforcers were immediate or delayed by 8 s. In order to maintain the integrity of the delay intervals, reinforcer delivery was dependent upon both lever depression and release. In a second experiment, lever depression only and a response duration of at least 4 s were required for reinforcer delivery. Compared to immediate reinforcement conditions, delayed reinforcers increased both variability and the length of the maximum response durations. In a third experiment, immediate reinforcers were delivered contingent upon lever depression and release under a variety of duration requirements. Median lever-press durations tracked the contingencies rapidly. Across all three experiments, rats emitted numerous response durations that were too short to satisfy the reinforcer requirements, and bimodal distributions similar to those produced by differential reinforcement of low rate schedules were evident for most rats. In many aspects, response duration responds to reinforcement parameters in a fashion similar to rate of discrete responding, but an examination of this continuous dimension of behavior may provide additional information about environment–behavior relationships.

8.
Three experiments examined the effect of context conditioning on the acquisition of free-operant lever pressing by hungry rats when the presentation of the food reinforcer was delayed for 32 sec. The first study replicated the preexposure effect reported by Dickinson, Watt, and Griffiths (1992): Exposure to the contextual cues with the lever withdrawn prior to each instrumental training session enhanced acquisition, an effect that was attenuated by the presentation of non-contingent reinforcement during the preexposure periods. Signalling the non-contingent reinforcers during the preexposure periods with a brief auditory stimulus enhanced acquisition in a second study, suggesting that the non-contingent reinforcement interferes with acquisition through context conditioning. The final study confirmed this conclusion using a within-subject procedure in which pressing different levers was reinforced in two contexts, one of which was also associated with non-contingent reinforcers.

9.
10.
Monkeys initiated a stimulus by pressing the center lever of three levers, and the stimulus terminated independently of behavior 60, 80, 90, or 100 sec later. Presses on the right lever were reinforced with food following the three briefer durations, and presses on the left lever, following the 100-sec duration. Incorrect responses produced a 10-sec timeout. Probability of presenting the 100-sec duration was manipulated in the range from 0.25 to 0.75, with the probabilities of the briefer durations remaining equal and summing to one minus the probability of the 100-sec duration. Percentage of responses on either side lever was functionally related to both the probability of presenting the 100-sec stimulus and to stimulus duration. An analysis of the data based on the theory of signal detection resulted in operating characteristics that were linear when plotted on normal-normal coordinates. The percentage of responses on either lever approximated the optimal values for maximizing reinforcement probability in each condition of the experiment.
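
The signal-detection analysis mentioned here typically converts "hit" and "false-alarm" proportions to z scores; operating characteristics that are straight lines on normal-normal coordinates are what the abstract reports. A minimal sketch of the usual d' computation, with made-up proportions rather than data from this study:

```python
from statistics import NormalDist

def d_prime(hit_rate: float, false_alarm_rate: float) -> float:
    """d' = z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# Hypothetical proportions of "long" (left-lever) reports after 100-sec vs. briefer stimuli.
print(round(d_prime(0.85, 0.20), 2))
```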

11.
The generality of the molar view of behavior was extended to the study of choice with rats, showing the usefulness of studying order at various levels of extendedness. Rats' presses on two levers produced food according to concurrent variable-interval variable-interval schedules. Seven different reinforcer ratios were arranged within each session, without cues identifying them, and separated by blackouts. To alternate between levers, rats pressed on a third changeover lever. Choice changed rapidly with changes in component reinforcer ratio, and more presses occurred on the lever with the higher reinforcer rate. With continuing reinforcers, choice shifted progressively in the direction of the reinforced lever, but shifted more slowly with each new reinforcer. Sensitivity to reinforcer ratio, as estimated by the generalized matching law, reached an average of 0.9 and exceeded that documented in previous studies with pigeons. Visits to the more-reinforced lever preceded by a reinforcer from that lever increased in duration, while all visits to the less-reinforced lever decreased in duration. Thus, the rats' performances moved faster toward fix and sample than did pigeons' performances in previous studies. Analysis of the effects of sequences of reinforcer sources indicated that sequences of five to seven reinforcers might have sufficed for studying local effects of reinforcers with rats. This study supports the idea that reinforcer sequences control choice between reinforcers, pulses in preference, and visits following reinforcers.
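
"Sensitivity" here is the slope a of the generalized matching law, log(B1/B2) = a·log(r1/r2) + log b, where B are responses and r are reinforcers on the two levers. The sketch below estimates a and log b by least squares from hypothetical ratio pairs; the numbers are illustrative, not data from the study.

```python
import math

# Hypothetical (response ratio, reinforcer ratio) pairs, one per component.
data = [(0.12, 0.10), (0.3, 0.25), (0.7, 0.6), (1.0, 1.0),
        (1.5, 1.8), (3.2, 4.0), (8.0, 10.0)]

x = [math.log10(r) for _, r in data]   # log reinforcer ratios
y = [math.log10(b) for b, _ in data]   # log response ratios

n = len(data)
mx, my = sum(x) / n, sum(y) / n
a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum((xi - mx) ** 2 for xi in x)
log_b = my - a * mx
print(f"sensitivity a = {a:.2f}, bias log b = {log_b:.2f}")
```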

12.
Different doses of intravenous cocaine reinforced the lever pressing of rhesus monkeys under two-lever concurrent or concurrent-chain schedules. Under the concurrent procedure, responding produced drug reinforcers arranged according to independent variable-interval 1-min schedules. Under the concurrent-chain procedure, responding in the variable-interval link led to one of two mutually exclusive, equal-valued, fixed-ratio links; completion of the ratio produced a drug reinforcer. Under both procedures, responding on one lever produced a constant dose of 0.05 or 0.1 mg/kg/injection, while on the other lever, dose was systematically varied within a range of 0.013 to 0.8 mg/kg/injection. Preference, indicated by relative response frequency on the variable-dose lever during the variable-interval link, was always for the larger of the doses. Relative response frequencies on the variable-dose lever roughly matched relative drug intake (mg/kg of drug obtained on variable lever divided by mg/kg of drug obtained on both levers). For many dose comparisons, responding occurred and reinforcers were obtained almost exclusively on the preferred lever. Overall variable-interval rates generally were lower than with other reinforcers, and these low rates, under the experimental conditions, may have occasioned the exclusive preferences.

13.
Two experiments investigated the effects of schedule value and reinforcer duration on responding for the opportunity to run on fixed-interval (FI) schedules in rats. In the first experiment, 8 male Wistar rats were exposed to FI 15-s, 30-s, and 60-s schedules of wheel-running reinforcement. The operant was lever pressing, and the consequence was the opportunity to run for 60 s. In the second experiment, 8 male Long-Evans rats were exposed to reinforcer durations of 15 s, 30 s, and 90 s. The schedule of reinforcement was an FI 60-s schedule. Results showed that postreinforcement pause and wheel-running rates varied systematically with reinforcer duration but not schedule value. Local lever-pressing rates decreased with reinforcer duration. Overall lever-pressing rates decreased with reinforcer duration but increased with schedule value. Although the reinforcer-duration effect is consistent with previous research, the lack of a schedule effect appears to be the result of long postreinforcement pauses following wheel-running reinforcement that render the manipulation of the interval requirement ineffective.

14.
In Experiment 1 with rats, a left lever press led to a 5-s delay and then a possible reinforcer. A right lever press led to an adjusting delay and then a certain reinforcer. This delay was adjusted over trials to estimate an indifference point, or a delay at which the two alternatives were chosen about equally often. Indifference points increased as the probability of reinforcement for the left lever decreased. In some conditions with a 20% chance of food, a light above the left lever was lit during the 5-s delay on all trials, but in other conditions, the light was only lit on those trials that ended with food. Unlike previous results with pigeons, the presence or absence of the delay light on no-food trials had no effect on the rats' indifference points. In other conditions, the rats showed less preference for the 20% alternative when the time between trials was longer. In Experiment 2 with rats, fixed-interval schedules were used instead of simple delays, and the presence or absence of the fixed-interval requirement on no-food trials had no effect on the indifference points. In Experiment 3 with rats and Experiment 4 with pigeons, the animals chose between a fixed-ratio 8 schedule that led to food on 33% of the trials and an adjusting-ratio schedule with food on 100% of the trials. Surprisingly, the rats showed less preference for the 33% alternative in conditions in which the ratio requirement was omitted on no-food trials. For the pigeons, the presence or absence of the ratio requirement on no-food trials had little effect. The results suggest that there may be differences between rats and pigeons in how they respond in choice situations involving delayed and probabilistic reinforcers.
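
An adjusting-delay procedure of this kind can be summarized as a simple titration rule: the adjusting delay is lengthened after choices of the adjusting (certain) alternative and shortened after choices of the standard (probabilistic) alternative, so that it converges on an indifference point. The sketch below is schematic only; the hyperbolic-style choice rule standing in for a real subject, the step size, and all parameter values are assumptions, not the procedure's actual settings.

```python
import random

def titrate(p_food_standard: float, standard_delay: float = 5.0,
            step: float = 1.0, trials: int = 200) -> float:
    """Return the adjusting delay after a block of trials (schematic titration)."""
    adj_delay = standard_delay
    for _ in range(trials):
        # Hypothetical subject: prefers the option with the higher
        # probability-weighted immediacy, plus a little noise.
        value_standard = p_food_standard / (1.0 + standard_delay)
        value_adjusting = 1.0 / (1.0 + adj_delay)
        chose_adjusting = value_adjusting + random.gauss(0, 0.02) > value_standard
        adj_delay += step if chose_adjusting else -step
        adj_delay = max(adj_delay, 0.0)
    return adj_delay

# Approximate indifference delay of the certain option vs. a 20% probabilistic option.
print(round(titrate(p_food_standard=0.2), 1))
```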

15.
The present study investigated the effect of reinforcer duration on running and on responding reinforced by the opportunity to run. Eleven male Wistar rats responded on levers for the opportunity to run in a running wheel. Opportunities to run were programmed to occur on a tandem fixed-ratio 1 variable-interval 30-s reinforcement schedule. Reinforcer duration varied across conditions from 30 to 120 s. As reinforcer duration increased, the rates of running and lever pressing declined, and latency to lever press increased. The increase in latency to respond was consistent with findings that unconditioned inhibitory aftereffects of reinforcement increase with reinforcer magnitude. The decrease in local lever-pressing rates, however, was inconsistent with the view that response strength increases with the duration of the reinforcer. Response rate varied inversely, not directly, with reinforcer duration. Furthermore, within-session data challenge satiation, fatigue, and response deprivation as determinants of the observed changes in running and responding. In sum, the results point to the need for further research with nonappetitive forms of reinforcement.

16.
A series of experiments was designed to explore the cognitive mechanisms involved in optimal foraging models by using the behavioural controls of operant methodology. Rats were trained to press one of two levers to obtain reinforcement on a progressive variable-interval schedule, which modelled food patch depletion; the schedule was reset by pressing the other lever. Thus both duration (residence time in a patch) and rate-related (interval before and after the final reward) measures were obtained. Experiment 1, which manipulated environmental stability and quality, and Experiment 2, which varied travel time between patches, found results that supported the marginal value theorem (Charnov, 1976) and suggested that rats adjust capture rate to the environment average by monitoring the length of the interval between rewards. Experiment 3 modelled the clumping of food items and found that capture rate was now adjusted by adoption of a fixed giving-up time. Finally, Experiments 4a and 4b ruled out a time expectancy hypothesis by manipulating the number of food clumps and the series of inter-reinforcement intervals. Overall the experiments demonstrate the value of modelling foraging strategies in operant apparatus, and suggest that rats adopt rate predictive strategies when deciding to switch patches.
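
The marginal value theorem (Charnov, 1976) predicts that a forager should leave a depleting patch when the instantaneous gain rate falls to the average rate for the whole environment, i.e. g'(t*) = g(t*)/(t* + τ), where τ is travel time. A minimal numerical sketch follows; the negatively accelerated gain function g(t) is an assumed example, not the progressive schedule used in these experiments.

```python
import math

def optimal_residence(travel_time: float, a: float = 10.0, lam: float = 0.2,
                      dt: float = 0.01) -> float:
    """Residence time at which the marginal gain rate falls to the overall rate.

    Assumed patch gain function: g(t) = a * (1 - exp(-lam * t)).
    """
    t = dt
    while True:
        gain = a * (1 - math.exp(-lam * t))       # cumulative food from the patch
        marginal = a * lam * math.exp(-lam * t)   # instantaneous gain rate g'(t)
        if marginal <= gain / (t + travel_time):  # MVT leaving rule
            return t
        t += dt

for tau in (2.0, 8.0, 32.0):
    print(tau, round(optimal_residence(tau), 2))  # optimal residence grows with travel time
```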

17.
Three groups of rats pressed a lever for milk reinforcers on various simple reinforcement schedules (one schedule per condition). In Group M, each pair of conditions included a mixed-ratio schedule and a fixed-ratio schedule with equal average response:reinforcer ratios. On mixed-ratio schedules, reinforcement occurred with equal probability after a small or a large response requirement was met. In Group R, fixed-ratio and random-ratio schedules were compared in each pair of conditions. For all subjects in these two groups, the frequency distributions of interresponse times of less than one second were very similar on all ratio schedules, exhibiting a peak at about .2 seconds. For comparison, subjects in Group V responded on variable-interval schedules, and few interresponse times as short as .2 seconds were recorded. The results suggest that the rate of continuous responding is the same on all ratio schedules, and what varies among ratio schedules is the frequency, location, and duration of pauses. Preratio pauses were longer on fixed-ratio schedules than on mixed-ratio or random-ratio schedules, but there was more within-ratio pausing on mixed-ratio and random-ratio schedules. Across a single trial, the probability of an interruption in responding decreased on fixed-ratio schedules, was roughly constant on random-ratio schedules, and often increased and then decreased on mixed-ratio schedules. These response patterns provided partial support for Mazur's (1982) theory that the probability of instrumental responding is directly related to the probability of reinforcement and the proximity of reinforcement.

18.
Changes in response rate similar to frustration effects were studied in a two-lever situation. Responding on one lever on a fixed-interval schedule produced access to water for 5 sec and an exteroceptive stimulus. In the presence of this stimulus, responding on another lever on a fixed-interval schedule produced access to water for 5 sec and terminated the stimulus. Occasional omission of a previously scheduled reinforcer after responding on the first lever resulted consistently in increases in rate on the second lever during the immediately succeeding interval. In another procedure, occasional presentation of a previously unscheduled reinforcer after responding on the first lever resulted consistently in decreases in rate on the second lever during the immediately succeeding interval. Changes occurred after the first omissions or presentations and were about the same in magnitude as the procedure continued over several sessions. Typically, an increase or decrease in rate was maintained throughout an entire 100-sec interval. Changes in rate on the second lever of approximately the same magnitude also occurred when rate on the first lever was near-zero under a schedule that differentially reinforced behavior other than lever pressing.

19.
We present a study that links optimal foraging theory (OFT) to behavioral timing. OFT's distinguishing feature is the use of models that compute the most advantageous behavior for a particular foraging problem and compare the optimal solution to empirical data with little reference to psychological processes. The study of behavioral timing, in contrast, emphasizes performance in relation to time, most often without strategic or functional considerations. In three experiments, reinforcer-maximizing behavior and timing performance are identified and related to each other. In all three experiments starlings work in a setting that simulates food patches separated by a flying distance between the two perches. The patches contain a variable and unpredictable number of reinforcers and deplete suddenly without signal. Before depletion, patches deliver food at fixed intervals (FI). Our main dependent variables are the times of occurrence of three behaviors: the “peak” in pecking rate (Peak), the time of the last peck before “giving in” (GIT), and the time for “moving on” to a new patch (MOT). We manipulate travel requirement (Experiment 1), level of deprivation and FI (Experiment 2), and size of reinforcers (Experiment 3). For OFT, Peak should equal the FI in all conditions while GIT and MOT should just exceed it. Behavioral timing and Scalar Expectancy Theory (SET) in particular predict a Peak at around the FI and a longer (unspecified) GIT, and make no prediction for MOT. We found that Peak was close to the FI and GIT was approximately 1.5 times longer, neither being affected by travel, hunger, or reinforcer size manipulations. MOT varied between 1.5 and just over 3 times the FI, was responsive to both travel time and the FI, and did not change when the reinforcer rate was manipulated. These results support the practice of producing models that explicitly separate information available to the subject from strategic use of this information.  相似文献   
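
A core prediction of Scalar Expectancy Theory is that timed behavior peaks near the trained interval with a spread proportional to that interval (a roughly constant coefficient of variation). The simulation sketch below illustrates only that scalar property; the Gaussian model and the CV value are assumptions, not parameters fitted to the starling data.

```python
import random
import statistics

def simulate_peak_times(fi: float, cv: float = 0.15, n: int = 1000) -> list[float]:
    """Draw subjective 'peak' times as Gaussian around the FI with scalar spread."""
    return [random.gauss(fi, cv * fi) for _ in range(n)]

for fi in (10.0, 20.0, 40.0):
    times = simulate_peak_times(fi)
    mean = statistics.mean(times)
    print(fi, round(mean, 1), round(statistics.stdev(times) / mean, 2))  # CV stays ~constant
```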

20.
Four rats obtained food pellets by lever pressing. A variable-interval reinforcement schedule assigned reinforcers on average every 2 min during one block of 20 sessions and on average every 8 min during another block. Also, at each variable-interval duration, a block of sessions was conducted with a schedule that imposed a variable-ratio 4 response requirement after each variable interval (i.e., a tandem variable-time variable-ratio 4 schedule). The total rate of lever pressing increased as a function of the rate of reinforcement and as a result of imposing the variable-ratio requirement. Analysis of log survivor plots of interresponse times indicated that lever pressing occurred in bouts that were separated by pauses. Increasing the rate of reinforcement increased total response rate by increasing the rate of initiating bouts and, less reliably, by lengthening bouts. Imposing the variable-ratio component increased response rate mainly by lengthening bouts. This pattern of results is similar to that reported previously with key poking as the response. Also, response rates within bouts were relatively insensitive to either variable.
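
Log-survivor analyses of interresponse times (IRTs) of the kind mentioned here typically treat responding as a mixture of fast within-bout IRTs and slower bout-initiation IRTs, i.e. a biexponential model. The sketch below generates IRTs from such a mixture and prints the characteristic "broken-stick" log-survivor function; all parameter values are illustrative, not estimates from this study.

```python
import math
import random

def sample_irts(n: int, p_within: float = 0.7,
                within_rate: float = 2.0, initiation_rate: float = 0.1) -> list[float]:
    """IRTs from a two-state mixture: fast within-bout vs. slow bout-initiation responding."""
    return [random.expovariate(within_rate if random.random() < p_within
                               else initiation_rate) for _ in range(n)]

irts = sample_irts(5000)
n = len(irts)
for t in (0.5, 1, 2, 5, 10, 20):
    surviving = sum(1 for x in irts if x > t) / n
    if surviving > 0:
        print(t, round(math.log(surviving), 2))  # log-survivor plot bends where bouts end
```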
