Similar Documents
A total of 20 similar documents were found (search time: 31 ms).
1.
In a discrete-trials procedure with pigeons, a response on a green key led to a 4-s delay (during which green houselights were lit) and then a reinforcer might or might not be delivered. A response on a red key led to a delay of adjustable duration (during which red houselights were lit) and then a certain reinforcer. The delay was adjusted so as to estimate an indifference point--a duration for which the two alternatives were equally preferred. Once the green key was chosen, a subject had to continue to respond on the green key until a reinforcer was delivered. Each response on the green key, plus the 4-s delay that followed every response, was called one "link" of the green-key schedule. Subjects showed much greater preference for the green key when the number of links before reinforcement was variable (averaging four) than when it was fixed (always exactly four). These findings are consistent with the view that probabilistic reinforcers are analogous to reinforcers delivered after variable delays. When successive links were separated by 4-s or 8-s "interlink intervals" with white houselights, preference for the probabilistic alternative decreased somewhat for 2 subjects but was unaffected for the other 2 subjects. When the interlink intervals had the same green houselights that were present during the 4-s delays, preference for the green key decreased substantially for all subjects. These results provided mixed support for the view that preference for a probabilistic reinforcer is inversely related to the duration of conditioned reinforcers that precede the delivery of food.
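The adjusting-delay (titration) procedure described above can be sketched as a simple up/down rule: the adjustable delay is lengthened when the subject keeps choosing that alternative and shortened when it keeps choosing the other one, so the delay drifts toward the indifference point. The sketch below is only illustrative; the step size, block length, and bounds are assumptions, not parameters taken from the experiment.

```python
# Illustrative sketch of an adjusting-delay (titration) rule for estimating an
# indifference point. Step size, block length, and bounds are assumed values.

def adjust_delay(delay, choices_in_block, step=1.0, min_delay=0.0, max_delay=60.0):
    """Update the adjusting-key delay after a block of free-choice trials.

    choices_in_block: sequence of 'adjusting' or 'standard' choices.
    If the adjusting (certain) key was chosen more often, its delay is
    lengthened; if the standard (probabilistic) key was chosen more often,
    the adjusting delay is shortened. Ties leave the delay unchanged.
    """
    n_adjusting = sum(1 for c in choices_in_block if c == 'adjusting')
    n_standard = len(choices_in_block) - n_adjusting
    if n_adjusting > n_standard:
        delay += step
    elif n_standard > n_adjusting:
        delay -= step
    return max(min_delay, min(max_delay, delay))

# Example: the delay rises while the adjusting key keeps being chosen,
# then falls again once preference reverses (4.0 -> 5.0 -> 5.0 -> 4.0).
d = 4.0
for block in (['adjusting', 'adjusting'], ['adjusting', 'standard'], ['standard', 'standard']):
    d = adjust_delay(d, block)
print(d)
```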

2.
A discrete-trials adjusting-delay procedure was used to investigate the conditions under which pigeons might show a preference for partial reinforcement over 100% reinforcement, an effect reported in a number of previous experiments. A peck on a red key always led to a delay with red houselights and then food. In each condition, the duration of the red-houselight delay was adjusted to estimate an indifference point. In 100% reinforcement conditions, a peck on a green key always led to a delay with green houselights and then food. In partial-reinforcement conditions, a peck on the green key led either to the green houselights and food or to white houselights and no food. In some phases of the experiment, statistically significant preference for partial reinforcement over 100% reinforcement was found, but this effect was observed in only about half of the pigeons. The effect was largely eliminated when variability in the delay stimulus colors was equated for 50% reinforcement conditions and 100% reinforcement conditions. Idiosyncratic preferences for certain colors or for stimulus variability may be at least partially responsible for the effect.

3.
In Experiment 1, pigeons' pecks on a green key led to a 5-s delay with green houselights, and then food was delivered on 20% (or, in other conditions, 50%) of the trials. Pecks on a red key led to an adjusting delay with red houselights, and then food was delivered on every trial. The adjusting delay was used to estimate indifference points: delays at which the two alternatives were chosen about equally often. Varying the presence or absence of green houselights during the delays that preceded possible food deliveries had large effects on choice. In contrast, varying the presence of the green or red houselights in the intertrial intervals had no effects on choice. In Experiment 2, pecks on the green key led to delays of either 5 s or 30 s with green houselights, and then food was delivered on 20% of the trials. Varying the duration of the green houselights on nonreinforced trials had no effect on choice. The results suggest that the green houselights served as a conditioned reinforcer at some times but not at others, depending on whether or not there was a possibility that a primary reinforcer might be delivered. Given this interpretation of what constitutes a conditioned reinforcer, most of the results were consistent with the view that the strength of a conditioned reinforcer is inversely related to its duration.

4.
Two experiments measured pigeons' choices between probabilistic reinforcers and certain but delayed reinforcers. In Experiment 1, a peck on a red key led to a 5-s delay and then a possible reinforcer (with a probability of .2). A peck on a green key led to a certain reinforcer after an adjusting delay. This delay was adjusted over trials so as to estimate an indifference point, or a duration at which the two alternatives were chosen about equally often. In all conditions, red houselights were present during the 5-s delay on reinforced trials with the probabilistic alternative, but the houselight colors on nonreinforced trials differed across conditions. Subjects showed a stronger preference for the probabilistic alternative when the houselights were a different color (white or blue) during the delay on nonreinforced trials than when they were red on both reinforced and nonreinforced trials. These results supported the hypothesis that the value or effectiveness of a probabilistic reinforcer is inversely related to the cumulative time per reinforcer spent in the presence of stimuli associated with the probabilistic alternative. Experiment 2 tested some quantitative versions of this hypothesis by varying the delay for the probabilistic alternative (either 0 s or 2 s) and the probability of reinforcement (from .1 to 1.0). The results were best described by an equation that took into account both the cumulative durations of stimuli associated with the probabilistic reinforcer and the variability in these durations from one reinforcer to the next.
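The "cumulative time per reinforcer" hypothesis tested here can be written as a worked equation. The hyperbolic-decay form below is a common way of expressing this kind of model; it is used here only as a hedged sketch, and the equation actually fitted in Experiment 2 may differ in detail.

```latex
% Sketch of the cumulative-time idea, assuming a hyperbolic-decay form.
% d = programmed delay, p = probability of food, A = value of an immediate
% reinforcer, K = discounting-rate parameter, D_i = cumulative stimulus time
% actually experienced before the i-th reinforcer.
\begin{align*}
  D &= \frac{d}{p}
      && \text{(mean delay-stimulus time accumulated per reinforcer)} \\
  V_{\text{mean}} &= \frac{A}{1 + K D}
      && \text{(value based only on the mean cumulative time)} \\
  V_{\text{var}} &= \frac{1}{n}\sum_{i=1}^{n} \frac{A}{1 + K D_i}
      && \text{(value averaged over the variable times actually experienced)}
\end{align*}
```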

5.
Pigeons chose between 5-s and 15-s delay-of-reinforcement alternatives. The first key peck to satisfy the choice schedule began a delay timer, and food was delivered at the end of the interval. Key pecks during the delay interval were measured, but had no scheduled effect. In Experiment 1, signal conditions and choice schedules were varied across conditions. During unsignaled conditions, no stimulus change signaled the beginning of a delay interval. During differential and nondifferential signal conditions, offset of the choice stimuli and onset of a delay stimulus signaled the beginning of a delay interval. During differential signal conditions, different stimuli were correlated with the 5-s and 15-s delays, whereas the same stimulus appeared during both delay durations during nondifferential signal conditions. Pigeons showed similar, extreme levels of preference for the 5-s delay alternative during unsignaled and differentially signaled conditions. Preference levels were reliably lower with nondifferential signals. Experiment 2 assessed preference with two pairs of unsignaled delays in which the ratio of delays was held constant but the absolute duration was increased fourfold. No effect of absolute duration was found. The results highlight the importance of delayed primary reinforcement effects and challenge models of choice that focus solely on conditioned reinforcement.

6.
In Experiment 1, three pigeons' key pecking was maintained under a variable-interval 60-s schedule of food reinforcement. A 1-s unsignaled nonresetting delay to reinforcement was then added. Rates decreased and stabilized at values below those observed under immediate-reinforcement conditions. A brief stimulus change (key lit red for 0.5 s) was then arranged to follow immediately the peck that began the delay. Response rates quickly returned to baseline levels. Subsequently, rates near baseline levels were maintained with briefly signaled delays of 3 and 9 s. When a 27-s briefly signaled delay was instituted, response rates decreased to low levels. In Experiment 2, four pigeons' responding was first maintained under a multiple variable-interval 60-s (green key) variable-interval 60-s (red key) schedule. Response rates in both components fell to low levels when a 3-s unsignaled delay was added. In the first component delays were then briefly signaled in the same manner as Experiment 1, and in the second component they were signaled with a change in key color that remained until food was delivered. Response rates increased to near baseline levels in both components, and remained near baseline when the delays in both components were lengthened to 9 s. When delays were lengthened to 27 s, response rates fell to low levels in the briefly signaled delay component for three of four pigeons while remaining at or near baseline in the completely signaled delay component. In Experiment 3, low response rates under a 9-s unsignaled delay to reinforcement (tandem variable-interval 60 s fixed-time 9 s) increased when the delay was briefly signaled. The role of the brief stimulus as conditioned reinforcement may be a function of its temporal relation to food, and thus may be related to the eliciting function of the stimulus.

7.
Theories of probabilistic reinforcement.
In three experiments, pigeons chose between two alternatives that differed in the probability of reinforcement and the delay to reinforcement. A peck at a red key led to a delay of 5 s and then a possible reinforcer. A peck at a green key led to an adjusting delay and then a certain reinforcer. This delay was adjusted over trials so as to estimate an indifference point, or a duration at which the two alternatives were chosen about equally often. In Experiments 1 and 2, the intertrial interval was varied across conditions, and these variations had no systematic effects on choice. In Experiment 3, the stimuli that followed a choice of the red key differed across conditions. In some conditions, a red houselight was presented for 5 s after each choice of the red key. In other conditions, the red houselight was present on reinforced trials but not on nonreinforced trials. Subjects exhibited greater preference for the red key in the latter case. The results were used to evaluate four different theories of probabilistic reinforcement. The results were most consistent with the view that the value or effectiveness of a probabilistic reinforcer is determined by the total time per reinforcer spent in the presence of stimuli associated with the probabilistic alternative. According to this view, probabilistic reinforcers are analogous to reinforcers that are delivered after variable delays.
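A brief worked example of the "total time per reinforcer" view, using the 5-s delay described above and an assumed reinforcement probability of .2 (an illustrative calculation, not the authors' analysis):

```python
# If each choice of the red key produces 5 s of delay-stimulus time and food
# follows on only 20% of those trials, then on average 5 / 0.2 = 25 s of
# red-houselight time accumulates per food delivery. On this view, the
# probabilistic alternative should be valued roughly like a certain
# reinforcer delivered after delays averaging about 25 s.
delay_per_trial = 5.0   # seconds of delay stimulus per red-key choice
p_food = 0.2            # assumed probability of food on each red-key trial
print(delay_per_trial / p_food)  # 25.0
```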

8.
In a baseline condition, pigeons chose between an alternative that always provided food following a 30-s delay (100% reinforcement) and an alternative that provided food half of the time and blackout half of the time following 30-s delays (50% reinforcement). The different outcomes were signaled by different-colored keylights. On average, each alternative was chosen approximately equally often, replicating the finding of suboptimal choice in probabilistic reinforcement procedures. The efficacy of the delay stimuli (keylights) as conditioned reinforcers was assessed in other conditions by interposing a 5-s gap (keylights darkened) between the choice response and one or more of the delay stimuli. The strength of conditioned reinforcement was measured by the decrease in choice of an alternative when the alternative contained a gap. Preference for the 50% alternative decreased in conditions in which the gap preceded either all delay stimuli, both delay stimuli for the 50% alternative, or the food stimulus for the 50% alternative, but preference was not consistently affected in conditions in which the gap preceded only the 100% delay stimulus or the blackout stimulus for the 50% alternative. These results support the notion that conditioned reinforcement underlies the finding of suboptimal preference in probabilistic reinforcement procedures, and that the signal for food on the 50% reinforcement alternative functions as a stronger conditioned reinforcer than the signal for food on the 100% reinforcement alternative. In addition, the results fail to provide evidence that the signal for blackout functions as a conditioned punisher.

9.
Delay or rate of food delivery as determiners of response rate
Pigeons were confronted with two keys: a green food key and a white changeover key. Food became available for a peck to the green key after variable intervals of time (mean = 113 seconds). A single peck on the changeover key changed the color of the food key to red for a fixed period of time, during which the timing of the variable-interval schedule in green was suspended and the switching option eliminated, and after which the conditions associated with green were reinstated. In Experiment 1 a single food presentation was obtainable during each red-key period after a minimum delay timed from the switch. This delay and the duration of the red-key period were held constant during a condition but varied between conditions (delay = 2.5, 7.5, 15, or 30 seconds; red-period duration = 30, 60, 120, 240, or 480 seconds). In Experiment 2 additional food presentations were scheduled during a 240-second red-key period with the delay to the first food delivery held constant at 30 seconds, and the delays to later food deliveries varied over conditions. Considering the data from both experiments, the rate of switching to red was a decreasing function of the delay to the first food, the delay to the second food, and perhaps the delay to the third food after a switch. There was no clear evidence that the rate of food in the red-key period made an independent contribution. The ordering of response rates among conditions was consistent with the view that each food presentation after a response adds an incremental effect to the rate of the response and that each food presentation's contribution is a decreasing function of its delay timed from the response.

10.
Choice between single and multiple delayed reinforcers.
Pigeons chose between alternatives that differed in the number of reinforcers and in the delay to each reinforcer. A peck on a red key produced the same consequences on every trial within a condition, but between conditions the number of reinforcers varied from one to three and the reinforcer delays varied between 5 s and 30 s. A peck on a green key produced a delay of adjustable duration and then a single reinforcer. The green-key delay was increased or decreased many times per session, depending on a subject's previous choices, which permitted estimation of an indifference point, or a delay at which a subject chose each alternative about equally often. The indifference points decreased systematically with more red-key reinforcers and with shorter red-key delays. The results did not support the suggestion of Moore (1979) that multiple delayed reinforcers have no effect on preference unless they are closely grouped. The results were well described in quantitative detail by a simple model stating that each of a series of reinforcers increases preference, but that a reinforcer's effect is inversely related to its delay. The success of this model, which considers only delay of reinforcement, suggested that the overall rate of reinforcement for each alternative had no effect on choice between those alternatives.
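The simple model described above, in which each reinforcer adds to preference and a reinforcer's effect declines with its delay, is naturally written as a discounted sum. The hyperbolic form and single parameter K below are assumptions for illustration rather than the exact published equation.

```latex
% Sketch of a delay-only model: value is a sum over the reinforcers an
% alternative provides, each discounted by its own delay.
% n = number of reinforcers, d_i = delay from the choice response to the
% i-th reinforcer, A = value of an immediate reinforcer, K = discounting rate.
\[
  V \;=\; \sum_{i=1}^{n} \frac{A}{1 + K\, d_i}
\]
% Indifference between two alternatives is predicted when their summed
% values are equal; rate of reinforcement enters nowhere in the expression.
```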

11.
Key pecking of 4 pigeons was maintained under a multiple variable-interval 20-s variable-interval 120-s schedule of food reinforcement. When rates of key pecking were stable, a 5-s unsignaled, nonresetting delay to reinforcement separated the first peck after an interval elapsed from reinforcement in both components. Rates of pecking decreased substantially in both components. When rates were stable, the situation was changed such that the peck that began the 5-s delay also changed the color of the keylight for 0.5 s (i.e., the delay was briefly signaled). Rates increased to near-immediate reinforcement levels. In subsequent conditions, delays of 10 and 20 s, still briefly signaled, were tested. Although rates of key pecking during the component with the variable-interval 120-s schedule did not change appreciably across conditions, rates during the variable-interval 20-s component decreased greatly in 1 pigeon at the 10-s delay and decreased in all pigeons at the 20-s delay. In a control condition, the variable-interval 20-s schedule with 20-s delays was changed to a variable-interval 35-s schedule with 5-s delays, thus equating nominal rates of reinforcement. Rates of pecking increased to baseline levels. Rates of pecking, then, depended on the value of the briefly signaled delay relative to the programmed interfood times, rather than on the absolute delay value. These results are discussed in terms of similar findings in the literature on conditioned reinforcement, delayed matching to sample, and classical conditioning.
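A short worked check of the control comparison described above, using the schedule values given in the abstract (the delay-to-interfood ratio is only an illustrative summary, not necessarily the authors' own measure):

```python
# Both control conditions yield the same nominal interfood time (40 s), but
# the briefly signaled delay occupies very different fractions of it.
conditions = {
    "VI 20 s + 20-s delay": (20.0, 20.0),
    "VI 35 s + 5-s delay":  (35.0, 5.0),
}
for name, (vi_value, delay) in conditions.items():
    nominal_interfood = vi_value + delay
    print(name, "-> nominal interfood:", nominal_interfood,
          "s, delay/interfood:", round(delay / nominal_interfood, 3))
# VI 20 s + 20-s delay -> 40.0 s, ratio 0.5
# VI 35 s + 5-s delay  -> 40.0 s, ratio 0.125
```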

12.
Pigeons were presented with a concurrent-chains schedule in which terminal-link entries were assigned to two response keys on a percentage basis. The terminal links were fixed delays that sometimes ended with food and sometimes did not. In most conditions, 80% of the terminal links were assigned to one key, but a smaller percentage of the terminal links ended with food for this key, so the number of food reinforcers delivered by the two alternatives was equal. When the same terminal-link stimuli (orange houselights) were used for both alternatives, the pigeons showed a preference for whichever alternative delivered more frequent terminal links. When different terminal-link stimuli (green vs. red houselights) were used for the two alternatives, the pigeons showed a preference for whichever alternative delivered fewer terminal links when terminal-link durations were long, and no systematic preferences when terminal-link durations were short. This pattern of results was consistent with the predictions of Grace's (1994) contextual choice model. Preference for the alternative that delivered more frequent terminal links was usually stronger in the first few sessions of a condition than at the end of a condition, suggesting that the conditioned reinforcing effect of the additional terminal-link presentation was, in part, transitory.

13.
In two experiments the conditioned reinforcing and delayed discriminative stimulus functions of stimuli that signal delays to reinforcement were studied. Pigeons' pecks to a center key produced delayed-matching-to-sample trials according to a variable-interval 60-s (or 30-s in 1 pigeon) schedule (Experiment 1) or a multiple variable-interval 20-s variable-interval 120-s schedule (Experiment 2). The trials consisted of a 2-s illumination of one of two sample key colors followed by delays ranging across phases from 0.1 to 27.0 s followed in turn by the presentation of matching and nonmatching comparison stimuli on the side keys. Pecks to the key color that matched the sample were reinforced with 4-s access to grain. Under some conditions of Experiment 1, pecks to nonmatching comparison stimuli produced a 4-s blackout and the start of the next interval. Under other conditions of Experiment 1 and each condition of Experiment 2, pecks to nonmatching stimuli had no effect and trials ended only when pigeons pecked the other, matching stimulus and received food. The functions relating pretrial response rates to delays differed markedly from those relating matching-to-sample accuracy to delays. Specifically, response rates remained relatively high until the longest delays (15.0 to 27.0 s) were arranged, at which point they fell to low levels. Matching accuracy was high at short delays, but fell to chance at delays between 3.0 and 9.0 s. In Experiment 2, both matching accuracy and response rates remained high over a wider range of delays in the variable-interval 120-s component relative to the variable-interval 20-s component. The difference in matching accuracy between the components was not due to an increased tendency in the variable-interval 20-s component toward proactive interference following short intervals. Thus, under these experimental conditions the conditioned reinforcing and the delayed discriminative functions of the sample stimulus depended on the same variables (delay and variable-interval value), but were nevertheless dissociated.

14.
Two experiments with pigeons examined the relation of the duration of a signal for delay ("delay signal") to rates of key pecking. The first employed a multiple schedule comprised of two components with equal variable-interval 60-s schedules of 27-s delayed food reinforcement. In one component, a short (0.5-s) delay signal, presented immediately following the key peck that began the delay, was increased in duration across phases; in the second component the delay signal initially was equal to the length of the programmed delay (27 s) and was decreased across phases. Response rates prior to delays were an increasing function of delay-signal duration. As the delay signal was decreased in duration, response rates were generally higher than those obtained under identical delay-signal durations as the signal was increased in duration. In Experiment 2 a single variable-interval 60-s schedule of 27-s delayed reinforcement was used. Delay-signal durations were again increased gradually across phases. As in Experiment 1, response rates increased as the delay-signal duration was increased. Following the phase during which the signal lasted the entire delay, shorter delay-signal-duration conditions were introduced abruptly, rather than gradually as in Experiment 1, to determine whether the gradual shortening of the delay signal accounted for the differences observed in response rates under identical delay-signal conditions in Experiment 1. Response rates obtained during the second exposures to the conditions with shorter signals were higher than those observed under identical conditions as the signal duration was increased, as in Experiment 1. In both experiments, rates and patterns of responding during delays varied greatly across subjects and were not systematically related to delay-signal durations. The effects of the delay signal may be related to the signal's role as a discriminative stimulus for adventitiously reinforced intradelay behavior, or the delay signal may have served as a conditioned reinforcer by virtue of the temporal relation between it and presentation of food.

15.
Short-term remembering of discriminative stimuli in pigeons.
Pigeons learned to peck the left or right of two white keys depending on whether a red or a green stimulus was displayed on a third key. The opportunity to peck the white keys was then delayed for zero to six seconds after the red or green (to-be-remembered) stimulus. On half the trials, the feeder operated during the delay to interrupt behavior that might mediate discriminated responding. No events were scheduled on the remaining trials. In a later condition, the pigeons had the opportunity to peck the white keys during the delay. In general, accuracy decreased as delay increased in all conditions, but performance was least accurate following feeder operations and most accurate when pecking was allowed during the delay. The procedures may be analogous to varying the opportunity for rehearsal in studies of human short-term memory.

16.
Six pigeons were trained on a delayed red-green matching-to-sample task that arranged four delays within sessions. Matching responses intermittently produced either 1.5-s access to food or 4.5-s access to food, and nonmatching responses produced either 1.5-s or 4.5-s blackout. Two phases were conducted: a signaled phase in which the reinforcer magnitudes (small and large) were signaled by houselights (positioned either on the left or right of the chamber), and an unsignaled phase in which there was no correlation between reinforcer magnitude and houselight position. In both phases, the relative frequency with which red and green matching responses produced food was varied across five values. Both matching accuracy and the sensitivity of performance to the distribution of reinforcers for matching responses decreased with increasing delays in both phases. In addition, accuracy and reinforcer sensitivity were significantly lower on signaled small-reinforcer trials compared with accuracy and sensitivity values on signaled large-reinforcer trials and on both types of unsignaled trials. These results are discussed in the context of research on both nonhuman animal and human memory.
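The reinforcer "sensitivity" referred to above is commonly quantified with a generalized-matching (log-ratio) analysis; whether this exact form was used in the study is an assumption here.

```latex
% Generalized-matching form often used to estimate sensitivity to the
% distribution of reinforcers in delayed matching-to-sample.
% B = responses (e.g., red vs. green matching choices), R = reinforcers
% obtained for each choice type, a = sensitivity, b = bias.
\[
  \log\!\left(\frac{B_{\mathrm{red}}}{B_{\mathrm{green}}}\right)
  \;=\; a \,\log\!\left(\frac{R_{\mathrm{red}}}{R_{\mathrm{green}}}\right) \;+\; \log b
\]
% Smaller estimates of a at longer delays correspond to the reduced
% reinforcer sensitivity reported above.
```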

17.
Three severely mentally retarded adolescents were studied under discrete-trial procedures in which a choice was arranged between edible reinforcers that differed in magnitude and, in some conditions, delay. In the absence of delays the larger reinforcer was consistently chosen. Under conditions in which the smaller reinforcer was not delayed, increasing the delay to delivery of the larger reinforcer decreased the percentage of trials in which that reinforcer was chosen. All subjects directed the majority of choice responses to the smaller reinforcer when the larger reinforcer was sufficiently delayed, although the value at which this occurred differed across subjects. Under conditions in which the larger reinforcer initially was sufficiently delayed to result in preference for the smaller one, progressively increasing in 5-s increments the delay to both reinforcers increased the percentage of trials on which the larger reinforcer was chosen. At sufficiently long delays, 2 of the subjects consistently chose the larger, but more delayed, reinforcer, and the 3rd subject chose that reinforcer on half of the trials. These results are consistent with the findings of prior studies in which adult humans responded to terminate noise and pigeons responded to produce food.

18.
Pigeons chose between two alternatives that differed in the probability of reinforcement and the delay to reinforcement. A peck on the red key always produced a delay of 5 s and then a possible reinforcer. The probability of reinforcement for responding on this key varied from .05 to 1.0 in different conditions. A response on the green key produced a delay of adjustable duration and then a possible reinforcer, with the probability of reinforcement ranging from .25 to 1.0 in different conditions. The green-key delay was increased or decreased many times per session, depending on a subject's previous choices. The purpose of these adjustments was to estimate an indifference point, or a delay that resulted in a subject's choosing each alternative about equally often. In conditions where the probability of reinforcement was five times higher on the green key, the green-key delay averaged about 12 s at the indifference point. In conditions where the probability of reinforcement was twice as high on the green key, the green-key delay at the indifference point was about 8 s with high probabilities and about 6 s with low probabilities. An analysis based on these results and those from studies on delay of reinforcement suggests that pigeons' choices are relatively insensitive to variations in the probability of reinforcement between .2 and 1.0, but quite sensitive to variations in probability between .2 and 0.
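An illustrative calculation in the spirit of the equivalent-delay view discussed in several of these abstracts (an assumption-based sketch, not the authors' analysis): if a probabilistic reinforcer behaves like a reinforcer delivered after an expected cumulative delay of d / p, the equivalent delay lengthens steeply as p approaches 0.

```python
# Equivalent cumulative delay d / p for a 5-s red-key delay at several
# reinforcement probabilities (illustrative only).
d = 5.0  # seconds of red-key delay
for p in (1.0, 0.5, 0.2, 0.1, 0.05):
    print(f"p = {p:.2f}: equivalent delay = {d / p:.1f} s")
```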

19.
Influences of delay and rate of reinforcement on discrete-trial choice
An adjusting procedure was used to measure pigeons' preferences among alternatives that differed in the duration of a delay before reinforcement and of an intertrial interval (ITI) after reinforcement. In most conditions, a peck at a red key led to a fixed delay, followed by reinforcement, a fixed ITI, and then the beginning of the next trial. A peck at a green key led to an adjustable delay and reinforcement, and then the next trial began without an ITI. The purpose of the adjusting delay was to estimate an indifference point, or a delay that made a subject approximately indifferent between the two alternatives. As the ITI for the red key increased from 0 s to 60 s, the green-key delay at the indifference point increased systematically but only slightly. The fact that there was some increase showed that pigeons' choices were controlled by more than simply the delay to the next reinforcer. One interpretation of these results is that besides delay of reinforcement, rate of reinforcement also influenced choice. However, an analysis that ignored reinforcement rate, but considered the delays between a choice response and the reinforcers on subsequent trials, was able to account for most of the obtained increases in green-key delays. It was concluded that in this type of discrete-trial situation, rate of reinforcement exerts little control over choice behavior, and perhaps none at all.
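The kind of analysis described above, which ignores reinforcement rate but counts the delays from a choice response to reinforcers on subsequent trials, can be sketched as a discounted sum over successive reinforcers. The hyperbolic form, the K value, and the example delays below are assumptions for illustration only.

```python
# Sketch of a delay-only analysis: value each alternative by summing
# hyperbolically discounted reinforcers, including reinforcers earned on
# later trials (whose delays include intervening delays and ITIs).

def summed_value(delays_s, A=1.0, K=0.2):
    """Summed hyperbolic value of reinforcers at the given delays (seconds)."""
    return sum(A / (1.0 + K * d) for d in delays_s)

# The current trial's reinforcer dominates; a second reinforcer a full trial
# later adds comparatively little, which is why lengthening the red-key ITI
# shifted indifference points only slightly.
print(summed_value([10.0]))               # current-trial reinforcer only
print(summed_value([10.0, 10.0 + 60.0]))  # plus one reinforcer a trial later
```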

20.
Three experiments were conducted to test an interpretation of the response-rate-reducing effects of unsignaled nonresetting delays to reinforcement in pigeons. According to this interpretation, rates of key pecking decrease under these conditions because key pecks alternate with hopper-observing behavior. In Experiment 1, 4 pigeons pecked a food key that raised the hopper provided that pecks on a different variable-interval-schedule key met the requirements of a variable-interval 60-s schedule. The stimuli associated with the availability of the hopper (i.e., houselight and keylight off, food key illuminated, feedback following food-key pecks) were gradually removed across phases while the dependent relation between hopper availability and variable-interval-schedule key pecks was maintained. Rates of pecking the variable-interval-schedule key decreased to low levels and rates of food-key pecks increased when variable-interval-schedule key pecks did not produce hopper-correlated stimuli. In Experiment 2, pigeons initially pecked a single key under a variable-interval 60-s schedule. Then the dependent relation between hopper presentation and key pecks was eliminated by arranging a variable-time 60-s schedule. When rates of pecking had decreased to low levels, conditions were changed so that pecks during the final 5 s of each interval changed the keylight color from green to amber. When pecking produced these hopper-correlated stimuli, pecking occurred at high rates, despite the absence of a peck-food dependency. When peck-produced changes in keylight color were uncorrelated with food, rates of pecking fell to low levels. In Experiment 3, details (obtained delays, interresponse-time distributions, eating times) of the transition from high to low response rates produced by the introduction of a 3-s unsignaled delay were tracked from session to session in 3 pigeons that had been initially trained to peck under a conventional variable-interval 60-s schedule. Decreases in response rates soon after the transition to delayed reinforcement were accompanied by decreases in eating times and alterations in interresponse-time distributions. As response rates decreased and became stable, eating times increased and their variability decreased. These findings support an interpretation of the effects of delayed reinforcement that emphasizes the importance of hopper-observing behavior.
