Similar Articles
Found 20 similar articles (search time: 31 ms)
1.
The generality of the molar view of behavior was extended to the study of choice with rats, showing the usefulness of studying order at various levels of extendedness. Rats' presses on two levers produced food according to concurrent variable-interval variable-interval schedules. Seven different reinforcer ratios were arranged within each session, without cues identifying them, and separated by blackouts. To alternate between levers, rats pressed on a third changeover lever. Choice changed rapidly with changes in component reinforcer ratio, and more presses occurred on the lever with the higher reinforcer rate. With continuing reinforcers, choice shifted progressively in the direction of the reinforced lever, but shifted more slowly with each new reinforcer. Sensitivity to reinforcer ratio, as estimated by the generalized matching law, reached an average of 0.9 and exceeded that documented in previous studies with pigeons. Visits to the more-reinforced lever preceded by a reinforcer from that lever increased in duration, while all visits to the less-reinforced lever decreased in duration. Thus, the rats' performances moved faster toward "fix and sample" than did pigeons' performances in previous studies. Analysis of the effects of sequences of reinforcer sources indicated that sequences of five to seven reinforcers might have sufficed for studying local effects of reinforcers with rats. This study supports the idea that reinforcer sequences control choice between reinforcers, pulses in preference, and visits following reinforcers.
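The sensitivity value of 0.9 reported above comes from fitting the generalized matching law, log(B1/B2) = a·log(r1/r2) + log b, where a is sensitivity and log b is bias. A minimal sketch of such a fit using plain least squares; the seven reinforcer ratios and the response ratios below are hypothetical, constructed so that a = 0.9 exactly:

```python
import math

def fit_generalized_matching(behavior_ratios, reinforcer_ratios):
    """Least-squares fit of log(B1/B2) = a*log(r1/r2) + log(b).

    Returns (sensitivity a, log bias log_b).
    """
    xs = [math.log10(r) for r in reinforcer_ratios]
    ys = [math.log10(b) for b in behavior_ratios]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    a = sxy / sxx            # slope = sensitivity to reinforcer ratio
    log_b = my - a * mx      # intercept = log bias
    return a, log_b

# Hypothetical component reinforcer ratios and the response ratios they
# produced (built to show undermatching with a = 0.9 and no bias):
r = [27, 9, 3, 1, 1 / 3, 1 / 9, 1 / 27]
b = [r_i ** 0.9 for r_i in r]
a, log_b = fit_generalized_matching(b, r)
print(f"sensitivity a = {a:.2f}")  # → sensitivity a = 0.90
```

Sensitivity below 1 (undermatching) means response ratios are less extreme than reinforcer ratios; a value near 0.9 is unusually high for this kind of procedure.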

2.
Reinforcers may increase operant responding via a response-strengthening mechanism whereby the probability of the preceding response increases, or via some discriminative process whereby the response more likely to provide subsequent reinforcement becomes, itself, more likely. We tested these two accounts. Six pigeons responded for food reinforcers in a two-alternative switching-key concurrent schedule. Within a session, equal numbers of reinforcers were arranged for responses to each alternative. Those reinforcers strictly alternated between the two alternatives in half the conditions, and were randomly allocated to the alternatives in half the conditions. We also varied, across conditions, the alternative that became available immediately after a reinforcer. Preference after a single reinforcer always favored the immediately available alternative, regardless of the local probability of a reinforcer on that alternative (0 or 1 in the strictly alternating conditions, .5 in the random conditions). Choice then reflected the local reinforcer probabilities, suggesting some discriminative properties of reinforcement. At a more extended level, successive same-alternative reinforcers from an alternative systematically shifted preference towards that alternative, regardless of which alternative was available immediately after a reinforcer. There was no similar shift when successive reinforcers came from alternating sources. These more temporally extended results may suggest a strengthening function of reinforcement, or an enhanced ability to respond appropriately to "win-stay" contingencies over "win-shift" contingencies.

3.
Choice behavior among two alternatives has been widely researched, but fewer studies have examined the effect of multiple (more than two) alternatives on choice. Two experiments investigated whether changing the overall reinforcer rate affected preference among three and four concurrently scheduled alternatives. Experiment 1 trained six pigeons on concurrent schedules with three alternatives available simultaneously. These alternatives arranged reinforcers in a ratio of 9:3:1 with the configuration counterbalanced across pigeons. The overall rate of reinforcement was varied across conditions. Preference between the pair of keys arranging the 9:3 reinforcer ratio was less extreme than the pair arranging the 3:1 reinforcer ratio regardless of overall reinforcer rate. This difference was attributable to the richer alternative receiving fewer responses per reinforcer than the other alternatives. Experiment 2 trained pigeons on concurrent schedules with four alternatives available simultaneously. These alternatives arranged reinforcers in a ratio of 8:4:2:1, and the overall reinforcer rate was varied. Next, two of the alternatives were put into extinction and the random interval duration was changed from 60 s to 5 s. The ratio of absolute response rates was independent of interval length across all conditions. In both experiments, an analysis of sequences of visits following each reinforcer showed that the pigeons typically made their first response to the richer alternative irrespective of which alternative was just reinforced. Performance on these three- and four-alternative concurrent schedules is not easily extrapolated from corresponding research using two-alternative concurrent schedules.

4.
Six pigeons were trained in a procedure in which sessions included seven unsignaled components, each offering two pecking keys, and each providing a potentially different reinforcer ratio between the two keys. Across conditions, various combinations of reinforcer ratios and reinforcer-magnitude ratios were used to create unequal reinforcer distributions between the two alternatives when averaged across a session. The results extended previous research using the same basic procedure that had included only reinforcer distributions symmetrical around 1:1. Data analyses suggested that the variables controlling choice operated at a number of levels: First, individual reinforcers had local effects on choice; second, sequences of successive reinforcers obtained at the same alternative (continuations) had cumulative effects; and, third, when these sequences themselves occurred with greater frequency, their effects further cumulated. A reinforcer obtained at the other alternative following a sequence of continuations (a discontinuation) had a large effect and apparently reset choice to levels approximating the sessional reinforcer ratio.

5.
Theories of probabilistic reinforcement.
In three experiments, pigeons chose between two alternatives that differed in the probability of reinforcement and the delay to reinforcement. A peck at a red key led to a delay of 5 s and then a possible reinforcer. A peck at a green key led to an adjusting delay and then a certain reinforcer. This delay was adjusted over trials so as to estimate an indifference point, or a duration at which the two alternatives were chosen about equally often. In Experiments 1 and 2, the intertrial interval was varied across conditions, and these variations had no systematic effects on choice. In Experiment 3, the stimuli that followed a choice of the red key differed across conditions. In some conditions, a red houselight was presented for 5 s after each choice of the red key. In other conditions, the red houselight was present on reinforced trials but not on nonreinforced trials. Subjects exhibited greater preference for the red key in the latter case. The results were used to evaluate four different theories of probabilistic reinforcement. The results were most consistent with the view that the value or effectiveness of a probabilistic reinforcer is determined by the total time per reinforcer spent in the presence of stimuli associated with the probabilistic alternative. According to this view, probabilistic reinforcers are analogous to reinforcers that are delivered after variable delays.
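The favored account, that probabilistic reinforcers act like reinforcers delivered after variable delays, can be sketched by treating a reinforcer with probability p per trial as a certain reinforcer at the mean delay to success, D/p (each failed trial adds another wait in the associated stimuli), and then discounting hyperbolically. The hyperbolic form and the rate parameter k below are assumptions for illustration, not quantities from the study:

```python
def hyperbolic_value(amount, delay, k=0.2):
    """Hyperbolic-decay value of a delayed reinforcer: V = A / (1 + k*D).
    The discounting rate k is hypothetical."""
    return amount / (1.0 + k * delay)

def probabilistic_value(amount, prob, delay, k=0.2):
    """Value of a reinforcer delivered with probability `prob` after
    `delay` seconds, treated as a certain reinforcer at the mean delay
    to success, delay / prob."""
    return hyperbolic_value(amount, delay / prob, k)

# A 5-s delayed reinforcer at p = .5 is valued like a certain
# reinforcer delayed 10 s, so the adjusting key should settle near 10 s:
assert probabilistic_value(1.0, 0.5, 5.0) == hyperbolic_value(1.0, 10.0)
```

On this reading, the indifference delay on the certain (green) key should track D/p rather than D alone, which is why extra time spent in the red-key stimuli on nonreinforced trials reduces the red key's value.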

6.
Five pigeons were trained on concurrent variable-interval schedules in a switching-key procedure. The overall rate of reinforcement was constant in all conditions, and the ratios of reinforcers obtainable on the two alternatives were varied over seven levels. Each condition remained in effect for 65 sessions, and the last 50 sessions of data from each condition were analyzed. The most recently obtained reinforcer had the largest effect on current preference, but each of the eight previously obtained reinforcers had a small measurable effect. These effects were larger when the reinforcer ratio was more extreme. A longer term effect of reinforcement was also evident, which changed as a function of the reinforcer ratio arranged. More local analyses showed regularities at a reinforcer-by-reinforcer level and large transient movements in preference toward the just-reinforced alternative immediately following reinforcers, followed by a return to stable levels that were related to the reinforcer ratio in effect. The present data suggest that the variables that control choice have both short- and long-term effects and that the short-term effects increased when the reinforcer ratios arranged were more extreme.

7.
In concurrent schedules, reinforcers are often followed by a brief period of heightened preference for the just-productive alternative. Such 'preference pulses' may reflect local effects of reinforcers on choice. However, similar pulses may occur after nonreinforced responses, suggesting that pulses after reinforcers are partly unrelated to reinforcer effects. McLean, Grace, Pitts, and Hughes (2014) recommended subtracting preference pulses after responses from preference pulses after reinforcers, to construct residual pulses that represent only reinforcer effects. Thus, a reanalysis of existing choice data is necessary to determine whether changes in choice after reinforcers in previous experiments were actually related to reinforcers. In the present paper, we reanalyzed data from choice experiments in which reinforcers served different functions. We compared local choice, mean visit length, and visit-length distributions after reinforcers and after nonreinforced responses. Our reanalysis demonstrated the utility of McLean et al.'s preference-pulse correction for determining the effects of reinforcers on choice. However, visit analyses revealed that residual pulses may not accurately represent reinforcer effects, and reinforcer effects were clearer in visit analyses than in local-choice analyses. The best way to determine the effects of reinforcers on choice may be to conduct visit analyses in addition to local-choice analyses.

8.
The contingencies in each alternative of concurrent procedures consist of reinforcement for staying and reinforcement for switching. For the stay contingency, behavior directed at one alternative earns and obtains reinforcers. For the switch contingency, behavior directed at one alternative earns reinforcers but behavior directed at the other alternative obtains them. In Experiment 1, responses on the main lever, in S1, incremented stay and switch schedules and obtained a stay reinforcer when it became available. Responses on the switch lever changed S1 to S2 and obtained switch reinforcers when available. In S2, neither responses on the main lever nor on the switch lever were reinforced, but a switch response changed S2 to S1. Run lengths and visit durations were a function of the ratio of the scheduled probabilities of reinforcement (staying/switching). From run lengths and visit durations, traditional concurrent performance was synthesized, and that synthesized performance was consistent with the generalized matching law. Experiment 2 replicated and extended this analysis to concurrent variable-interval schedules. The synthesized results challenge any theory of matching that requires a comparison among the alternatives.

9.
Choice between single and multiple delayed reinforcers.
Pigeons chose between alternatives that differed in the number of reinforcers and in the delay to each reinforcer. A peck on a red key produced the same consequences on every trial within a condition, but between conditions the number of reinforcers varied from one to three and the reinforcer delays varied between 5 s and 30 s. A peck on a green key produced a delay of adjustable duration and then a single reinforcer. The green-key delay was increased or decreased many times per session, depending on a subject's previous choices, which permitted estimation of an indifference point, or a delay at which a subject chose each alternative about equally often. The indifference points decreased systematically with more red-key reinforcers and with shorter red-key delays. The results did not support the suggestion of Moore (1979) that multiple delayed reinforcers have no effect on preference unless they are closely grouped. The results were well described in quantitative detail by a simple model stating that each of a series of reinforcers increases preference, but that a reinforcer's effect is inversely related to its delay. The success of this model, which considers only delay of reinforcement, suggested that the overall rate of reinforcement for each alternative had no effect on choice between those alternatives.
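The simple model described above, in which each of a series of reinforcers increases preference with an effect inversely related to its delay, is commonly written as a summed hyperbolic discount. A sketch under that assumption, with unit reinforcer amounts and a hypothetical discount rate k (not the fitted value from the study):

```python
def series_value(delays, k=1.0):
    """Summed hyperbolic value of a series of equal reinforcers, each
    discounted by its own delay: V = sum(1 / (1 + k*d))."""
    return sum(1.0 / (1.0 + k * d) for d in delays)

def indifference_delay(delays, k=1.0):
    """Adjusting delay D at which a single reinforcer, 1 / (1 + k*D),
    matches the series value V: D = (1/V - 1) / k (requires V < 1)."""
    v = series_value(delays, k)
    return (1.0 / v - 1.0) / k

# Adding later reinforcers to the red key raises its value, so the
# single-reinforcer (green-key) indifference delay shortens:
one = indifference_delay([5.0])             # lone 5-s reinforcer
three = indifference_delay([5.0, 10.0, 15.0])
assert three < one
```

With one reinforcer the indifference delay simply equals the red-key delay; each added delayed reinforcer pulls it downward, reproducing the systematic decrease the abstract reports.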

10.
Previous research suggested that allocation of responses on concurrent schedules of wheel-running reinforcement was less sensitive to schedule differences than typically observed with more conventional reinforcers. To assess this possibility, 16 female Long Evans rats were exposed to concurrent FR FR schedules of reinforcement and the schedule value on one alternative was systematically increased. In one condition, the reinforcer on both alternatives was .1 ml of 7.5% sucrose solution; in the other, it was a 30-s opportunity to run in a wheel. Results showed that the average ratio at which greater than 90% of responses were allocated to the unchanged alternative was higher with wheel-running reinforcement. As the ratio requirement was initially increased, responding strongly shifted toward the unchanged alternative with sucrose, but not with wheel running. Instead, responding initially increased on both alternatives, then subsequently shifted toward the unchanged alternative. Furthermore, changeover responses as a percentage of total responses decreased with sucrose, but not wheel-running reinforcement. Finally, for some animals, responding on the increasing ratio alternative decreased as the ratio requirement increased, but then stopped and did not decline with further increments. The implications of these results for theories of choice are discussed.

11.
Four pigeons were trained on two-key concurrent variable-interval schedules with no changeover delay. In Phase 1, relative reinforcers on the two alternatives were varied over five conditions from .1 to .9. In Phases 2 and 3, we instituted a molar feedback function between relative choice in an interreinforcer interval and the probability of reinforcers on the two keys ending the next interreinforcer interval. The feedback function was linear, and was negatively sloped so that more extreme choice in an interreinforcer interval made it more likely that a reinforcer would be available on the other key at the end of the next interval. The slope of the feedback function was -1 in Phase 2 and -3 in Phase 3. We varied relative reinforcers in each of these phases by changing the intercept of the feedback function. Little effect of the feedback functions was discernible at the local (interreinforcer interval) level, but choice measured at an extended level across sessions was strongly and significantly decreased by increasing the negative slope of the feedback function.
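The molar feedback function above can be sketched as a clamped linear map from the current interval's choice proportion to the probability that the next reinforcer is assigned to the same key. The intercept value and the clamping below are illustrative assumptions, not the authors' exact arrangement; only the negative slopes (-1, -3) are taken from the abstract:

```python
def next_reinforcer_probability(choice_proportion, slope=-1.0, intercept=1.0):
    """Probability that the next reinforcer is assigned to key 1, as a
    linear function of the proportion of key-1 responses in the current
    interreinforcer interval. A negative slope pushes reinforcement
    toward the other key as choice grows extreme. Clamped to [0, 1]."""
    p = intercept + slope * choice_proportion
    return max(0.0, min(1.0, p))

# With slope -1 and intercept 1, exclusive key-1 responding guarantees
# the next reinforcer lands on key 2, and indifference is self-sustaining:
assert next_reinforcer_probability(1.0) == 0.0
assert next_reinforcer_probability(0.5) == 0.5
```

A steeper slope (as in Phase 3) punishes deviations from indifference more sharply, consistent with the finding that extended-level choice became less extreme as the negative slope increased.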

12.
Reporting contingencies of reinforcement in concurrent schedules
Five pigeons were trained on concurrent variable-interval schedules in which two intensities of yellow light served as discriminative stimuli in a switching-key procedure. A conditional discrimination involving a simultaneous choice between red and green keys followed every reinforcer obtained from both alternatives. A response to the red side key was occasionally reinforced if the prior reinforcer had been obtained from the bright alternative, and a response to the green side key was occasionally reinforced if the prior reinforcer had been obtained from the dim alternative. Measures of the discriminability between the concurrent-schedule alternatives were obtained by varying the reinforcer ratio for correct red and correct green responses across conditions in two parts. Part 1 arranged equal rates of reinforcement in the concurrent schedule, and Part 2 provided a 9:1 concurrent-schedule reinforcer ratio. Part 3 arranged a 1:9 reinforcer ratio in the conditional discrimination, and the concurrent-schedule reinforcer ratio was varied across conditions. Varying the conditional discrimination reinforcer ratio did not affect response allocation in the concurrent schedule, but varying the concurrent-schedule reinforcer ratio did affect conditional discrimination performance. These effects were incompatible with a contingency-discriminability model of concurrent-schedule performance (Davison & Jenkins, 1985), which implies a constant discriminability parameter that is independent of the obtained reinforcer ratio. However, a more detailed analysis of conditional discrimination performance showed that the discriminability between the concurrent-schedule alternatives decreased with time since changing over to an alternative. 
This effect, combined with aspects of the temporal distribution of reinforcers obtained in the concurrent schedules, qualitatively predicted the molar results and identified the conditions that operate whenever contingency discriminability remains constant.

13.
We conducted two studies extending basic matching research on self-control and impulsivity to the investigation of choices of students diagnosed as seriously emotionally disturbed. In Study 1 we examined the interaction between unequal rates of reinforcement and equal versus unequal delays to reinforcer access on performance of concurrently available sets of math problems. The results of a reversal design showed that when delays to reinforcer access were the same for both response alternatives, the time allocated to each was approximately proportional to obtained reinforcement. When the delays to reinforcer access differed between the response alternatives, there was a bias toward the response alternative and schedule with the lower delays, suggesting impulsivity (i.e., immediate reinforcer access overrode the effects of rate of reinforcement). In Study 2 we examined the interactive effects of reinforcer rate, quality, and delay. Conditions involving delayed access to the high-quality reinforcers on the rich schedule (with immediate access to low-quality reinforcers earned on the lean schedule) were alternated with immediate access to low-quality reinforcers on the rich schedule (with delayed access to high-quality reinforcers on the lean schedule) using a reversal design. With 1 student, reinforcer quality overrode the effects of both reinforcer rate and delay to reinforcer access. The other student tended to respond exclusively to the alternative associated with immediate access to reinforcers. The studies demonstrate a methodology based on matching theory for determining influential dimensions of reinforcers governing individuals' choices.
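The matching-based methodology these studies build on is usually formalized as a concatenated generalized matching law, in which reinforcer rate, quality, and delay ratios each carry their own sensitivity exponent. A sketch with hypothetical sensitivities of 1 (delay enters inversely, so immediacy can override rate, as in the impulsive pattern described):

```python
def predicted_response_ratio(rate_ratio, quality_ratio, delay_ratio,
                             s_rate=1.0, s_qual=1.0, s_delay=1.0, bias=1.0):
    """Concatenated generalized matching:
    B1/B2 = bias * (R1/R2)^s_rate * (Q1/Q2)^s_qual * (D1/D2)^-s_delay.
    delay_ratio is D1/D2; its exponent is negative because the
    shorter delay is favored. All parameters here are hypothetical."""
    return (bias * rate_ratio ** s_rate * quality_ratio ** s_qual
            * delay_ratio ** -s_delay)

# A 3:1 reinforcer-rate advantage on side 1 is overridden by a 10:1
# delay disadvantage, shifting preference to the immediate alternative:
ratio = predicted_response_ratio(3.0, 1.0, 10.0)
assert ratio < 1.0
```

Raising s_qual relative to s_rate and s_delay reproduces the first student's pattern (quality dominating), while a very large s_delay reproduces the second student's near-exclusive preference for immediacy.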

14.
Six pigeons were trained in sessions composed of seven components, each arranged with a different concurrent-schedule reinforcer ratio. These components occurred in an irregular order with equal frequency, separated by 10-s blackouts. No signals differentiated the different reinforcer ratios. Conditions lasted 50 sessions, and data were collected from the last 35 sessions. In Part 1, the arranged overall reinforcer rate was 2.22 reinforcers per minute. Over conditions, number of reinforcers per component was varied from 4 to 12. In Part 2, the overall reinforcer rate was six per minute, with both 4 and 12 reinforcers per component. Within components, log response-allocation ratios adjusted rapidly as more reinforcers were delivered in the component, and the slope of the choice relation (sensitivity) leveled off at moderately high levels after only about eight reinforcers. When the carryover from previous components was taken into account, the number of reinforcers in the components appeared to have no systematic effect on the speed at which behavior changed after a component started. Consequently, sensitivity values at each reinforcer delivery were superimposable. However, adjustment to changing reinforcer ratios was faster, and reached greater sensitivity values, when overall reinforcer rate was higher. Within a component, each successive reinforcer from the same alternative ("confirming") had a smaller effect than the one before, but single reinforcers from the other alternative ("disconfirming") always had a large effect. Choice in the prior component carried over into the next component, and its effects could be discerned even after five or six subsequent reinforcers.

15.
Five pigeons were trained on pairs of concurrent variable-interval schedules in a switching-key procedure. The arranged overall rate of reinforcement was constant in all conditions, and the reinforcer-magnitude ratios obtained from the two alternatives were varied over five levels. Each condition remained in effect for 65 sessions and the last 50 sessions of data from each condition were analyzed. At a molar level of analysis, preference was described well by a version of the generalized matching law, consistent with previous reports. More local analyses showed that recently obtained reinforcers had small measurable effects on current preference, with the most recently obtained reinforcer having a substantially larger effect. Larger reinforcers resulted in larger and longer preference pulses, and a small preference was maintained for the larger-magnitude alternative even after long inter-reinforcer intervals. These results are consistent with the notion that the variables controlling choice have both short- and long-term effects. Moreover, they suggest that control by reinforcer magnitude is exerted in a manner similar to control by reinforcer frequency. Lower sensitivities when reinforcer magnitude is varied are likely to be due to equal frequencies of different sized preference pulses, whereas higher sensitivities when reinforcer rates are varied might result from changes in the frequencies of different sized preference pulses.
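The local pattern reported here and in several of the abstracts above, where every recent reinforcer has a small effect and the most recent a much larger one, is often modeled with geometrically decaying lag weights on the sources of past reinforcers. A sketch with a hypothetical decay parameter (the weighting scheme is an illustration, not the analysis used in the study):

```python
def lag_weighted_preference(reinforcer_sources, decay=0.6):
    """Predicted log response ratio from recent reinforcers.

    reinforcer_sources: sequence of +1 (left reinforcer) or -1 (right),
    most recent last. Each reinforcer's contribution shrinks
    geometrically with its lag, so the most recent dominates."""
    log_pref = 0.0
    for lag, source in enumerate(reversed(reinforcer_sources)):
        log_pref += source * decay ** lag
    return log_pref

# A run of left reinforcers builds preference; one "disconfirming"
# right reinforcer pulls it sharply back toward indifference:
run = lag_weighted_preference([+1, +1, +1])
after_switch = lag_weighted_preference([+1, +1, +1, -1])
assert after_switch < run
```

The transient spike contributed by the lag-0 term is one way to describe a preference pulse: it is large immediately after a reinforcer and decays back toward the level set by the longer history.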

16.
In a discrete-trials procedure with pigeons, a response on a green key led to a 4-s delay (during which green houselights were lit) and then a reinforcer might or might not be delivered. A response on a red key led to a delay of adjustable duration (during which red houselights were lit) and then a certain reinforcer. The delay was adjusted so as to estimate an indifference point--a duration for which the two alternatives were equally preferred. Once the green key was chosen, a subject had to continue to respond on the green key until a reinforcer was delivered. Each response on the green key, plus the 4-s delay that followed every response, was called one "link" of the green-key schedule. Subjects showed much greater preference for the green key when the number of links before reinforcement was variable (averaging four) than when it was fixed (always exactly four). These findings are consistent with the view that probabilistic reinforcers are analogous to reinforcers delivered after variable delays. When successive links were separated by 4-s or 8-s "interlink intervals" with white houselights, preference for the probabilistic alternative decreased somewhat for 2 subjects but was unaffected for the other 2 subjects. When the interlink intervals had the same green houselights that were present during the 4-s delays, preference for the green key decreased substantially for all subjects. These results provided mixed support for the view that preference for a probabilistic reinforcer is inversely related to the duration of conditioned reinforcers that precede the delivery of food.

17.
Discrete-trial choice in pigeons: Effects of reinforcer magnitude
The preference of pigeons for large reinforcers which occasionally followed a response versus small reinforcers which invariably followed a response was studied in a discrete-trial situation. Two differently colored keys were associated with the two reinforcement alternatives, and preference was measured as the proportion of choice trials on which the key associated with uncertain reinforcement was pecked. A combination of choice and guidance trials insured that received distributions of reinforcement equalled the scheduled distributions. For five of six subjects, preference for the uncertain reinforcer appeared to be a linear function of the magnitude of the certain reinforcer. In addition, there was greater preference for the response alternative associated with uncertain reinforcement than would be expected on the basis of net reinforcer value.

18.
Six pigeons were trained to respond on two keys, each of which provided reinforcers on an arithmetic variable-interval schedule. These concurrent schedules ran nonindependently with a 2-s changeover delay. Six sets of conditions were conducted. Within each set of conditions the ratio of reinforcers available on the two alternatives was varied, but the arranged overall reinforcer rate remained constant. Each set of conditions used a different overall reinforcer rate, ranging from 0.22 reinforcers per minute to 10 reinforcers per minute. The generalized matching law fit the data from each set of conditions, but sensitivity to reinforcer frequency (a) decreased as the overall reinforcer rate decreased for both time allocation and response allocation based analyses of the data. Overall response rates did not vary with changes in relative reinforcer rate, but decreased with decreases in overall reinforcer rate. Changeover rates varied as a function of both relative and overall reinforcer rates. However, as explanations based on changeover rate seem unable to deal with the changes in generalized matching sensitivity, discrimination accounts of choice may offer a more promising interpretation.

19.
Parallel experiments with rats and pigeons examined reasons for previous findings that in choices with probabilistic delayed reinforcers, rats' choices were affected by the time between trials whereas pigeons' choices were not. In both experiments, the animals chose between a standard alternative and an adjusting alternative. A choice of the standard alternative led to a short delay (1 s or 3 s), and then food might or might not be delivered. If food was not delivered, there was an "interlink interval," and then the animal was forced to continue to select the standard alternative until food was delivered. A choice of the adjusting alternative always led to food after a delay that was systematically increased and decreased over trials to estimate an indifference point--a delay at which the two alternatives were chosen about equally often. Under these conditions, the indifference points for both rats and pigeons increased as the interlink interval increased from 0 s to 20 s, indicating decreased preference for the probabilistic reinforcer with longer time between trials. The indifference points from both rats and pigeons were well described by the hyperbolic-decay model. In the last phase of each experiment, the animals were not forced to continue selecting the standard alternative if food was not delivered. Under these conditions, rats' choices were affected by the time between trials whereas pigeons' choices were not, replicating results of previous studies. The differences between the behavior of rats and pigeons appear to be the result of procedural details, not a fundamental difference in how these two species make choices with probabilistic delayed reinforcers.

20.
Sensitivity to reinforcer duration in a self-control procedure
In a concurrent-chains procedure, pigeons' responses on left and right keys were followed by reinforcers of different durations at different delays following the choice responses. Three pairs of reinforcer delays were arranged in each session, and reinforcer durations were varied over conditions. In Experiment 1 reinforcer delays were unequal, and in Experiment 2 reinforcer delays were equal. In Experiment 1 preference reversal was demonstrated in that an immediate short reinforcer was chosen more frequently than a longer reinforcer delayed 6 s from the choice, whereas the longer reinforcer was chosen more frequently when delays to both reinforcers were lengthened. In both experiments, choice responding was more sensitive to variations in reinforcer duration at overall longer reinforcer delays than at overall shorter reinforcer delays, independently of whether fixed-interval or variable-interval schedules were arranged in the choice phase. We concluded that preference reversal results from a change in sensitivity of choice responding to ratios of reinforcer duration as the delays to both reinforcers are lengthened.
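The preference reversal demonstrated in Experiment 1 is the pattern predicted by hyperbolic delay discounting, V = A / (1 + kD): adding a common delay to both alternatives shrinks the value of the immediate small reinforcer proportionally more than that of the already-delayed large one. A sketch with hypothetical amounts, delays, and discount rate k (chosen only to illustrate the crossover, not fitted to these data):

```python
def hyperbolic_value(amount, delay, k=0.5):
    """Hyperbolic-decay value of a reinforcer: V = A / (1 + k*D).
    Amount, delay, and k here are hypothetical."""
    return amount / (1.0 + k * delay)

# Small reinforcer now vs. a larger one delayed 6 s: impulsive choice.
small_now  = hyperbolic_value(2.0, 0.0)    # 2.0
large_soon = hyperbolic_value(6.0, 6.0)    # 1.5
assert small_now > large_soon

# Lengthen both delays by a common 20 s and preference reverses:
small_later = hyperbolic_value(2.0, 20.0)
large_later = hyperbolic_value(6.0, 26.0)
assert large_later > small_later
```

Because the hyperbola falls steeply near zero delay and flattens thereafter, a fixed added delay costs the immediate alternative far more value, which is one way to read the finding that sensitivity to duration ratios grows as both delays are lengthened.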

