Similar articles
20 similar articles found
1.
Human subjects were exposed to a concurrent-chains schedule in which reinforcer amounts, delays, or both were varied in the terminal links, and consummatory responses were required to receive points that were later exchangeable for money. Two independent variable-interval 30-s schedules were in effect during the initial links, and delay periods were defined by fixed-time schedules. In Experiment 1, subjects were exposed to three different pairs of reinforcer amounts and delays, and sensitivity to reinforcer amount and delay was determined based on the generalized matching law. The relative responding (choice) of most subjects was more sensitive to reinforcer amount than to reinforcer delay. In Experiment 2, subjects chose between immediate smaller reinforcers and delayed larger reinforcers in five conditions, with and without timeout periods following the shorter delay; reinforcer amounts and delays were combined so that local reinforcement density (i.e., points per delay) and overall reinforcement density (i.e., points per total time) made different predictions. In most conditions, subjects' choices were qualitatively in accord with the predictions from the overall reinforcement density, calculated as the ratio of reinforcer amount to total time. Therefore, overall reinforcement density appears to influence human preference in the present self-control choice situation.
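The sensitivity estimates described above come from the generalized matching law. The sketch below shows one way such estimates could be computed, along with the overall-reinforcement-density calculation used in Experiment 2; the response counts, amounts, delays, and helper names are hypothetical, not values from the study.

```python
import numpy as np

# Hypothetical per-condition data: response counts on the two initial links
# (B1, B2), terminal-link reinforcer amounts in points (A1, A2), and delays in
# seconds (D1, D2). None of these values come from the study.
conditions = [
    #  B1,  B2, A1, A2, D1, D2
    (420, 180, 10,  5,  5, 15),
    (390, 210, 10,  5, 10, 10),
    (250, 350,  5, 10, 15,  5),
]

# Concatenated generalized matching law:
#   log(B1/B2) = sA*log(A1/A2) + sD*log(D2/D1) + log(b)
# (the delay ratio is inverted because shorter delays are preferred).
X, y = [], []
for B1, B2, A1, A2, D1, D2 in conditions:
    X.append([np.log10(A1 / A2), np.log10(D2 / D1), 1.0])
    y.append(np.log10(B1 / B2))
sA, sD, log_b = np.linalg.lstsq(np.array(X), np.array(y), rcond=None)[0]
print(f"sensitivity to amount = {sA:.2f}, to delay = {sD:.2f}, log bias = {log_b:.2f}")

# Overall reinforcement density used in Experiment 2: points per unit of total
# time (delay plus any other time in the trial cycle).
def overall_density(points, delay_s, other_time_s=0.0):
    return points / (delay_s + other_time_s)
```

With these illustrative numbers the fitted amount sensitivity exceeds the delay sensitivity, mirroring the pattern reported for Experiment 1.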

2.
Choice between single and multiple delayed reinforcers.
Pigeons chose between alternatives that differed in the number of reinforcers and in the delay to each reinforcer. A peck on a red key produced the same consequences on every trial within a condition, but between conditions the number of reinforcers varied from one to three and the reinforcer delays varied between 5 s and 30 s. A peck on a green key produced a delay of adjustable duration and then a single reinforcer. The green-key delay was increased or decreased many times per session, depending on a subject's previous choices, which permitted estimation of an indifference point, or a delay at which a subject chose each alternative about equally often. The indifference points decreased systematically with more red-key reinforcers and with shorter red-key delays. The results did not support the suggestion of Moore (1979) that multiple delayed reinforcers have no effect on preference unless they are closely grouped. The results were well described in quantitative detail by a simple model stating that each of a series of reinforcers increases preference, but that a reinforcer's effect is inversely related to its delay. The success of this model, which considers only delay of reinforcement, suggested that the overall rate of reinforcement for each alternative had no effect on choice between those alternatives.
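A minimal sketch of the kind of model described above, in which each reinforcer in a series contributes value inversely related to its delay. The hyperbolic form 1/(1 + K·d), the parameter K, and the delays used are assumptions for illustration; the abstract states only that a reinforcer's effect is inversely related to its delay.

```python
# The hyperbolic form 1/(1 + K*d) is an illustrative assumption; the abstract
# states only that a reinforcer's effect is inversely related to its delay.
K = 1.0  # discounting parameter (assumed)

def value(delays, K=K):
    """Summed value of an alternative that delivers one reinforcer at each delay (s)."""
    return sum(1.0 / (1.0 + K * d) for d in delays)

# Red key: three reinforcers at hypothetical delays of 5, 15, and 25 s.
v_red = value([5, 15, 25])

# Green key: a single reinforcer after an adjustable delay. The predicted
# indifference point is the delay at which its value equals the red key's;
# for the hyperbolic form this can be solved directly.
d_indifference = (1.0 / v_red - 1.0) / K
print(f"predicted indifference delay ≈ {d_indifference:.1f} s")  # ≈ 2.7 s here
```

As in the study, the predicted indifference delay shortens as more red-key reinforcers are added or their delays shrink, because the summed value of the multiple-reinforcer option grows.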

3.
Participants chose between reinforcement schedules differing in delay and/or duration of noise offset. In Experiment 1 it was found that (1) immediate reinforcement was preferred to delayed reinforcement when amounts (durations) of reinforcement were equal; (2) a relatively large reinforcer was preferred to a smaller one when both reinforcers were obtained immediately; and (3) preference for an immediate, small reinforcer versus a delayed, large reinforcer increased as the delay preceding the large reinforcer increased, a sign of “impulsivity”. In Experiment 2, the schedules differed in amount or delay, and equal intervals were added either to the constant parameter or to the varied parameter. A shift from virtually exclusive preference to indifference occurred in the latter case but not the former, a result supporting a model of self-control that assumes that the value of a schedule depends on the ratio of amount to delay, and that choice between schedules depends on the ratio of these values.
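A quick numerical check of that value-ratio account, taking the value of a schedule as amount/delay as stated; the specific amounts, delays, and added intervals below are hypothetical.

```python
# Value of a schedule taken as amount/delay; choice is assumed to track the
# ratio of the two values. All numbers are hypothetical.
def value_ratio(a1, d1, a2, d2):
    return (a1 / d1) / (a2 / d2)

# Schedules differing in amount (delay is the constant parameter): adding an
# equal interval c to both delays leaves the value ratio unchanged.
print([round(value_ratio(90, 5 + c, 30, 5 + c), 2) for c in (0, 10, 30)])
# -> [3.0, 3.0, 3.0]  (predicted preference stays near-exclusive)

# Schedules differing in delay (delay is the varied parameter): adding the same
# interval to both delays drives the ratio toward 1.
print([round(value_ratio(60, 5 + c, 60, 15 + c), 2) for c in (0, 10, 30)])
# -> [3.0, 1.67, 1.29]  (predicted shift toward indifference)
```

This reproduces the qualitative pattern the abstract reports: lengthening the varied (delay) parameter pushes the predicted choice toward indifference, while lengthening the constant parameter leaves it untouched.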

4.
Token reinforcement, choice, and self-control in pigeons.
Pigeons were exposed to self-control procedures that involved illumination of light-emitting diodes (LEDs) as a form of token reinforcement. In a discrete-trials arrangement, subjects chose between one and three LEDs; each LED was exchangeable for 2-s access to food during distinct posttrial exchange periods. In Experiment 1, subjects generally preferred the immediate presentation of a single LED over the delayed presentation of three LEDs, but differences in the delay to the exchange period between the two options prevented a clear assessment of the relative influence of LED delay and exchange-period delay as determinants of choice. In Experiment 2, in which delays to the exchange period from either alternative were equal in most conditions, all subjects preferred the delayed three LEDs more often than in Experiment 1. In Experiment 3, subjects preferred the option that resulted in a greater amount of food more often if the choices also produced LEDs than if they did not. In Experiment 4, preference for the delayed three LEDs was obtained when delays to the exchange period were equal, but reversed in favor of an immediate single LED when the latter choice also resulted in quicker access to exchange periods. The overall pattern of results suggests that (a) delay to the exchange period is a more critical determinant of choice than is delay to token presentation; (b) tokens may function as conditioned reinforcers, although their discriminative properties may be responsible for the self-control that occurs under token reinforcer arrangements; and (c) previously reported differences in the self-control choices of humans and pigeons may have resulted at least in part from the procedural conventions of using token reinforcers with human subjects and food reinforcers with pigeon subjects.

5.
The present experiment examined the choices of human subjects as a function of changeover delay (COD) duration. A self-control paradigm was used; subjects chose between larger, more delayed and smaller, less delayed reinforcers. The COD durations were 1 s, 15 s, and 30 s. The results indicated that at the 1-s COD, the subjects distributed their responses approximately equally between the two response alternatives. However, at the 15-s and 30-s COD durations, the subjects tended to demonstrate virtually exclusive preference for the larger, more delayed reinforcer. Furthermore, increasing the COD duration significantly increased the subjects' sensitivity to variation in reinforcer delay. Increasing the COD duration also increased the subjects' sensitivity to reinforcer amount, but this effect was not significant. The results are qualitatively consistent with an interpretation that the subjects followed a strategy that attempted to maximize overall amount of reinforcement.

6.
Three negative reinforcement experiments employing a key-peck response are described. In Experiment I, pigeons shocked on average twice per minute (imposed condition) could produce, by pecking a key, an alternate condition with correlated stimuli. Delayed shocks were added, across sessions, to the alternate condition until pecking stopped. Two of three pigeons continued to peck despite a 100% increase in shock frequency. In Experiment II, pigeons were shocked in the imposed condition four times per minute. The postresponse delay to shock was held constant by delivering, in the alternate condition, the next shock, or the next two, three, or four shocks from the imposed-condition shock schedule. All three subjects continued to peck with no change in delay to the first two postresponse shocks but with a 75% reduction in shock frequency. In Experiment III, a response produced an immediate shock followed by a shock-free period. Three of four subjects continued to respond despite reduced delay to shock. Delay-to-shock or shock-frequency reduction was sufficient to maintain key pecking, but neither was necessary. The conditions that negatively reinforce the pigeon's key peck were similar to conditions that negatively reinforce the rat's bar press.

7.
Participants chose between reinforcement schedules differing in delay of reinforcement (interval between a choice response and onset of a video game) and/or amount of reinforcement (duration of access to the game). Experiment 1 showed that immediate reinforcement was preferred to delayed reinforcement with amount of reinforcement held constant, and a large reinforcer was preferred to a small reinforcer when both were obtainable immediately. Imposing a delay before the large reinforcer produced a preference for the immediate, small reinforcer in 40% of participants. This suggested a limited degree of “impulsivity.” In Experiment 2, unequal delays were extended by equal intervals, the amounts being kept equal. Preference for the shorter delay decreased, an effect that presumably makes possible the “preference reversal” phenomenon in studies of self-control. Overall, the results demonstrate that video game playing can produce useful, systematic data when used as a positive reinforcer for choice behavior in humans.
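The preference-reversal phenomenon mentioned here is usually explained by steep (e.g., hyperbolic) temporal discounting. The sketch below shows how adding a common interval to both delays can flip the predicted choice; the hyperbolic form and all parameter values are assumptions, not taken from this study.

```python
# Hyperbolic discounting, V = A / (1 + k*D); form and parameters are assumed.
k = 0.5                       # discount rate
a_small, a_large = 10, 30     # reinforcer amounts (e.g., seconds of game access)
d_small, d_large = 2, 20      # delays in seconds

def v(amount, delay):
    return amount / (1.0 + k * delay)

for added in (0, 30):         # extend both delays by a common interval
    vs, vl = v(a_small, d_small + added), v(a_large, d_large + added)
    choice = "smaller-sooner" if vs > vl else "larger-later"
    print(f"+{added:>2} s: V_small = {vs:.2f}, V_large = {vl:.2f} -> {choice}")
# With these values preference reverses from smaller-sooner to larger-later.
```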

8.
Self-control is demonstrated when a less desirable immediate outcome is chosen to ensure a substantially better future. In a novel animal analogue of this situation, primary reinforcement was delivered in both the initial and terminal links of a concurrent-chains schedule. Rats made initial-link choices between equal amounts of ethanol-free or ethanol-containing milk. Choosing the ethanol-free reinforcer resulted in delivery of the larger terminal-link reinforcer and was thus analogous to self-control. Self-control decreased as the delay between initial and terminal links increased. The results have implications for human choice situations where decisions are made between two immediately available reinforcement alternatives, each associated with a different delayed outcome.

9.
In five E-maze experiments, rats were given a choice between receiving reward and nonreward in a situation where stimuli were correlated with reward outcome (predictable situation) versus one where the stimuli were uncorrelated with reward outcome (unpredictable situation). Preference for the unpredictable situation occurred under the following conditions: (a) small (one 37-mg pellet), immediate rewards; (b) small, delayed (15 s) rewards, if the cues correlated with reward outcome were absent during the delay interval; (c) large (15 pellets), immediate rewards if a difficult discrimination was required; and (d) if the stimulus predicting nonreward was present at the choice point. Preference for the predictable situation was strongest if reinforcement was delayed and large or the stimulus predicting reward was present at the choice point. A weaker preference for the predictable situation occurred if reinforcement was immediate and large and a simple discrimination was required or if reinforcement was large and delayed and the cues that correlated with reward outcome were absent during the delay interval. The results support the predictions of DMOD (Daly modification of the Rescorla-Wagner model), a mathematical model of appetitive learning (Daly & Daly, 1982).
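DMOD builds on the Rescorla-Wagner learning rule. As a point of reference, the sketch below implements only that core delta rule for a single cue, not DMOD's additional aversive and reward-expectancy components; the learning parameters and trial sequences are hypothetical.

```python
# Core Rescorla-Wagner delta rule for a single cue; DMOD's additional
# components are not modeled here, and the parameters are hypothetical.
def rescorla_wagner(outcomes, alpha=0.3, beta=1.0, lam_reward=1.0):
    """outcomes: sequence of 1 (reward) / 0 (nonreward) trials for one cue."""
    V, history = 0.0, []
    for rewarded in outcomes:
        lam = lam_reward if rewarded else 0.0
        V += alpha * beta * (lam - V)   # prediction-error update
        history.append(V)
    return history

# Cue perfectly correlated with reward (predictable situation) vs. a cue that
# is rewarded on only half of the trials (unpredictable situation).
print(round(rescorla_wagner([1] * 20)[-1], 2))      # approaches 1.0
print(round(rescorla_wagner([1, 0] * 10)[-1], 2))   # oscillates around ~0.5
```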

10.
In a baseline condition, pigeons chose between an alternative that always provided food following a 30-s delay (100% reinforcement) and an alternative that provided food half of the time and blackout half of the time following 30-s delays (50% reinforcement). The different outcomes were signaled by different-colored keylights. On average, each alternative was chosen approximately equally often, replicating the finding of suboptimal choice in probabilistic reinforcement procedures. The efficacy of the delay stimuli (keylights) as conditioned reinforcers was assessed in other conditions by interposing a 5-s gap (keylights darkened) between the choice response and one or more of the delay stimuli. The strength of conditioned reinforcement was measured by the decrease in choice of an alternative when the alternative contained a gap. Preference for the 50% alternative decreased in conditions in which the gap preceded either all delay stimuli, both delay stimuli for the 50% alternative, or the food stimulus for the 50% alternative, but preference was not consistently affected in conditions in which the gap preceded only the 100% delay stimulus or the blackout stimulus for the 50% alternative. These results support the notion that conditioned reinforcement underlies the finding of suboptimal preference in probabilistic reinforcement procedures, and that the signal for food on the 50% reinforcement alternative functions as a stronger conditioned reinforcer than the signal for food on the 100% reinforcement alternative. In addition, the results fail to provide evidence that the signal for blackout functions as a conditioned punisher.

11.
Previous experiments have shown that unsignaled delayed reinforcement decreases response rates and resistance to change. However, the effects of different delays to reinforcement on underlying response structure have not been investigated in conjunction with tests of resistance to change. In the present experiment, pigeons responded on a three-component multiple variable-interval schedule for food presented immediately, following brief (0.5 s), or following long (3 s) unsignaled delays of reinforcement. Baseline response rates were lowest in the component with the longest delay; they were about equal with immediate and briefly delayed reinforcers. Resistance to disruption by presession feeding, response-independent food during the intercomponent interval, and extinction was slightly but consistently lower as delays increased. Because log survivor functions of interresponse times (IRTs) deviated from a simple two-mode structure of bout initiations and within-bout responding, an IRT-cutoff method was used to examine underlying response structure. These analyses suggested that baseline rates of initiating bouts of responding decreased as scheduled delays increased, and within-bout response rates tended to be lower in the component with immediate reinforcers. The number of responses per bout was not reliably affected by reinforcer delay, but tended to be highest with brief delays when total response rates were higher in that component. Consistent with previous findings, resistance to change of overall response rate was highly correlated with resistance to change of bout-initiation rates but not with within-bout responding. These results suggest that unsignaled delays to reinforcement affect resistance to change through changes in the probability of initiating a response bout rather than through changes in the underlying response structure.
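The IRT-cutoff method referred to above can be sketched as follows: inspect the log survivor function of the interresponse times, pick a cutoff, and treat IRTs above the cutoff as bout initiations and those below it as within-bout responding. The cutoff value, data, and derived measures below are illustrative assumptions, not the study's actual analysis parameters.

```python
import numpy as np

# Split IRTs at a cutoff: longer IRTs are treated as bout initiations, shorter
# ones as within-bout responding. The 2-s cutoff is an assumption.
def bout_analysis(irts, cutoff_s=2.0):
    irts = np.asarray(irts, dtype=float)

    # Log survivor function: log proportion of IRTs longer than each value.
    ts = np.sort(irts)
    survivor = 1.0 - np.arange(1, ts.size + 1) / ts.size
    log_survivor = np.log10(np.clip(survivor, 1e-6, None))

    within = irts[irts <= cutoff_s]
    initiations = irts[irts > cutoff_s]
    session_s = irts.sum()
    return {
        "bout_initiation_rate": initiations.size / session_s,           # bouts per s
        "within_bout_rate": 1.0 / within.mean() if within.size else 0.0,
        "responses_per_bout": irts.size / max(initiations.size, 1),
        "log_survivor": list(zip(ts, log_survivor)),
    }

# e.g. a mixture of fast within-bout IRTs and occasional long pauses:
print(bout_analysis([0.4, 0.5, 0.3, 3.5, 0.6, 0.4, 5.0, 0.5])["responses_per_bout"])
```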

12.
The choice between immediate and delayed shock was investigated in three experiments with college students. Some subjects were required to choose between a longer-duration shock immediately and a shorter-duration shock later. Immediate, as opposed to delayed, choices were more frequent when: (a) subjects were required to choose the immediate or the delayed shock, in contrast to other procedures in which subjects were required to choose immediate shock or passively wait for automatic shock to occur; (b) the duration of the immediate shock was reduced; (c) the subject was given prior experience with shock; and (d) the probability of the immediate shock was reduced. Under some circumstances, shock delay and anxiety increased the frequency of immediate choices.

13.
The present study investigated conditions under which the conditioned reinforcement principles of delay-reduction theory and views based on simple maximization of reinforcement rate make ordinally opposing predictions with respect to foraging-related choice behavior. The use of variable-ratio schedules in the choice phase also represents an extension of delay-reduction theory to schedules that may better mimic the effort involved in searching. Pigeons responded on modified concurrent-chains schedules in which equal variable-ratio schedules led to unequal variable-interval outcomes and unequal reinforcer amounts. All 4 subjects completed a minimum of two replications of conditions for which the predictions of delay-reduction theory and a simple rate-maximizing theory were opposed. Results were consistent with delay reduction's ordinal predictions in all 11 replications in which the divergent predictions favored the smaller, more immediate alternative. The predictions of rate maximization were upheld only when they were consistent with those of delay reduction. Results are discussed in terms of conditioned reinforcement, sensitivity to reductions in delay to food, and possible rules of thumb that may be useful in characterizing foraging.
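One way the two accounts can diverge is sketched below: delay-reduction theory values an option by the reduction in expected time to food its terminal-link stimulus signals (T − t), while simple rate maximization ranks options by amount per unit of total time. All schedule values here are hypothetical and the calculation is deliberately simplified.

```python
# Terminal-link delays (s) and reinforcer amounts for the two options, plus an
# assumed average time to complete the equal variable-ratio choice phase.
t_small, a_small = 5.0, 2.0      # shorter delay, smaller amount
t_large, a_large = 25.0, 6.0     # longer delay, larger amount
search = 15.0                    # choice-phase time (assumed)

# Average overall time to food from the start of a cycle (simplified).
T = search + (t_small + t_large) / 2.0

# Delay-reduction theory: conditioned reinforcing strength of entering a
# terminal link is the reduction in expected time to food it signals.
drt_small, drt_large = T - t_small, T - t_large

# Simple rate maximization: amount of food per unit of total time.
rate_small, rate_large = a_small / (search + t_small), a_large / (search + t_large)

print("DRT favors: ", "smaller, more immediate" if drt_small > drt_large else "larger, delayed")
print("Rate favors:", "smaller, more immediate" if rate_small > rate_large else "larger, delayed")
# With these values the two accounts make opposite ordinal predictions.
```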

14.
In a two-key concurrent variable-interval schedule (using pigeons), if the reinforcement frequency for one response is held constant while that for the other is increased, the rate of response on the constant key decreases. The immediate reinforcement for key pecking can usually be conceptualized as the change from a condition in which the key light is on and the food hopper light is off to one in which the key light is off and the hopper light is on. The prechange condition is associated with a delay to food of one-half the average interreinforcement interval in effect during this condition. The postchange condition is associated with a delay to food of about 0.5 seconds. The programming of additional reinforcement results in a decrease in the delay to food associated with the prechange stimulus condition, and thus a decrease in the value of the improvement that results from the change. This would appear to be analogous to a decrease in the amount of reinforcement, and thus sufficient explanation for the decrease in the rate of the response.
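A small worked example of that argument, with hypothetical reinforcement rates: raising the overall reinforcement rate shortens the delay to food signaled by the prechange (keylight-on) condition and therefore shrinks the improvement produced by the change to the hopper-light condition.

```python
# Prechange (keylight-on) delay to food ~ half the average interreinforcement
# interval; postchange (hopper-light) delay ~ 0.5 s, as described above.
# Reinforcement rates are hypothetical.
POSTCHANGE_DELAY_S = 0.5

def prechange_delay_s(reinforcers_per_hour):
    return (3600.0 / reinforcers_per_hour) / 2.0

# Adding reinforcement on the other key raises the overall rate, which shortens
# the prechange delay and shrinks the improvement produced by the change.
for total_rate in (30, 60, 120):  # total reinforcers per hour across both keys
    pre = prechange_delay_s(total_rate)
    print(f"{total_rate:>3}/hr: prechange delay = {pre:5.1f} s, "
          f"improvement = {pre - POSTCHANGE_DELAY_S:5.1f} s")
```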

15.
Pigeons chose between two alternatives that differed in the probability of reinforcement and the delay to reinforcement. A peck on the red key always produced a delay of 5 s and then a possible reinforcer. The probability of reinforcement for responding on this key varied from 0.05 to 1.0 in different conditions. A response on the green key produced a delay of adjustable duration and then a possible reinforcer, with the probability of reinforcement ranging from 0.25 to 1.0 in different conditions. The green-key delay was increased or decreased many times per session, depending on a subject's previous choices. The purpose of these adjustments was to estimate an indifference point, or a delay that resulted in a subject's choosing each alternative about equally often. In conditions where the probability of reinforcement was five times higher on the green key, the green-key delay averaged about 12 s at the indifference point. In conditions where the probability of reinforcement was twice as high on the green key, the green-key delay at the indifference point was about 8 s with high probabilities and about 6 s with low probabilities. An analysis based on these results and those from studies on delay of reinforcement suggests that pigeons' choices are relatively insensitive to variations in the probability of reinforcement between 0.2 and 1.0, but quite sensitive to variations in probability between 0.2 and 0.

16.
Adolescents with attention-deficit/hyperactivity disorder (ADHD) are known to have stronger preferences for smaller immediate rewards over larger delayed rewards in delay discounting tasks than their peers, which has been argued to reflect delay aversion. Here, participants performed a delay discounting task with gains and losses. In the loss condition, participants were asked whether they were willing to wait in order to lose less money. Following the core assumption of the delay aversion model that individuals with ADHD have a general aversion to delay, one would predict adolescents with ADHD to avoid waiting in both conditions. Adolescents (12–17 years) with ADHD (n = 29) and controls (n = 28) made choices between smaller immediate and larger delayed gains, and between larger immediate and smaller delayed losses. All delays (5–25 s) and gains/losses (2–10 cents) were experienced. In addition to an area-under-the-curve approach, a mixed-model analysis was conducted to disentangle the contributions of delay duration and immediate gain/delayed loss amount to choice. The ADHD group chose the immediate option more often than controls in the gain condition, but not in the loss condition. The contribution of delay duration to immediate choices was stronger for the ADHD group than the control group in the gain condition only. In addition, the ADHD group scored higher on self-reported delay aversion, and delay aversion was associated with delay sensitivity in the gain condition, but not in the loss condition. In sum, we found no clear evidence for a general aversion to delay in adolescents with ADHD.
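For reference, the area-under-the-curve (AUC) measure mentioned above is typically computed by normalizing delays and indifference values to [0, 1] and summing trapezoid areas; smaller AUC means steeper discounting. The data points in the sketch are hypothetical, not values from this study.

```python
import numpy as np

# Normalize delays and indifference values to [0, 1] and sum trapezoid areas;
# smaller AUC means steeper discounting. Data points are hypothetical.
def discounting_auc(delays_s, indifference_values, max_amount):
    x = np.array([0.0] + list(delays_s)) / max(delays_s)
    y = np.array([float(max_amount)] + list(indifference_values)) / max_amount
    return float(np.trapz(y, x))

# e.g. delays of 5-25 s and the immediate amounts (cents) judged equivalent to
# a delayed 10-cent gain:
print(discounting_auc([5, 10, 15, 20, 25], [8, 6, 5, 4, 3], max_amount=10))  # 0.59
```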

17.
The present research compared choices among students with higher or lower grades for rewards that were devalued by imposing a delay to their receipt (Study 1) or by requiring more work for a larger reward (Study 2). In Study 1, students chose between hypothetical and noncontingent smaller immediate or larger delayed monetary rewards. In Study 2, students chose from among different amounts of real, response-contingent academic rewards (extra credit) that required different amounts of work. The results of both studies were similar: The highest-scoring students discounted the value of the delayed money less than did their lower-scoring counterparts, and the highest-scoring students also chose to do and actually did more extra-credit work than lower-scoring students did. Differences in the discounting of devalued rewards might represent a fundamental difference between the highest- and lower-scoring students.

18.
Pigeons chose between an immediate 2-second reinforcer (access to grain) and a 6-second reinforcer delayed 6 seconds. The four pigeons in the control group were exposed to this condition initially. The four experimental subjects first received a condition where both reinforcers were delayed 6 seconds. The small-reinforcer delay was then gradually reduced to zero over more than 11,000 trials. Control subjects almost never chose the large delayed reinforcer. Experimental subjects chose the large delayed reinforcer significantly more often. Two experimental subjects showed preference for the large reinforcer even when the consequences for pecking the two keys were switched. The results indicate that fading procedures can lead to increased “self-control” in pigeons in a choice between a large delayed reinforcer and a small immediate reinforcer.

19.
Three groups of rats ran 108 trials in a straight runway, one trial every 3 days. On the first 44 trials, one group received continuous (and immediate) reinforcement (CRF), a second group 50 per cent partial reinforcement (PRF), and the third group a 50 per cent schedule of partial delay of reinforcement (PDR). All groups received CRF on the next 20 trials, and extinction on the last 44 trials. The PRF and PDR groups extinguished at approximately the same rate, and significantly more slowly than the CRF group.

20.
According to theoretical accounts of behavioral momentum, the Pavlovian stimulus-reinforcer contingency determines resistance to change. To assess this prediction, 8 pigeons were exposed to an unsignaled delay-of-reinforcement schedule (a tandem variable-interval fixed-time schedule), a signaled delay-of-reinforcement schedule (a chain variable-interval fixed-time schedule), and an immediate, zero-delay schedule of reinforcement in a three-component multiple schedule. The unsignaled delay and signaled delay schedules employed equal fixed-time delays, with the only difference being a stimulus change in the signaled delay schedule. Overall rates of reinforcement were equated for the three schedules. The Pavlovian contingency was identical for the unsignaled and immediate schedules, and response-reinforcer contiguity was degraded for the unsignaled schedule. Results from two disruption procedures (prefeeding subjects prior to experimental sessions and adding a variable-time schedule to timeout periods separating baseline components) demonstrated that response-reinforcer contiguity does play a role in determining resistance to change. The results from the extinction manipulation were not as clear. Responding in the unsignaled delay component was consistently less resistant to change than was responding in both the immediate and presignaled segments of the signaled delay components, contrary to the view that Pavlovian contingencies determine resistance to change. Probe tests further supported the resistance-to-change results, indicating consistency between resistance to change and preference, both of which are putative measures of response strength.
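Resistance to change in this literature is commonly expressed as responding during disruption as a log proportion of baseline; the sketch below computes that measure for three hypothetical components (the response rates are illustrative, not the study's data).

```python
import math

# Responding during disruption as a log proportion of baseline, log10(Bx/Bo);
# larger (less negative) values indicate greater resistance to change. The
# response rates below are hypothetical, not the study's data.
def resistance_to_change(baseline_rate, disrupted_rate):
    return math.log10(disrupted_rate / baseline_rate)

components = {
    "immediate":        (60.0, 45.0),   # (baseline, during disruption), resp/min
    "signaled delay":   (40.0, 32.0),
    "unsignaled delay": (25.0, 12.0),
}
for name, (bo, bx) in components.items():
    print(f"{name:>16}: log(Bx/Bo) = {resistance_to_change(bo, bx):+.2f}")
```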
